Utilizing Multi-modal Bio-sensing Toward Affective Computing in Real-world Scenarios
- Author(s): Siddharth, Siddharth
- Advisor(s): Trivedi, Mohan; Jung, Tzyy-Ping; et al.
Recognition and continuous monitoring of human emotions is a key problem spanning multiple research disciplines, including electrical engineering, computer science, neuroscience, and cognitive science. Instead of the widely used computer-vision approach of tracking a user's facial expressions, we take a multi-modal approach that utilizes various bio-sensing modalities in addition to computer vision to assess and classify human affect. In the process, we address various hardware and software limitations of such bio-sensing modalities. We evaluate the developed framework on real-world applications ranging from watching emotional multimedia content (such as videos) to driving an automobile. Our holistic framework contributes toward (1) the development of a novel multi-modal bio-sensing headset capable of recording, monitoring, and tracking various bio-signals in real time; (2) the development of algorithms that apply signal processing and deep learning to various bio-sensing and vision-based modalities for emotion recognition; (3) the application of these algorithms to emotion classification, the study of emotion elicitation, and the monitoring of driver awareness; and (4) a performance comparison of these sensor modalities, used individually and in fusion, across the above applications. Although this multi-modal sensor platform is evaluated in this dissertation only on affective computing applications, it is generalizable; thus, the platform can serve a wide range of applications in human-computer interaction that require the use of bio-sensing modalities.