Affiliations:
1. Department of Electro-Optics and Photonics Engineering, School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
2. School of Psychology, Tel Aviv University, Tel Aviv 39040, Israel
Abstract
Most human emotion recognition methods depend on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden or cannot be expressed, or have lower arousal that manifests in less pronounced facial expressions, as may occur during passive video viewing. This study improves on an emotion classification approach developed in a previous study, which classifies emotions remotely from short facial video data without relying on stereotypical facial expressions or contact-based methods. In this approach, we remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze these data via machine learning. In this paper, we propose several improvements: better remote heart rate estimation via preliminary skin segmentation, an improved heartbeat peak and trough detection process, and higher emotion classification accuracy achieved by employing an appropriate deep learning classifier on input from an RGB camera only. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos eliciting five emotion types (amusement, disgust, fear, sexual arousal, and no emotion) while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and long-wave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution, physiologically affected spatiotemporal features and evaluated them as input features for different deep learning approaches. An EfficientNet-B0 model classified participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.
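The abstract does not include code; the following is a minimal sketch of the pipeline it describes (skin-mask-guided remote pulse extraction, heartbeat peak and trough detection, and a five-class EfficientNet-B0 classifier over a spatiotemporal feature map). The RGB-threshold skin rule, the green-channel rPPG proxy, the frame rate, the 3-channel 224x224 input shape, and all function names are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the abstract's pipeline (assumed details, not the
    # authors' method): skin segmentation -> rPPG signal -> peak/trough
    # detection, plus a five-class EfficientNet-B0 head for classification.
    import numpy as np
    import torch
    import torchvision
    from scipy.signal import find_peaks

    def skin_mask(frame_rgb):
        # Crude RGB-threshold skin segmentation (an assumption; the paper's
        # segmentation method may differ).
        r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
        return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

    def rppg_signal(frames_rgb):
        # Mean green-channel intensity over skin pixels per frame,
        # a common rPPG proxy.
        sig = []
        for f in frames_rgb:
            m = skin_mask(f)
            sig.append(f[..., 1][m].mean() if m.any() else 0.0)
        return np.asarray(sig)

    def peaks_and_troughs(sig, fps=30, max_hr_bpm=180):
        # Detect heartbeat peaks and troughs, enforcing a physiologically
        # plausible minimum spacing between consecutive beats.
        min_dist = max(1, int(fps * 60 / max_hr_bpm))
        peaks, _ = find_peaks(sig, distance=min_dist)
        troughs, _ = find_peaks(-sig, distance=min_dist)
        return peaks, troughs

    # Five-class EfficientNet-B0 over a single spatiotemporal feature map,
    # treated here as a 3-channel image-like tensor (an assumption).
    model = torchvision.models.efficientnet_b0(weights=None)
    model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 5)

    feature_map = torch.randn(1, 3, 224, 224)  # placeholder input
    logits = model(feature_map)                # scores for the 5 emotion classes

The minimum peak distance ties detection to heart rate physiology (for example, 180 bpm at 30 fps allows at most one beat per 10 frames), which is one common way to suppress spurious local maxima in a noisy rPPG trace.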