Authors:
Paluzo-Hidalgo Eduardo, Gonzalez-Diaz Rocio, Aguirre-Carrazana Guillermo
Abstract
<abstract><p>The automatic recognition of a person's emotional state has become a very active research field involving scientists from diverse areas such as artificial intelligence, computer vision, and psychology, among others. Our main objective in this work is to develop a novel approach, using persistent entropy and neural networks as its main tools, to recognise and classify emotions from talking-face videos. Specifically, we combine audio-signal and image-sequence information to compute a <italic>topology signature</italic> (a 9-dimensional vector) for each video. We prove that small changes in the video produce small changes in the signature, ensuring the stability of the method. These topological signatures are then used to feed a neural network that distinguishes between the following emotions: calm, happy, sad, angry, fearful, disgust, and surprised. The results are promising and competitive, surpassing the performance reported in other state-of-the-art works in the literature.</p></abstract>
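The persistent entropy mentioned in the abstract is, in general, the Shannon entropy of the normalized bar lengths of a persistence barcode. The sketch below illustrates that standard definition on a hypothetical toy barcode; it is not the paper's actual pipeline, which builds a 9-dimensional signature from audio and image data.

```python
import math

def persistent_entropy(intervals):
    """Persistent entropy of a persistence barcode.

    Each interval is a (birth, death) pair with finite death.
    The entropy is -sum(p_i * log(p_i)), where p_i is the
    length of bar i divided by the total length of all bars.
    """
    lengths = [death - birth for birth, death in intervals]
    total = sum(lengths)
    probs = [length / total for length in lengths]
    return -sum(p * math.log(p) for p in probs)

# Hypothetical barcode: three bars with different lifetimes.
barcode = [(0.0, 1.0), (0.2, 0.8), (0.5, 0.7)]
print(round(persistent_entropy(barcode), 4))  # ≈ 0.9369
```

Longer-lived bars dominate the distribution, so a barcode with one long bar and much short-lived noise yields low entropy, while evenly long bars yield entropy close to log(n); this summary-statistic behaviour is what makes it usable as a compact, stable feature for a classifier.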
Publisher
American Institute of Mathematical Sciences (AIMS)