Abstract
Facial expression recognition (FER) is an active research topic in computer vision, especially as deep learning-based methods gain traction in the field. However, traditional convolutional neural networks (CNNs) ignore the relative positional relationships among key facial features (mouth, eyebrows, eyes, etc.) when expressions are altered by real-world conditions such as rotation, displacement, or partial occlusion. In addition, most works in the literature do not take visual tempo into account when recognizing facial expressions that are highly similar to one another. To address these issues, we propose a visual-tempo 3D-CapsNet framework (VT-3DCapsNet). First, we propose the 3D-CapsNet model for emotion recognition, in which an improved 3D-ResNet architecture integrated with an AU-perceived attention module enhances the feature representation ability of the capsule network by expressing deeper hierarchical spatiotemporal features and extracting latent information (position, size, orientation) from key facial regions. Furthermore, we propose the temporal pyramid network (TPN)-based expression recognition module (TPN-ERM), which learns high-level facial motion features from video frames to model differences in visual tempo, further improving the recognition accuracy of 3D-CapsNet. Extensive experiments are conducted on the Extended Cohn-Kanade (CK+) database and the Acted Facial Expressions in the Wild (AFEW) database. The results demonstrate that our approach achieves competitive performance compared with other state-of-the-art methods.
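For readers who want a concrete picture of how such a pipeline fits together, the sketch below is a minimal, illustrative PyTorch composition and not the authors' implementation: module names such as Backbone3D, CapsuleHead, and VT3DCapsNetSketch, and all layer sizes, are assumptions; the AU-perceived attention is reduced to a simple spatial gate, dynamic routing is replaced by a mean over capsule votes, and the TPN is approximated by two branches sampled at different frame rates.

# Minimal sketch (not the authors' code): a VT-3DCapsNet-style pipeline with
# hypothetical module and parameter names. It combines a small 3D-CNN backbone,
# a simple spatial attention gate (standing in for the AU-perceived attention),
# a capsule head with a squash activation, and two temporal branches sampled at
# different frame rates to imitate TPN-style visual-tempo modeling.
import torch
import torch.nn as nn


def squash(s, dim=-1, eps=1e-8):
    # Capsule squash non-linearity: keeps vector orientation, bounds its length.
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


class Backbone3D(nn.Module):
    # Tiny 3D-CNN stand-in for the improved 3D-ResNet backbone.
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, 32, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.Conv3d(32, feat_ch, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.BatchNorm3d(feat_ch), nn.ReLU(inplace=True),
        )
        # Simple spatial attention gate (placeholder for AU-perceived attention).
        self.att = nn.Conv3d(feat_ch, 1, kernel_size=1)

    def forward(self, x):                       # x: (B, C, T, H, W)
        f = self.conv(x)
        a = torch.sigmoid(self.att(f))          # (B, 1, T', H', W')
        return f * a                            # attention-weighted features


class CapsuleHead(nn.Module):
    # Primary capsules + class capsules; class score = capsule vector length.
    def __init__(self, feat_ch=64, caps_dim=8, num_classes=7, class_dim=16):
        super().__init__()
        self.caps_dim = caps_dim
        self.num_classes, self.class_dim = num_classes, class_dim
        self.primary = nn.Conv3d(feat_ch, caps_dim * 8, kernel_size=3, stride=2, padding=1)
        self.to_class = nn.Linear(caps_dim, num_classes * class_dim)

    def forward(self, f):                       # f: (B, feat_ch, T', H', W')
        p = self.primary(f)
        B = p.shape[0]
        p = squash(p.view(B, -1, self.caps_dim))            # (B, N_caps, caps_dim)
        votes = self.to_class(p).view(B, -1, self.num_classes, self.class_dim)
        cls = squash(votes.mean(dim=1))                      # mean replaces dynamic routing
        return cls.norm(dim=-1)                              # (B, num_classes)


class VT3DCapsNetSketch(nn.Module):
    # Two temporal branches (full rate and half rate) imitate visual-tempo modeling.
    def __init__(self, num_classes=7):
        super().__init__()
        self.backbone = Backbone3D()
        self.caps = CapsuleHead(num_classes=num_classes)

    def forward(self, clip):                    # clip: (B, 3, T, H, W)
        fast = self.caps(self.backbone(clip))               # full frame rate
        slow = self.caps(self.backbone(clip[:, :, ::2]))    # half frame rate (slower tempo)
        return (fast + slow) / 2                             # fuse tempo branches


if __name__ == "__main__":
    model = VT3DCapsNetSketch(num_classes=7)
    scores = model(torch.randn(2, 3, 16, 64, 64))   # 2 clips, 16 frames, 64x64
    print(scores.shape)                             # torch.Size([2, 7])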
Funder
National Key Technologies Research and Development Program of China
National Social Science Foundation of China
Publisher
Public Library of Science (PLoS)