AM3F-FlowNet: Attention-Based Multi-Scale Multi-Branch Flow Network
Author:
Fu Chenghao 1, Yang Wenzhong 1,2, Chen Danny 1, Wei Fuyuan 1
Affiliation:
1. School of Information Science and Engineering, Xinjiang University, Urumqi 830017, China
2. Xinjiang Key Laboratory of Multilingual Information Technology, Xinjiang University, Urumqi 830017, China
Abstract
Micro-expressions are small, brief facial expression changes that humans show momentarily during emotional experiences, and annotating them is difficult, which leads to a scarcity of micro-expression data. To extract salient and discriminative features from a limited dataset, we propose an attention-based multi-scale, multi-modal, multi-branch flow network that thoroughly learns micro-expression motion information by exploiting attention mechanisms and the complementary properties of different optical flow components. First, we extract optical flow information (horizontal optical flow, vertical optical flow, and optical strain) from the onset and apex frames of micro-expression videos, and each branch learns one kind of optical flow information separately. Second, we propose a multi-scale fusion module that uses spatial attention to focus on locally important information at each scale, extracting richer and more stable feature representations. Then, we design a multi-optical-flow feature reweighting module that uses channel attention to adaptively select features for each optical flow component. Finally, to better integrate the information of the three branches and to alleviate the uneven distribution of micro-expression samples, we introduce a logarithmically adjusted prior knowledge weighting loss. This loss function weights the prediction scores of samples from different categories to mitigate the negative impact of class imbalance during classification. The effectiveness of the proposed model is demonstrated through extensive experiments and feature visualization on three benchmark datasets (CASME II, SAMM, and SMIC), and its performance is comparable to that of state-of-the-art methods.
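To illustrate the idea behind a logarithmically adjusted prior weighting loss, the sketch below folds log class priors into the logits before cross-entropy, so that rare micro-expression classes are penalized less during classification. This is a minimal, hypothetical PyTorch illustration, not the paper's implementation; the names `LogitAdjustedLoss`, `class_counts`, and `tau` are assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LogitAdjustedLoss(nn.Module):
    """Cross-entropy with logits shifted by the log of class priors.

    Hypothetical sketch: `class_counts` and `tau` are illustrative
    names, not taken from the paper.
    """

    def __init__(self, class_counts, tau=1.0):
        super().__init__()
        priors = torch.as_tensor(class_counts, dtype=torch.float)
        priors = priors / priors.sum()
        # Log-prior offset added to each class logit; frequent classes
        # receive a larger offset and thus need a bigger margin to win.
        self.register_buffer("log_priors", tau * torch.log(priors))

    def forward(self, logits, targets):
        adjusted = logits + self.log_priors
        return F.cross_entropy(adjusted, targets)


# Usage: a three-class micro-expression setting with imbalanced counts.
if __name__ == "__main__":
    criterion = LogitAdjustedLoss(class_counts=[32, 25, 99])
    logits = torch.randn(8, 3)          # e.g., fused scores from three branches
    labels = torch.randint(0, 3, (8,))
    loss = criterion(logits, labels)
    print(loss.item())
```

In this formulation the per-class weighting is fixed by the observed sample counts rather than learned, which is one common way to encode prior knowledge about class imbalance.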
Funder
Natural Science Foundation of China; Autonomous Region Science and Technology Program
Subject
General Physics and Astronomy