Authors:
Jajan Khalid Ibrahim Khalaf, Prof. Dr. Eng. Adnan Mohsin Abdulazeez
Abstract
This review paper provides a comprehensive analysis of recent advancements in Facial Expression Recognition (FER) through various deep learning models. Seven state-of-the-art models are scrutinized, each offering unique contributions to the field. The MBCC-CNN model demonstrates improved recognition rates on diverse datasets, addressing the challenges of facial expression recognition through multiple branches and cross-connected convolutional neural networks. The Deep Graph Fusion model introduces a novel approach for predicting viewer expressions from videos, showcasing superior performance on the EEV database. Multimodal emotion recognition is explored in a model that fuses EEG signals with facial expressions, achieving high accuracy on the DEAP dataset. The Spark-based LDSP-TOP descriptor, coupled with a 1-D CNN and an LSTM autoencoder, excels at capturing temporal dynamics for facial expression understanding. Vision transformers for micro-expression recognition exhibit outstanding accuracy on datasets such as CASME I, CASME II, and SAMM. Additionally, a hierarchical deep learning model is proposed for evaluating teaching states based on facial expressions. Lastly, a vision transformer model achieves a remarkable recognition accuracy of 100% on the SAMM dataset, showcasing the potential of combining convolutional and transformer architectures. This review synthesizes key findings, highlights model performances, and outlines directions for future research in FER.
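To illustrate the hybrid convolutional-plus-transformer direction the abstract highlights, the sketch below pairs a small CNN stem with a transformer encoder for expression classification. It is a minimal sketch, not any of the reviewed authors' implementations: the 7-class label set, the 48x48 grayscale input, the layer sizes, and the use of PyTorch are all assumptions made for the example.

    # Minimal illustrative sketch (assumed, not from the reviewed papers):
    # a hybrid CNN + transformer classifier for facial expression recognition.
    import torch
    import torch.nn as nn

    class CNNTransformerFER(nn.Module):
        def __init__(self, num_classes=7, embed_dim=128, num_heads=4, num_layers=2):
            super().__init__()
            # Convolutional stem extracts local facial features.
            self.stem = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                nn.MaxPool2d(2),                       # 48x48 -> 24x24
                nn.Conv2d(32, embed_dim, 3, padding=1), nn.BatchNorm2d(embed_dim), nn.ReLU(),
                nn.MaxPool2d(2),                       # 24x24 -> 12x12
            )
            # Transformer encoder models global relations between feature patches.
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=embed_dim, nhead=num_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
            self.head = nn.Linear(embed_dim, num_classes)

        def forward(self, x):
            feats = self.stem(x)                       # (B, C, H, W)
            tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, C) patch sequence
            encoded = self.encoder(tokens)
            return self.head(encoded.mean(dim=1))      # mean-pool tokens, classify

    if __name__ == "__main__":
        model = CNNTransformerFER()
        dummy = torch.randn(4, 1, 48, 48)              # batch of 4 grayscale face crops
        print(model(dummy).shape)                      # torch.Size([4, 7])

The design mirrors the general idea the abstract attributes to such hybrids: convolutions capture local texture around facial landmarks, while self-attention relates distant facial regions before classification.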
Cited by
1 article.
1. Exploring the Interplay between Facial Expression Recognition and Physical States. Proceedings of the XXIV International Conference on Human Computer Interaction, 2024-06-19.