Affiliation:
1. Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
2. Department of Computer Engineering and Information, College of Engineering, Wadi Ad Dwaser, Prince Sattam Bin Abdulaziz University, Al-Kharj 16273, Saudi Arabia
3. Information Technology Department, Faculty of Technological Industry and Energy, Delta Technological University, Quesna 32631, Egypt
Abstract
The expressions on human faces reveal the emotions we experience internally. Emotion recognition from facial expressions is a subfield of social signal processing with applications in many areas, particularly human-computer interaction. This study presents a simple automated CCNN-SVM model as a viable approach to facial expression recognition (FER). The model combines image preprocessing techniques, a custom Convolutional Neural Network (CCNN) for feature extraction, and a Support Vector Machine (SVM) for classification. First, the input image is preprocessed using face detection, histogram equalization, gamma correction, and resizing. Second, the preprocessed image passes through a single custom deep Convolutional Neural Network (CCNN) to extract deep features. Finally, the SVM uses the extracted features to perform the classification. The proposed model was trained and tested on four datasets: CK+, JAFFE, KDEF, and FER. These datasets cover seven primary emotion categories, namely anger, disgust, fear, happiness, sadness, surprise, and neutrality for CK+, with contempt included as an additional category for JAFFE. The proposed model performs well in comparison with existing facial expression recognition techniques, achieving an accuracy of 99.3% on the CK+ dataset, 98.4% on JAFFE, 87.18% on KDEF, and 88.7% on FER.
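The pipeline described above (preprocessing, CNN feature extraction, SVM classification) can be outlined as in the following minimal sketch. It assumes OpenCV, TensorFlow/Keras, and scikit-learn; the layer sizes, the 48x48 input resolution, the gamma value, and the SVM kernel are illustrative choices rather than the exact configuration reported in the paper, and in the full model the CNN would first be trained on the emotion labels before its penultimate layer is reused as a feature extractor.

import cv2
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.svm import SVC

def preprocess(image_bgr, gamma=1.5, size=(48, 48)):
    """Face detection, histogram equalization, gamma correction, and resizing."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:                        # crop the first detected face
        x, y, w, h = faces[0]
        gray = gray[y:y + h, x:x + w]
    gray = cv2.equalizeHist(gray)             # histogram equalization
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype("uint8")
    gray = cv2.LUT(gray, table)               # gamma correction
    return cv2.resize(gray, size) / 255.0     # resize and scale to [0, 1]

def build_feature_extractor(input_shape=(48, 48, 1), feature_dim=256):
    """Custom CNN whose final dense layer provides deep features for the SVM."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(feature_dim, activation="relu"),
    ])

def train(X_train, y_train):
    """Extract deep features from preprocessed faces, then fit an SVM on them.
    X_train: list of BGR face images; y_train: integer emotion labels."""
    cnn = build_feature_extractor()
    faces = np.stack([preprocess(img) for img in X_train])[..., np.newaxis]
    features = cnn.predict(faces)
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(features, y_train)
    return cnn, clf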
Funder
Deanship of Scientific Research, Prince Sattam bin Abdulaziz University