Affiliation:
1. School of Software, Hunan Vocational College of Science and Technology, Changsha, China
2. School of Computer Science and Engineering, Central South University, Changsha, China
Abstract
Recent advances in Semantic IoT data integration have highlighted the importance of multimodal fusion in emotion recognition systems. Human emotions, shaped by innate disposition as well as learning and communication, are often revealed through speech and facial expressions. This study therefore proposes a hidden Markov model (HMM)-based multimodal fusion emotion detection system that combines speech recognition with facial expression recognition to raise the emotion recognition rate. Integrating such emotion recognition systems with Semantic IoT data can offer unprecedented insights into human behavior and sentiment analysis, contributing to the advancement of data integration techniques in the context of the Internet of Things. Experimental findings indicate that single-modal emotion detection achieves a 76% recognition rate from speech and 78% from facial expressions, whereas applying state-information fusion raises the recognition rate to 95%, exceeding the single-modal speech and facial-expression rates by 19 and 17 percentage points, respectively. This demonstrates the effectiveness of multimodal fusion in emotion recognition, yielding higher recognition rates and a reduced workload compared with single-modal approaches.
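The abstract does not spell out the fusion rule, so the following is a minimal, hypothetical sketch of one common way HMM outputs from two modalities are combined: score-level (decision-level) fusion, in which each emotion class has one HMM per modality and the per-class log-likelihoods are merged by a weighted sum before classification. All names, weights, and numeric values below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of score-level fusion of per-class HMM outputs.
# Assumes each modality has one trained HMM per emotion class that
# returns the log-likelihood of an observation sequence; the emotion
# set, weights, and example scores are illustrative only.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse_and_classify(speech_loglik, face_loglik, w_speech=0.5, w_face=0.5):
    """Weighted log-likelihood fusion across the two modalities.

    speech_loglik, face_loglik: arrays of shape (n_classes,), each entry
    the log-likelihood of the observed sequence under that class's HMM.
    Returns the predicted emotion label and the fused scores.
    """
    fused = (w_speech * np.asarray(speech_loglik)
             + w_face * np.asarray(face_loglik))
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: per-class log-likelihoods from the two single-modal recognizers.
speech_ll = np.array([-120.3, -118.9, -125.1, -119.5])  # speech HMMs
face_ll = np.array([-85.2, -90.7, -88.4, -84.9])        # facial-expression HMMs
label, scores = fuse_and_classify(speech_ll, face_ll)
print(label, scores)
```

In a sketch like this, the gain reported in the paper would come from the two modalities disagreeing on different inputs: the fused score can pick the correct class even when one single-modal recognizer alone would not.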