Author:
Chen Shouyan, Zhang Mingyan, Yang Xiaofen, Zhao Zhijia, Zou Tao, Sun Xinqi
Abstract
Speech emotion recognition (SER) plays an important role in real-time applications of human-machine interaction. Attention mechanisms are widely used to improve the performance of SER, but their applicable rules have not been discussed in depth. This paper discusses the difference between Global-Attention and Self-Attention and explores their applicable rules for constructing SER classifiers. The experimental results show that, when building models from CNN and LSTM components, Global-Attention improves the accuracy of the sequential model, while Self-Attention improves the accuracy of the parallel model. Based on this finding, a classifier (CNN-LSTM×2 + Global-Attention model) for SER is proposed, which achieves an accuracy of 85.427% on the EMO-DB dataset.
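To make the described architecture concrete, the following is a minimal sketch of a CNN followed by a two-layer LSTM with global-attention pooling over time, in the spirit of the proposed CNN-LSTM×2 + Global-Attention classifier. The abstract does not give layer sizes, input features, or the attention scoring function, so the MFCC input, channel widths, hidden dimension, and single-query scoring below are assumptions for illustration; the seven output classes correspond to the EMO-DB emotion categories.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionPooling(nn.Module):
    """Scores every time step with one learned query and returns a
    weighted sum of the LSTM outputs (global attention over the sequence)."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.query = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, h):                      # h: (batch, time, hidden)
        scores = self.query(h).squeeze(-1)     # (batch, time)
        weights = F.softmax(scores, dim=-1)    # attention weights over time
        return torch.bmm(weights.unsqueeze(1), h).squeeze(1)  # (batch, hidden)

class CnnLstmGlobalAttention(nn.Module):
    """Hypothetical CNN -> 2-layer LSTM -> global attention -> classifier;
    layer sizes and MFCC input are assumptions, not the paper's exact setup."""
    def __init__(self, n_mfcc=40, n_classes=7, hidden_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden_dim, num_layers=2, batch_first=True)
        self.attention = GlobalAttentionPooling(hidden_dim)
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):                      # x: (batch, n_mfcc, time)
        feats = self.cnn(x)                    # (batch, 64, time/2)
        feats = feats.transpose(1, 2)          # (batch, time/2, 64)
        h, _ = self.lstm(feats)                # (batch, time/2, hidden)
        context = self.attention(h)            # (batch, hidden)
        return self.classifier(context)        # emotion logits

# Example: a batch of 8 utterances, 40 MFCC coefficients x 300 frames each.
model = CnnLstmGlobalAttention()
logits = model(torch.randn(8, 40, 300))        # -> shape (8, 7)
```

In this sequential arrangement the attention layer pools the LSTM outputs globally across time before classification; a Self-Attention variant would instead relate time steps to one another, which the paper reports to be more beneficial for the parallel CNN/LSTM model.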
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
Cited by: 21 articles.