Abstract
Speech emotion analysis is one of the most basic requirements for the evolution of Artificial Intelligence (AI) in human–machine interaction. Accurate emotion recognition in speech can be effective in applications such as online support, lie detection systems, and customer feedback analysis. However, existing techniques in this field remain insufficiently developed. This paper presents a new method to improve the performance of emotion analysis in speech. The proposed method includes the following steps: pre-processing, feature description, feature extraction, and classification. The initial description of speech features is obtained by combining spectro-temporal modulation (STM) and entropy features. A Convolutional Neural Network (CNN) is then used to reduce the dimensionality of these features and extract a feature representation of each signal. Finally, a combination of the gamma classifier (GC) and Error-Correcting Output Codes (ECOC) is applied to classify the features and recognize the emotions in speech. The performance of the proposed method has been evaluated on two datasets, Berlin and ShEMO. The results show that the proposed method recognizes speech emotions in the Berlin and ShEMO datasets with average accuracies of 93.33% and 85.73%, respectively, which is at least 6.67% better than the compared methods.
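The sketch below is a hypothetical illustration of the pipeline the abstract outlines, not the authors' implementation. It assumes 16 kHz mono speech, uses a 2-D FFT of a log-mel spectrogram as a crude stand-in for spectro-temporal modulation (STM) features, per-frame spectral entropy as the entropy feature, a small untrained CNN as the feature extractor, and scikit-learn's OutputCodeClassifier for the ECOC stage with logistic regression substituted for the paper's gamma classifier, which has no standard library implementation.

# Hypothetical sketch of the described pipeline; all design choices here are assumptions.
import numpy as np
import librosa
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier


def describe_features(wave: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Rough STM + entropy description of one utterance (illustrative only)."""
    mel = librosa.feature.melspectrogram(y=wave, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)                      # (64, T) time-frequency map
    stm = np.abs(np.fft.fft2(log_mel))                      # crude modulation spectrum
    prob = mel / (mel.sum(axis=0, keepdims=True) + 1e-10)   # per-frame spectral pmf
    entropy = -(prob * np.log2(prob + 1e-10)).sum(axis=0)   # spectral entropy per frame
    # Stack entropy as an extra row so the CNN sees one 2-D map per utterance.
    return np.vstack([stm, entropy[None, :]]).astype(np.float32)


class FeatureCNN(nn.Module):
    """Small CNN that maps the 2-D description to a fixed-length embedding."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (B, 1, F, T)
        return self.fc(self.conv(x).flatten(1))


def embed(maps: list[np.ndarray], cnn: FeatureCNN) -> np.ndarray:
    """Run each utterance map through the CNN to get fixed-length features."""
    with torch.no_grad():
        embs = [cnn(torch.from_numpy(m)[None, None]) for m in maps]
    return torch.cat(embs).numpy()


# ECOC classification stage; LogisticRegression stands in for the gamma classifier (GC).
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000), code_size=2, random_state=0)
# Usage: ecoc.fit(embed(train_maps, cnn), train_labels); ecoc.predict(embed(test_maps, cnn))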
Publisher
Springer Science and Business Media LLC
Cited by
5 articles.