A Feature Selection Algorithm Based on Differential Evolution for English Speech Emotion Recognition
Published: 2023-11-16
Volume: 13
Issue: 22
Page: 12410
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Author:
Yue Liya 1, Hu Pei 2, Chu Shu-Chuan 3, Pan Jeng-Shyang 3,4
Affiliation:
1. Fanli Business School, Nanyang Institute of Technology, Nanyang 473004, China
2. School of Computer and Software, Nanyang Institute of Technology, Nanyang 473004, China
3. College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
4. Department of Information Management, Chaoyang University of Technology, Taichung 413310, Taiwan
Abstract
The automatic identification of emotions from speech holds significance in facilitating interactions between humans and machines. To improve the recognition accuracy of speech emotion, we extract mel-frequency cepstral coefficients (MFCCs) and pitch features from raw signals, and an improved differential evolution (DE) algorithm is utilized for feature selection based on K-nearest neighbor (KNN) and random forest (RF) classifiers. The proposed multivariate DE (MDE) adopts three mutation strategies to overcome the slow convergence of classical DE and maintain population diversity, and employs a jumping method to avoid falling into local traps. The simulations are conducted on four public English speech emotion datasets: eNTERFACE05, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Surrey Audio-Visual Expressed Emotion (SAVEE), and the Toronto Emotional Speech Set (TESS), which together cover a diverse range of emotions. The MDE algorithm is compared with PSO-assisted biogeography-based optimization (BBO_PSO), DE, and the sine cosine algorithm (SCA) on emotion recognition error, number of selected features, and running time. MDE achieves errors of 0.5270, 0.5044, 0.4490, and 0.0420 on eNTERFACE05, RAVDESS, SAVEE, and TESS with the KNN classifier, and errors of 0.4721, 0.4264, 0.3283, and 0.0114 with the RF classifier. The proposed algorithm demonstrates excellent performance in emotion recognition accuracy, and it identifies meaningful acoustic features from MFCCs and pitch.
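The abstract describes wrapper-style feature selection: candidate feature subsets are scored by a classifier's error, and an evolutionary search explores the subset space. The following is a minimal sketch of that idea using classical DE/rand/1 with binomial crossover (not the paper's MDE with its three mutation strategies and jumping method), where each individual is a real-valued vector thresholded at 0.5 to form a binary feature mask and fitness is the KNN error on a held-out split. The synthetic dataset and all parameter values (`NP`, `F`, `CR`, generation count) are illustrative assumptions, not taken from the paper.

```python
# Minimal DE-based wrapper feature selection sketch (classical DE/rand/1,
# illustrative parameters; synthetic data stands in for MFCC/pitch features).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(vec):
    """Classification error of KNN trained on the features where vec > 0.5."""
    mask = vec > 0.5
    if not mask.any():                      # empty subset: worst possible error
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr[:, mask], ytr)
    return 1.0 - knn.score(Xte[:, mask], yte)

NP, D, F, CR, GENS = 20, X.shape[1], 0.5, 0.9, 30
pop = rng.random((NP, D))                   # real-coded population in [0, 1]
fit = np.array([fitness(v) for v in pop])

for _ in range(GENS):
    for i in range(NP):
        # DE/rand/1 mutation: three distinct individuals other than i
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i],
                                 size=3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 1.0)
        # Binomial crossover with one guaranteed mutant component
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True
        trial = np.where(cross, mutant, pop[i])
        f_trial = fitness(trial)
        if f_trial <= fit[i]:               # greedy selection
            pop[i], fit[i] = trial, f_trial

best = pop[fit.argmin()]
print("best error:", fit.min(), "features kept:", int((best > 0.5).sum()))
```

Swapping `KNeighborsClassifier` for `RandomForestClassifier` reproduces the paper's second evaluation setting; the MDE variant additionally mixes several mutation strategies per generation to balance exploration and exploitation.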
Funder
Henan Provincial Philosophy and Social Science Planning Project; Henan Province Key Research and Development and Promotion Special Project
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science