A Feature Selection Algorithm Based on Differential Evolution for English Speech Emotion Recognition

Author:

Yue Liya 1, Hu Pei 2, Chu Shu-Chuan 3, Pan Jeng-Shyang 3,4

Affiliation:

1. Fanli Business School, Nanyang Institute of Technology, Nanyang 473004, China

2. School of Computer and Software, Nanyang Institute of Technology, Nanyang 473004, China

3. College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China

4. Department of Information Management, Chaoyang University of Technology, Taichung 413310, Taiwan

Abstract

The automatic identification of emotions from speech is significant for facilitating interaction between humans and machines. To improve the recognition accuracy of speech emotion, we extract mel-frequency cepstral coefficients (MFCCs) and pitch features from raw signals, and an improved differential evolution (DE) algorithm is used for feature selection based on K-nearest neighbor (KNN) and random forest (RF) classifiers. The proposed multivariate DE (MDE) adopts three mutation strategies to address the slow convergence of classical DE and to maintain population diversity, and it employs a jumping method to avoid becoming trapped in local optima. Simulations are conducted on four public English speech emotion datasets: eNTERFACE05, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Surrey Audio-Visual Expressed Emotion database (SAVEE), and the Toronto Emotional Speech Set (TESS), which together cover a diverse range of emotions. MDE is compared with PSO-assisted biogeography-based optimization (BBO_PSO), DE, and the sine cosine algorithm (SCA) in terms of emotion recognition error, number of selected features, and running time. In the results obtained, MDE achieves errors of 0.5270, 0.5044, 0.4490, and 0.0420 on eNTERFACE05, RAVDESS, SAVEE, and TESS with the KNN classifier, and errors of 0.4721, 0.4264, 0.3283, and 0.0114 with the RF classifier. The proposed algorithm demonstrates excellent performance in emotion recognition accuracy, and it identifies meaningful acoustic features from MFCCs and pitch.
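To make the wrapper feature-selection idea concrete, the sketch below implements a basic binary differential evolution (DE/rand/1/bin) that selects feature columns and scores each candidate subset by the classification error of a KNN model on a held-out split. This is a minimal illustration under stated assumptions, not the paper's MDE: the three mutation strategies and the jumping method are omitted, the parameter values (population size, F, CR, number of generations) are placeholders, and the random matrix X merely stands in for MFCC and pitch statistics extracted from speech.

# Illustrative sketch: wrapper feature selection with basic binary DE + KNN.
# Not the authors' MDE; all names and parameters here are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in feature matrix: rows = utterances, columns = statistics of
# MFCCs and pitch (e.g., per-coefficient means and variances).
X = rng.normal(size=(300, 40))
y = rng.integers(0, 4, size=300)          # four emotion classes
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def error(mask):
    """Classification error of a KNN trained on the selected columns."""
    if mask.sum() == 0:                    # reject empty feature subsets
        return 1.0
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X_tr[:, mask], y_tr)
    return 1.0 - clf.score(X_va[:, mask], y_va)

n_dim, pop_size, F, CR = X.shape[1], 20, 0.5, 0.9
pop = rng.random((pop_size, n_dim))        # continuous genomes in [0, 1]
masks = pop > 0.5                          # threshold to binary feature masks
fit = np.array([error(m) for m in masks])

for _ in range(50):                        # DE generations
    for i in range(pop_size):
        a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])        # DE/rand/1 mutation
        cross = rng.random(n_dim) < CR                 # binomial crossover
        cross[rng.integers(n_dim)] = True              # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i]).clip(0, 1)
        t_mask = trial > 0.5
        t_fit = error(t_mask)
        if t_fit <= fit[i]:                            # greedy selection
            pop[i], masks[i], fit[i] = trial, t_mask, t_fit

best = fit.argmin()
print(f"best error={fit[best]:.4f}, features kept={int(masks[best].sum())}/{n_dim}")

In a real pipeline, X would be built from frame-level MFCC and pitch trajectories aggregated per utterance, and the same loop could swap the KNN evaluator for a random forest, which is how the paper reports both sets of results.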

Funder

Henan Provincial Philosophy and Social Science Planning Project

Henan Province Key Research and Development and Promotion Special Project

Publisher

MDPI AG

Subject

Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science

