Abstract
In this paper, we investigate the problem of cross-corpus speech emotion recognition (SER), in which the training (source) and testing (target) speech samples belong to different corpora. This setting leads to a feature distribution mismatch between the source and target speech samples, and hence the performance of most existing SER methods drops sharply. To solve this problem, we propose a simple yet effective transfer subspace learning method called joint distribution implicitly aligned subspace learning (JIASL). The basic idea of JIASL is straightforward: build an emotion-discriminative and corpus-invariant linear regression model under an implicit distribution alignment strategy. Following this idea, we first make use of the source speech features and emotion labels to endow the regression model with emotion-discriminative ability. Then, a well-designed reconstruction regularization term, which jointly considers the marginal and conditional distribution alignments between the speech samples of both corpora, is adopted to implicitly enable the regression model to predict the emotion labels of the target speech samples. To evaluate the proposed JIASL, extensive cross-corpus SER experiments are carried out, and the results demonstrate its promising performance in coping with cross-corpus SER tasks.
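To make the recipe sketched in the abstract concrete, the following is a minimal illustrative sketch, not the authors' exact JIASL formulation (which uses a reconstruction-based implicit alignment term): a regularized linear regression is fit on labeled source features, target samples receive pseudo-labels, and both marginal and class-conditional mean discrepancies between the corpora are folded into the regression as an alignment penalty. All names and hyperparameters (Xs, Xt, lam_reg, lam_align, n_iters) are assumptions for illustration only.

```python
# Illustrative sketch only -- NOT the authors' exact JIASL objective.
# It shows the general idea the abstract describes: a label-space regression
# learned from source data, combined with marginal and conditional
# (pseudo-label based) distribution alignment between the two corpora.
import numpy as np

def fit_transfer_regression(Xs, Ys, Xt, lam_reg=1.0, lam_align=1.0, n_iters=5):
    """Xs: (ns, d) source features, Ys: (ns, c) one-hot source labels,
    Xt: (nt, d) unlabeled target features. Returns a projection W of shape (d, c)."""
    ns, d = Xs.shape
    c = Ys.shape[1]
    # Source-only initialization: ridge regression from features to label space.
    W = np.linalg.solve(Xs.T @ Xs + lam_reg * np.eye(d), Xs.T @ Ys)
    for _ in range(n_iters):
        # Pseudo-label the target corpus with the current projection.
        Yt_pseudo = np.eye(c)[np.argmax(Xt @ W, axis=1)]
        # Marginal alignment: penalize the gap between projected corpus means.
        m_gap = Xs.mean(0, keepdims=True) - Xt.mean(0, keepdims=True)
        A = m_gap.T @ m_gap
        # Conditional alignment: penalize per-class mean gaps via pseudo-labels.
        for k in range(c):
            s_k, t_k = Xs[Ys[:, k] == 1], Xt[Yt_pseudo[:, k] == 1]
            if len(s_k) and len(t_k):
                g = s_k.mean(0, keepdims=True) - t_k.mean(0, keepdims=True)
                A += g.T @ g
        # Re-solve the regression with the alignment penalty tr(W^T A W) folded in.
        W = np.linalg.solve(Xs.T @ Xs + lam_reg * np.eye(d) + lam_align * A, Xs.T @ Ys)
    return W
```

In this sketch the emotion label of a target sample would simply be argmax over the columns of Xt @ W; the joint (marginal plus conditional) alignment shrinks the discrepancy between corpora in the projected label space, which is the role the abstract assigns to its reconstruction regularization term.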
Funder
National Natural Science Foundation of China
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by
2 articles.
1. Optimal Feature Learning for Speech Emotion Recognition – A DeepNet Approach; 2023 International Conference on Data Science and Network Security (ICDSNS); 2023-07-28
2. Deep Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition; ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2023-06-04