Author:
Lu Cheng, Tang Chuangao, Zhang Jiacheng, Zong Yuan
Abstract
Cross-corpus speech emotion recognition (SER) is a challenging task whose difficulty lies in the mismatch between the feature distributions of the training (source domain) and testing (target domain) data, which degrades performance when the model encounters data from a new domain. Previous works apply domain adaptation (DA) to eliminate the domain shift between the source and target domains and have achieved promising performance in SER. However, these methods mainly treat the cross-corpus task simply as a DA problem, directly aligning the distributions across domains in a common feature space. In this case, excessively narrowing the domain distance impairs the emotion discrimination of the speech features, since an emotion classifier alone can hardly maintain the completeness of the emotion space. To overcome this issue, we propose a progressively discriminative transfer network (PDTN) for cross-corpus SER, which enhances the emotion discrimination of speech features while eliminating the mismatch between the source and target corpora. In detail, we design two special losses in the feature layers of PDTN, i.e., an emotion discriminant loss Ld and a distribution alignment loss La. By incorporating prior knowledge of speech emotion into feature learning (i.e., high- and low-valence speech emotion features have their respective cluster centers), we integrate a valence-aware center loss Lv and an emotion-aware center loss Lc into Ld to guarantee discriminative learning of speech emotions beyond the emotion classifier alone. Furthermore, a multi-layer distribution alignment loss La more precisely eliminates the discrepancy between the feature distributions of the source and target domains. Finally, by optimizing PDTN with the combination of three losses, i.e., the cross-entropy loss Le, Ld, and La, we gradually eliminate the domain mismatch between the source and target corpora while maintaining the emotion discrimination of speech features. Extensive experiments on six cross-corpus tasks over three datasets, i.e., Emo-DB, eNTERFACE, and CASIA, show that the proposed PDTN outperforms state-of-the-art methods.
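The abstract does not give the exact formulation of these losses, so the following PyTorch sketch is only an illustration of how a combined objective of the form Le + Ld + La could be assembled. The center-loss and MMD implementations, the weights lam_d and lam_a, and all function names here are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch of a PDTN-style training objective (assumed forms).
import torch
import torch.nn.functional as F


def center_loss(features, labels, centers):
    """Pull each feature toward the learnable center of its class; used
    here for both the emotion-aware loss Lc and, with high/low-valence
    labels, the valence-aware loss Lv (assumed squared-distance form)."""
    return ((features - centers[labels]) ** 2).sum(dim=1).mean()


def mmd_loss(source, target):
    """Linear-kernel maximum mean discrepancy between two feature batches,
    one plausible choice for the per-layer alignment term in La."""
    delta = source.mean(dim=0) - target.mean(dim=0)
    return delta.dot(delta)


def pdtn_loss(logits, labels, valence_labels,
              src_feats, tgt_feats, emo_centers, val_centers,
              lam_d=0.1, lam_a=1.0):
    """Combine Le (cross-entropy), Ld = Lv + Lc, and multi-layer La.
    src_feats / tgt_feats are lists of per-layer feature batches."""
    le = F.cross_entropy(logits, labels)                              # Le
    ld = (center_loss(src_feats[-1], labels, emo_centers)             # Lc
          + center_loss(src_feats[-1], valence_labels, val_centers))  # Lv
    la = sum(mmd_loss(s, t) for s, t in zip(src_feats, tgt_feats))    # La
    return le + lam_d * ld + lam_a * la
```

In an actual training loop, emo_centers and val_centers would be nn.Parameter tensors updated jointly with the network, as is standard for center losses; the per-layer MMD terms align the source and target feature distributions at multiple depths.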
Funder
the Scientific Research Foundation of Graduate School of Southeast University
Subject
General Physics and Astronomy
Cited by
5 articles.