Publisher
Springer Nature Singapore
References
1. Pan Z, Luo Z, Yang J, Li H (2020) Multi-modal attention for speech emotion recognition. In: INTERSPEECH 2020, 25–29 Oct 2020, Shanghai, China
2. Yin Y, Huang B, Wu Y, Soleymani M (2020) Speaker-invariant adversarial domain adaptation for emotion recognition. In: ICMI '20: international conference on multimodal interaction
3. Lian Z, Tao J, Liu B, Huang J, Yang Z, Li R (2020) Context-dependent domain adversarial neural network for multimodal emotion recognition. In: INTERSPEECH 2020, 25–29 Oct 2020, Shanghai, China
4. Ye J, Wen X, Wei Y, Xu Y, Liu K, Shan H (2022) Temporal modeling matters: a novel temporal emotional modeling approach for speech emotion recognition. ResearchGate
5. Yazdani A, Simchi H, Shekofteh Y (2021) Emotion recognition in Persian speech using deep neural networks. In: 11th international conference on computer and knowledge engineering (ICCKE 2021), 28–29 Oct 2021, Ferdowsi University of Mashhad, Mashhad, Iran