Publisher: Springer Nature Singapore