1. Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation 42, 4 (2008), 335–359.
2. Neri E Cibau, Enrique M Albornoz, and Hugo L Rufiner. 2013. Speech emotion recognition using a deep autoencoder. Anales de la XV Reunion de Procesamiento de la Informacion y Control 16 (2013), 934–939.
3. J Deng. 2015. Emotional states associated with music: Classification, prediction of changes, and consideration in recommendation. ACM Transactions on Interactive Intelligent Systems (TiiS) (2015).
4. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
5. Moataz El Ayadi, Mohamed S Kamel, and Fakhri Karray. 2011. Survey on speech emotion recognition: Features, classification schemes, and databases. Pattern Recognition 44, 3 (2011), 572–587.