Affiliation:
1. Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, P.R. China
2. Hangzhou Data for Truth Technology Co., Ltd., Hangzhou, P.R. China
3. Zhejiang Big Data Exchange Center, Jiaxing, P.R. China
Abstract
Music emotion information is widely used in music information retrieval, music recommendation, music therapy, and related applications. In the field of music emotion recognition (MER), computer scientists typically extract musical features to identify musical emotions, but this approach ignores listeners' individual differences. Using machine learning methods, this study modeled the relations among audio features, individual factors, and music emotions, taking both audio features and individual features as inputs to predict the perceived emotion and the felt emotion of music. The results show that real-time individual features (e.g., preference for the target music and mechanism indices) significantly improve model performance, whereas stable individual features (e.g., sex, music experience, and personality) have no effect. Individual features have a greater effect on models recognizing felt emotions than on models recognizing perceived emotions.
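The abstract's core idea, that adding individual features to audio features as model inputs can improve emotion prediction, can be illustrated with a toy sketch. This is not the authors' actual pipeline: the data are synthetic, the feature names (normalized tempo as an audio feature, a preference rating as a real-time individual feature) are hypothetical, and a hand-rolled ordinary-least-squares fit stands in for whatever learners the study used. The sketch simply shows that a model given both feature types can fit the felt-emotion target at least as well as an audio-only model.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X^T X) beta = X^T y."""
    n = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    return gauss_solve(XtX, Xty)

def rss(X, y, beta):
    """Residual sum of squares of a fitted linear model."""
    return sum((yi - sum(b * xi for b, xi in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

# Synthetic listeners (hypothetical features):
# an audio feature (normalized tempo) and a real-time individual
# feature (preference rating for the target music, 1-5).
tempo = [0.2, 0.5, 0.8, 0.3, 0.9, 0.6]
pref = [1.0, 3.0, 5.0, 2.0, 4.0, 3.0]
# Toy felt-emotion valence, constructed to depend on both features.
y = [0.3 * t + 0.2 * p for t, p in zip(tempo, pref)]

X_audio = [[1.0, t] for t in tempo]                     # audio features only
X_full = [[1.0, t, p] for t, p in zip(tempo, pref)]     # audio + individual

beta_audio = ols_fit(X_audio, y)
beta_full = ols_fit(X_full, y)

# The combined model fits the felt-emotion target at least as well.
print(rss(X_full, y, beta_full) <= rss(X_audio, y, beta_audio))
```

Because the toy target depends on preference and preference is not collinear with tempo, the audio-only model leaves unexplained variance that the combined model captures, mirroring (in a trivialized way) the study's finding that real-time individual features improve recognition of felt emotion.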
Subject
Psychology (miscellaneous), Music
Cited by
23 articles.