Affiliation:
1. Department of Information and Computing Sciences, Utrecht University, Netherlands. y.song5@uu.nl
2. School of Computing and Information Systems, University of Melbourne, Australia. d.beck@unimelb.edu.au
Abstract
Most previous work in music emotion recognition assumes a single label, or a few song-level labels, for the whole song. While it is known that different emotions can vary in intensity within a song, annotated data for this setup is scarce and difficult to obtain. In this work, we propose a method to predict emotion dynamics in song lyrics without song-level supervision. We frame each song as a time series and employ a State Space Model (SSM), combining a sentence-level emotion predictor with an Expectation-Maximization (EM) procedure to generate the full emotion dynamics. Our experiments show that our method consistently improves the performance of sentence-level baselines without requiring any annotated songs, making it well suited to scenarios with limited training data. Further analysis through case studies shows the benefits of our method, while also indicating its limitations and pointing to future directions.
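The abstract does not specify the exact parameterization of the SSM. The sketch below is a rough illustration of the general idea only, not the authors' implementation: it assumes the simplest instantiation, in which scalar sentence-level scores y_t from any pretrained emotion predictor (e.g., valence per lyric line) are treated as noisy observations of a latent emotion trajectory x_t in a linear-Gaussian state space model, whose parameters are estimated by EM via a Kalman filter and RTS smoother. The function name `kalman_em_smoother` and the toy scores are hypothetical.

```python
# Minimal sketch: smooth noisy per-line emotion scores into a latent
# trajectory with a linear-Gaussian SSM fit by EM (assumed setup, not
# necessarily the paper's exact model).
import numpy as np

def kalman_em_smoother(y, n_iter=20, a=1.0, q=0.1, r=0.5):
    """Fit x_t = a*x_{t-1} + N(0,q), y_t = x_t + N(0,r) by EM; return E[x_t | y]."""
    T = len(y)
    for _ in range(n_iter):
        # --- E-step, forward pass: Kalman filter ---
        m = np.zeros(T); P = np.zeros(T)        # filtered means / variances
        mp = np.zeros(T); Pp = np.zeros(T)      # one-step-ahead predictions
        mp[0], Pp[0] = y[0], 1.0                # broad prior at the first score
        for t in range(T):
            if t > 0:
                mp[t] = a * m[t - 1]
                Pp[t] = a * a * P[t - 1] + q
            K = Pp[t] / (Pp[t] + r)             # Kalman gain
            m[t] = mp[t] + K * (y[t] - mp[t])
            P[t] = (1.0 - K) * Pp[t]
        # --- E-step, backward pass: RTS smoother ---
        ms = m.copy(); Ps = P.copy()
        J = np.zeros(T)                         # smoother gains
        for t in range(T - 2, -1, -1):
            J[t] = a * P[t] / Pp[t + 1]
            ms[t] = m[t] + J[t] * (ms[t + 1] - mp[t + 1])
            Ps[t] = P[t] + J[t] ** 2 * (Ps[t + 1] - Pp[t + 1])
        C = J[:-1] * Ps[1:]                     # approximate lag-one covariances
        # --- M-step: closed-form updates for a, q, r ---
        S00 = np.sum(ms[:-1] ** 2 + Ps[:-1])
        S10 = np.sum(ms[1:] * ms[:-1] + C)
        S11 = np.sum(ms[1:] ** 2 + Ps[1:])
        a = S10 / S00
        q = (S11 - a * S10) / (T - 1)
        r = np.mean((y - ms) ** 2 + Ps)
    return ms

# Usage: toy per-line valence scores from a hypothetical sentence-level predictor.
scores = np.array([0.2, 0.3, -0.1, 0.4, 0.8, 0.7, 0.9, 0.5])
print(kalman_em_smoother(scores))
```

Under this reading, EM learns how smoothly emotion evolves across lines (q) and how noisy the sentence-level predictor is (r) without any song-level labels, which is consistent with the abstract's claim of improving sentence-level baselines without annotated songs.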
Subject
Artificial Intelligence, Computer Science Applications, Linguistics and Language, Human-Computer Interaction, Communication
Cited by: 2 articles.