Affiliation:
1. School of Information Management, Nanjing University, China
Abstract
Intangible cultural heritage (ICH) songs convey the folk lives and stories of different communities and nations through touching melodies and lyrics that are rich in sentiment. Current research on the sentiment analysis of songs is mainly based on lyrics, audio, or lyric-audio combinations. Recent studies have shown that deep spectrum features, extracted from spectrograms generated from the audio, perform well in several speech-based tasks. However, studies that incorporate spectrum features into the multimodal sentiment analysis of songs are lacking. Hence, we propose combining audio, lyrics, and spectrograms in a tri-modal fusion to conduct multimodal sentiment analysis of ICH songs. In addition, the correlations and interactions between different modalities have not been fully considered. Here, we propose a multimodal song sentiment analysis model (MSSAM) that includes a strengthened audio features-guided attention (SAFGA) mechanism, which can learn intra- and inter-modal information effectively. First, we obtain strengthened audio features through the fusion of acoustic and spectrum features. Then, the strengthened audio features guide the distribution of attention weights over the words of the lyrics via SAFGA, making the model focus on words that carry sentiment and relate to the sentiment of the strengthened audio features, thereby capturing modal interactions and complementary information. Taking two world-level ICH lists, Jingju (京剧) and Kunqu (昆曲), as examples, we build sentiment analysis datasets and compare the proposed model with state-of-the-art baselines on the Jingju and Kunqu datasets. Experimental results demonstrate the superiority of the proposed model.
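The abstract does not specify how the fusion and attention are computed, so the following is a minimal PyTorch sketch of one plausible reading of SAFGA: the concatenation-based fusion, the dot-product attention, and all dimension names (acoustic_dim, spectrum_dim, word_dim, hidden_dim) are our assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StrengthenedAudioGuidedAttention(nn.Module):
    # Sketch of an audio-guided attention block: fuse acoustic and deep
    # spectrum features into a "strengthened" audio query, then weight
    # lyric word representations by their relevance to that query.
    # All dimensions and the fusion scheme are assumptions.
    def __init__(self, acoustic_dim, spectrum_dim, word_dim, hidden_dim):
        super().__init__()
        # Fuse acoustic and spectrum features into strengthened audio features.
        self.audio_fusion = nn.Linear(acoustic_dim + spectrum_dim, hidden_dim)
        # Project lyric word representations into the same space as the query.
        self.word_proj = nn.Linear(word_dim, hidden_dim)

    def forward(self, acoustic, spectrum, words):
        # acoustic: (batch, acoustic_dim) clip-level acoustic features
        # spectrum: (batch, spectrum_dim) deep spectrum features from the spectrogram
        # words:    (batch, seq_len, word_dim) lyric word representations
        audio = torch.tanh(self.audio_fusion(torch.cat([acoustic, spectrum], dim=-1)))
        keys = self.word_proj(words)                                   # (batch, seq_len, hidden_dim)
        # Dot-product score between the audio query and each lyric word.
        scores = torch.bmm(keys, audio.unsqueeze(-1)).squeeze(-1)      # (batch, seq_len)
        weights = F.softmax(scores, dim=-1)                            # attention over words
        attended = torch.bmm(weights.unsqueeze(1), words).squeeze(1)   # (batch, word_dim)
        return attended, weights

if __name__ == "__main__":
    attn = StrengthenedAudioGuidedAttention(acoustic_dim=128, spectrum_dim=256,
                                            word_dim=300, hidden_dim=200)
    attended, weights = attn(torch.randn(4, 128), torch.randn(4, 256),
                             torch.randn(4, 20, 300))
    print(attended.shape, weights.shape)  # torch.Size([4, 300]) torch.Size([4, 20])

The attended lyric vector and the strengthened audio features could then be concatenated and passed to a sentiment classifier; that downstream design is likewise left open by the abstract.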
Funder
Nanjing University
National Natural Science Foundation of China
Subject
Library and Information Sciences, Information Systems
Cited by
2 articles.