Author:
Wang Shunjie, Cai Guoyong, Lv Guangrui
Abstract
Aspect-level multimodal sentiment analysis is the fine-grained sentiment analysis task of predicting the sentiment polarity of given aspects in multimodal data. Most existing multimodal sentiment analysis approaches focus on mining and fusing multimodal global features while overlooking the correlation of finer-grained multimodal local features, which considerably limits the semantic relevance between different modalities. Therefore, a novel aspect-level multimodal sentiment analysis method based on global–local features fusion with co-attention (GLFFCA) is proposed to comprehensively explore multimodal associations from both global and local perspectives. Specifically, an aspect-guided global co-attention module is designed to capture aspect-guided intra-modality global correlations. Meanwhile, a gated local co-attention module is introduced to capture the adaptive association alignment of multimodal local features. Following that, a global–local multimodal feature fusion module is constructed to integrate global–local multimodal features in a hierarchical manner. Extensive experiments on the Twitter-2015 and Twitter-2017 datasets validate the effectiveness of the proposed method, which achieves better aspect-level multimodal sentiment analysis performance than other related methods.
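The abstract describes aspect-guided co-attention over text and image features followed by gated fusion. The paper itself does not publish its equations here, so the following is only a minimal illustrative sketch of that general idea: both modalities are attended with the aspect embedding as the query, and a scalar sigmoid gate (an assumption, not the authors' design) balances the two resulting context vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aspect_guided_co_attention(text, image, aspect):
    """Toy aspect-guided co-attention with gated fusion.

    text:   (Lt, d) token features
    image:  (Lv, d) visual region features
    aspect: (d,)    aspect embedding used as the shared query

    Returns a single (d,) fused context vector. The scalar gate is an
    illustrative simplification of the paper's gated local co-attention.
    """
    d = aspect.shape[-1]
    # Attend each modality with the aspect as query (scaled dot-product).
    t_weights = softmax(text @ aspect / np.sqrt(d))    # (Lt,)
    v_weights = softmax(image @ aspect / np.sqrt(d))   # (Lv,)
    t_ctx = t_weights @ text                           # (d,) textual context
    v_ctx = v_weights @ image                          # (d,) visual context
    # Gate: sigmoid of the contexts' inner product decides the mixing ratio.
    g = 1.0 / (1.0 + np.exp(-(t_ctx @ v_ctx)))
    return g * t_ctx + (1.0 - g) * v_ctx
```

For example, with 5 token vectors, 3 region vectors, and a 4-dimensional aspect embedding, the function returns one 4-dimensional fused representation per aspect.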
Funder
National Science Foundation of China
Project of Guangxi Key Lab of Trusted Software
CCF-Zhipu AI Large Model Fund
Publisher
Springer Science and Business Media LLC
Cited by
2 articles.