Affiliation:
1. College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
Abstract
Currently, there is a great deal of interest in multimodal aspect-level sentiment classification, which uses both textual and visual information rather than a single modality to identify sentiment polarity. Since existing methods leave room for improvement in classification accuracy, we study aspect-level multimodal sentiment classification with the aim of exploring the interaction between textual and visual features. Specifically, we construct a multimodal aspect-level sentiment classification framework with multi-image gate and fusion networks, called MFSC. MFSC consists of four parts: text feature extraction, visual feature extraction, text feature enhancement, and multi-feature fusion. First, a bidirectional long short-term memory (BiLSTM) network extracts the initial text features. Building on this, a text feature enhancement strategy uses a text memory network and adaptive weights to extract the final text features. Meanwhile, a multi-image gate method fuses features from multiple images and filters out irrelevant noise. Finally, a text-visual feature fusion method based on an attention mechanism captures the association between text and images to further improve classification performance. Experimental results show that MFSC has advantages in classification accuracy and macro-F1.
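The gating and fusion ideas summarized above can be illustrated with a minimal numpy sketch. This is not the paper's actual model: the projection matrix `W_g`, the feature dimension, and the random features are hypothetical stand-ins, and the attention step is reduced to a simple dot-product form. The sketch only shows the shape of the computation: each image receives a text-conditioned gate in (0, 1) so irrelevant images are down-weighted, and the gated visual feature is then combined with the text feature via attention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8                                   # illustrative shared feature dimension
text = rng.standard_normal(d)           # final text feature (BiLSTM + enhancement stage)
images = rng.standard_normal((3, d))    # features extracted from three candidate images

# Multi-image gate: each image gets a scalar gate in (0, 1) driven by its
# compatibility with the text, suppressing noisy or irrelevant images.
W_g = rng.standard_normal((d, d)) * 0.1         # hypothetical gate projection
gates = sigmoid(images @ W_g @ text)            # one gate per image
visual = (gates[:, None] * images).sum(axis=0)  # gated sum over all images

# Attention-based text-visual fusion: attention weights over the stacked
# [text; visual] features yield a single fused representation.
feats = np.stack([text, visual])
attn = softmax(feats @ text)
fused = attn @ feats
```

In the actual framework, the gate projection and attention parameters would be learned jointly with the classifier rather than drawn at random.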
Funder
the Key Program of Chongqing Education Science Planning Project