Fine-Grained Emotional Calculation of Emotional Expression in Modern Visual Communication Designs
Authors:
Zhang Yimiao1, Xie Linyun2, Ji Hongfang1
Affiliation:
1. Department of Information Engineering, Jiangxi Water Resources Institute, Nanchang, Jiangxi, China. 2. College of Arts & Media, Nanchang Institute of Science and Technology, Nanchang, Jiangxi, China.
Abstract
In the information age, mining text for emotion has become a popular research topic, and deep learning plays an important role in sentiment analysis. In this study, we propose LE-CNN-MBiLSTM, a fine-grained sentiment analysis model for computing the emotions expressed in visual communication design. The model builds on the ERNIE pre-trained language model and introduces a parallel CNN and a dual-channel BiLSTM structure: the CNN first mines multiple local key features in the text, the BiLSTM then extracts contextual semantics, and a combined CNN-BiLSTM channel extracts fusion features. The model performs well on fine-grained sentiment analysis, reaching an accuracy of 93.58% with a loss value of 0.18. When the model is applied to a corpus of comments on visual communication design works, positive and negative emotions dominate the samples, each accounting for about 50%, and sadness is a particularly prominent fine-grained emotion. The model can be applied to fine-grained sentiment computation for visual communication design and can be transferred to other natural language processing domains, offering a new approach to building network models for text sentiment analysis.
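As a concrete illustration of the architecture the abstract describes, the following PyTorch sketch wires ERNIE-style embeddings into a parallel CNN branch and a dual-channel BiLSTM. It is a minimal sketch, not the authors' implementation: the class name LECnnMBiLSTM, all layer sizes, the six-class emotion output, and the nn.Embedding stand-in for the ERNIE encoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LECnnMBiLSTM(nn.Module):
    """Illustrative sketch of the LE-CNN-MBiLSTM design from the abstract:
    contextual embeddings feed a parallel CNN (several kernel widths for
    local key features) and a dual-channel BiLSTM (one channel over the raw
    embeddings for contextual semantics, one over the CNN output for
    CNN-BiLSTM fusion features). All sizes are assumptions."""

    def __init__(self, vocab_size=30000, emb_dim=768, num_classes=6,
                 conv_channels=128, kernel_sizes=(3, 5, 7), lstm_hidden=256):
        super().__init__()
        # Stand-in embedding layer; the paper uses ERNIE for this step.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Parallel CNNs: odd kernel widths with same-padding keep the
        # sequence length so branch outputs can be concatenated.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, conv_channels, k, padding=k // 2)
            for k in kernel_sizes
        )
        # Channel 1: BiLSTM over raw embeddings extracts contextual semantics.
        self.bilstm_text = nn.LSTM(emb_dim, lstm_hidden,
                                   batch_first=True, bidirectional=True)
        # Channel 2: BiLSTM over CNN features yields the fusion representation.
        self.bilstm_conv = nn.LSTM(conv_channels * len(kernel_sizes), lstm_hidden,
                                   batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(4 * lstm_hidden, num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, emb_dim)
        c = torch.cat([torch.relu(conv(x.transpose(1, 2))) for conv in self.convs],
                      dim=1).transpose(1, 2)           # (batch, seq_len, C * 3)
        h_text, _ = self.bilstm_text(x)                # contextual channel
        h_conv, _ = self.bilstm_conv(c)                # fusion channel
        h = torch.cat([h_text, h_conv], dim=-1)        # (batch, seq_len, 4 * hidden)
        return self.classifier(h.max(dim=1).values)    # max-pool over time

# Example: classify a batch of 8 comments, 64 tokens each, into 6 emotions.
model = LECnnMBiLSTM()
logits = model(torch.randint(0, 30000, (8, 64)))
print(logits.shape)  # torch.Size([8, 6])
```

Concatenating the two BiLSTM channels before classification is one straightforward way to realize the "fusion features" the abstract mentions; the published model may fuse the branches differently.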
Publisher
Walter de Gruyter GmbH