Authors:
Li Jiaming, Tao Chuanqi, Guan Donghai
Publisher:
Springer Nature Singapore
References (45 articles):
1. Morency, L.P., Mihalcea, R., Doshi, P.: Towards multimodal sentiment analysis: harvesting opinions from the web. In: Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 169–176 (2011)
2. Hazarika, D., Zimmermann, R., Poria, S.: MISA: modality-invariant and -specific representations for multimodal sentiment analysis. In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 1122–1131 (2020)
3. Yu, W., Xu, H., Yuan, Z., Wu, J.: Learning modality-specific representations with self-supervised multi-task learning for multimodal sentiment analysis. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 10790–10797 (2021)
4. Lin, R., Hu, H.: Multimodal contrastive learning via uni-modal coding and cross-modal prediction for multimodal sentiment analysis. In: Findings of the Association for Computational Linguistics, EMNLP 2022, pp. 511–523 (2022)
5. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 4171–4186 (2019)
Cited by (1 article):
1. Progressive Fusion Network with Mixture of Experts for Multimodal Sentiment Analysis. In: 2024 16th International Conference on Advanced Computational Intelligence (ICACI), 16 May 2024