Affiliations:
1. University of Science and Technology of China, Hefei, Anhui, China
2. Tsinghua University, Shenzhen, Guangdong, China
3. HUAWEI Technologies, Hangzhou, Zhejiang, China
Abstract
The topic of multimodal conversation systems has recently garnered significant attention across various industries, including travel and retail, among others. While pioneering works in this field have shown promising performance, they often focus solely on context information at the utterance level, overlooking the context-aware dependencies of multimodal semantic elements like words and images. Furthermore, the ordinal information of images, which indicates the relevance between visual context and users’ demands, remains underutilized during the integration of visual content. Additionally, the exploration of how to effectively utilize corresponding attributes provided by users when searching for desired products is still largely unexplored. To address these challenges, we propose PMATE, a
P
osition-aware
M
ultimodal di
A
logue system with seman
T
ic
E
lements. Specifically, to obtain semantic representations at the element level, we first unfold the multimodal historical utterances and devise a position-aware multimodal element-level encoder. This component considers all images that may be relevant to the current turn and introduces a novel position-aware image selector to choose related images before fusing the information from the two modalities. Finally, we present a knowledge-aware two-stage decoder and an attribute-enhanced image searcher for the tasks of generating textual responses and selecting image responses, respectively. We extensively evaluate our model on two large-scale multimodal dialogue datasets, and the results of our experiments demonstrate that our approach outperforms several baseline methods.
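To make the position-aware image selection idea above concrete, the sketch below scores candidate images by combining content relevance with a positional prior that favors images from more recent turns. This is a hand-crafted illustration only: the function name, the dot-product relevance score, and the exponential `decay` prior are assumptions, as the abstract does not specify the selector's actual (learned) scoring function.

```python
import math

def position_aware_image_scores(query, image_feats, decay=0.5):
    """Return a selection distribution over candidate images.

    query       : list[float] -- embedding of the current dialogue turn
    image_feats : list[list[float]] -- one embedding per candidate image,
                  ordered from oldest to most recent
    decay       : penalty per turn of distance from the current turn
                  (hypothetical position prior)
    """
    n = len(image_feats)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Relevance (dot-product similarity) minus a positional penalty
    # that grows with distance from the most recent turn.
    logits = [
        dot(query, feat) - decay * (n - 1 - i)
        for i, feat in enumerate(image_feats)
    ]

    # Softmax over candidates yields the selection probabilities.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

With two equally relevant images, the more recent one receives the higher probability, which is the behavior the position prior is meant to capture.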
Funders
National Natural Science Foundation of China
USTC Research Funds of the Double First-Class Initiative
China Postdoctoral Science Foundation
Publisher
Association for Computing Machinery (ACM)