Authors:
Cao Shuqiang, Wang Bairui, Zhang Wei, Ma Lin
Abstract
In this paper, we propose a novel method, namely visual consensus modeling, to mine the commonsense knowledge shared between the video and text modalities for video-text retrieval. Unlike existing works, which learn the video and text representations and their complicated relationships solely from pairwise video-text data, we make the first attempt to model the visual consensus by mining visual concepts from videos and exploiting their co-occurrence patterns within the video and text modalities, with no reliance on any additional concept annotations. Specifically, we build a shareable and learnable graph as the visual consensus, where the nodes denote the mined visual concepts and the edges connecting them represent the co-occurrence relationships between the concepts. Extensive experimental results on public benchmark datasets demonstrate that our proposed method, with its ability to effectively model the visual consensus, achieves state-of-the-art performance on the bidirectional video-text retrieval task. Our code is available at https://github.com/sqiangcao99/VCM.
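The abstract describes a shareable, learnable graph whose nodes are mined visual concepts and whose edges encode concept co-occurrence, used for both the video and text sides. The authors' actual implementation lives in the linked repository; the snippet below is only a minimal, hypothetical sketch of that general idea, not the paper's code. All names and sizes (VisualConsensusGraph, num_concepts, concept_dim, the single propagation step) are illustrative assumptions.

```python
# Hypothetical sketch of a learnable concept co-occurrence graph (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualConsensusGraph(nn.Module):
    """Shareable graph: nodes = mined visual concepts, edges = learnable co-occurrence weights."""

    def __init__(self, num_concepts: int = 512, concept_dim: int = 256):
        super().__init__()
        # One embedding per mined visual concept (sizes are illustrative).
        self.concept_embed = nn.Parameter(torch.randn(num_concepts, concept_dim) * 0.02)
        # Learnable edge weights modelling concept co-occurrence; symmetrised in forward().
        self.edge_logits = nn.Parameter(torch.zeros(num_concepts, num_concepts))
        self.propagate = nn.Linear(concept_dim, concept_dim)

    def forward(self, concept_scores: torch.Tensor) -> torch.Tensor:
        """
        concept_scores: (batch, num_concepts) relevance of each concept to a video or a caption.
        Returns a consensus-aware representation of shape (batch, concept_dim).
        """
        # Symmetric, row-normalised adjacency over the concept graph.
        adj = F.softmax((self.edge_logits + self.edge_logits.t()) / 2, dim=-1)
        # One step of graph propagation mixes each concept with its co-occurring neighbours.
        nodes = F.relu(self.propagate(adj @ self.concept_embed))
        # Pool node features, weighted by how relevant each concept is to the input.
        weights = F.softmax(concept_scores, dim=-1)   # (batch, num_concepts)
        return weights @ nodes                        # (batch, concept_dim)


if __name__ == "__main__":
    graph = VisualConsensusGraph()
    video_concepts = torch.rand(4, 512)  # e.g. detector/classifier scores per video
    text_concepts = torch.rand(4, 512)   # e.g. concept mentions parsed from captions
    v = graph(video_concepts)            # both modalities share the same graph
    t = graph(text_concepts)
    print(v.shape, t.shape)              # torch.Size([4, 256]) torch.Size([4, 256])
```

The key point the sketch tries to illustrate is that the graph parameters are shared across modalities, so co-occurrence statistics learned from one side can inform the other.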
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
9 articles.
1. Learning Commonsense-aware Moment-Text Alignment for Fast Video Temporal Grounding; ACM Transactions on Multimedia Computing, Communications, and Applications; 2024-09-12
2. Relation Triplet Construction for Cross-modal Text-to-Video Retrieval; Proceedings of the 31st ACM International Conference on Multimedia; 2023-10-26
3. MuMUR: Multilingual Multimodal Universal Retrieval; Information Retrieval Journal; 2023-09-25
4. A Video Captioning Method Based on Visual-Text Semantic Association; 2023 8th International Conference on Intelligent Computing and Signal Processing (ICSP); 2023-04-21
5. Action knowledge for video captioning with graph neural networks; Journal of King Saud University - Computer and Information Sciences; 2023-04