Abstract
The challenge of video grounding - localizing activities in an untrimmed video via a natural language query - lies in aligning the semantics of vision and language consistently along the temporal dimension. Most existing proposal-based methods suffer high computational cost from their extensive candidate proposals. In this paper, we propose a novel proposal-free framework named Contextual Pyramid Network (CPNet) to investigate multi-scale temporal correlation in the video. Specifically, we propose a pyramid network to extract 2D contextual correlation maps at different temporal scales (T*T, T/2*T/2, T/4*T/4), where the 2D correlation map (past to current & future to current) is designed to model the relations between any two moments in the video. In other words, CPNet progressively replenishes the temporal contexts and refines the location of the queried activity by enlarging the temporal receptive fields. Finally, we implement a temporal self-attentive regression (i.e., proposal-free regression) to predict the activity boundary from the above hierarchical context-aware 2D correlation maps. Extensive experiments on the ActivityNet Captions, Charades-STA, and TACoS datasets demonstrate that our approach outperforms state-of-the-art methods.
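The multi-scale correlation maps described in the abstract can be illustrated with a minimal sketch: a T*T map relating every pair of moments, average-pooled to T/2*T/2 and T/4*T/4. This is a hypothetical illustration only, assuming cosine similarity between clip features and 2x average pooling for downsampling; it is not the authors' implementation, and the function names (`correlation_map`, `downsample`, `pyramid`) are invented for this example.

```python
# Hypothetical sketch of multi-scale 2D temporal correlation maps
# (not the authors' code): cosine similarity and 2x average pooling
# are assumptions made for illustration.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def correlation_map(feats):
    """T x T map relating every pair of moments (past/future to current)."""
    return [[cosine(fi, fj) for fj in feats] for fi in feats]

def downsample(cmap):
    """Average-pool a 2D map by a factor of 2 along both temporal axes."""
    t = len(cmap) // 2
    return [[(cmap[2 * i][2 * j] + cmap[2 * i + 1][2 * j]
              + cmap[2 * i][2 * j + 1] + cmap[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(t)] for i in range(t)]

def pyramid(feats):
    """Correlation maps at scales T*T, T/2*T/2, T/4*T/4."""
    m0 = correlation_map(feats)
    m1 = downsample(m0)
    m2 = downsample(m1)
    return m0, m1, m2
```

In the full model these maps would be context-aware features feeding the temporal self-attentive regression head; here they simply show the scale hierarchy.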
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
29 articles.
1. Global routing between capsules;Pattern Recognition;2024-04
2. CMGN: Cross-Modal Grounding Network for Temporal Sentence Retrieval in Video;Computer Supported Cooperative Work and Social Computing;2024
3. Exploiting Diverse Feature for Multimodal Sentiment Analysis;Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation;2023-10-29
4. Data Augmentation for Human Behavior Analysis in Multi-Person Conversations;Proceedings of the 31st ACM International Conference on Multimedia;2023-10-26
5. Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Grounding;Proceedings of the 31st ACM International Conference on Multimedia;2023-10-26