Affiliation:
1. Tsinghua Shenzhen International Graduate School, China
2. Meituan, China
3. Tsinghua University, China
4. Columbia University, USA
Abstract
Temporal Sentence Grounding in Videos (TSGV), which aims to ground a natural language sentence describing complex human activities in an untrimmed video, has drawn widespread attention over the past few years. However, recent studies have found that current benchmark datasets may have obvious moment annotation biases, enabling several simple baselines to achieve state-of-the-art (SOTA) performance even without training. In this paper, we take a closer look at existing evaluation protocols for TSGV and find that both the prevailing dataset splits and the evaluation metrics are the devils that lead to untrustworthy benchmarking. We therefore propose to re-organize the two widely used datasets so that the ground-truth moment distributions differ between the training and test splits, i.e., an out-of-distribution (OOD) test. Meanwhile, we introduce a new evaluation metric, “dR@n,IoU=m”, that discounts the basic recall scores, especially at small IoU thresholds, to alleviate the inflated evaluation caused by biased datasets with a large proportion of long ground-truth moments. New benchmarking results indicate that our proposed evaluation protocols can better monitor research progress in TSGV. Furthermore, we propose a novel causality-based Multi-branch Deconfounding Debiasing (MDD) framework for unbiased moment prediction. Specifically, we design a multi-branch deconfounder that eliminates the effects of multiple confounders via causal intervention. To help the model better align the semantics of sentence queries and video moments, we enhance the representations during feature encoding: for textual information, the query is parsed into several verb-centered phrases to obtain more fine-grained textual features; for visual information, positional information is decomposed from the moment features to enhance the representations of moments at diverse locations. Extensive experiments demonstrate that our proposed approach achieves competitive results among existing SOTA approaches and outperforms the base model by large margins.
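For illustration, below is a minimal, hypothetical Python sketch of how a discounted recall such as “dR@n,IoU=m” could be computed. The abstract only states that the metric discounts the basic recall scores, especially at small IoU thresholds; the specific boundary-proximity discount (alpha_s, alpha_e), the function names, and the toy data here are assumptions, not the paper's exact definition.

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two [start, end] moments, in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def discounted_recall(predictions, ground_truths, durations, n=1, iou_thresh=0.5):
    """Hypothetical "dR@n,IoU=m": plain recall counts a query as a hit when any
    top-n prediction reaches the IoU threshold; here each hit is additionally
    scaled by how close the predicted boundaries are to the ground truth."""
    scores = []
    for preds, gt, dur in zip(predictions, ground_truths, durations):
        best = 0.0
        for p in preds[:n]:
            if temporal_iou(p, gt) >= iou_thresh:
                # Assumed discount: 1 minus the normalized boundary error, so a
                # loose-but-overlapping moment earns less than a tight one.
                alpha_s = 1.0 - abs(p[0] - gt[0]) / dur
                alpha_e = 1.0 - abs(p[1] - gt[1]) / dur
                best = max(best, alpha_s * alpha_e)
        scores.append(best)
    return sum(scores) / len(scores)

# Toy usage: a tight prediction vs. an over-long one covering most of the video.
preds = [[[2.0, 7.0]], [[0.0, 30.0]]]   # top-1 predicted moments per query
gts = [[2.5, 7.5], [5.0, 25.0]]         # ground-truth moments
durs = [30.0, 30.0]                     # video durations in seconds
print(discounted_recall(preds, gts, durs, n=1, iou_thresh=0.5))  # ~0.83 vs. plain recall 1.0
```

In this toy example, the over-long second prediction still counts as a hit under plain R@1,IoU=0.5, but its score is reduced by the discount, illustrating the kind of inflated evaluation that such a metric is designed to curb.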
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture
Cited by 4 articles.