Affiliation:
1. The School of Software, Shandong University, China
2. The School of Information Technology and Electrical Engineering, The University of Queensland, Australia
Abstract
Video captioning aims to automatically generate natural language sentences that describe the content of a video. Although encoder-decoder-based models have achieved promising progress, it remains very challenging to effectively model the linguistic behavior of humans when generating video captions. In this paper, we propose a novel video captioning model by learning from the gLobal sEntence and looking AheaD, LEAD for short. Specifically, LEAD consists of two modules: a Vision Module (VM) and a Language Module (LM). VM is a novel attention network that maps visual features into a high-level language space and explicitly models the entire sentence. When generating the current word, LM not only effectively exploits the information of the preceding sequence but also looks ahead at the future word. Based on VM and LM, LEAD can therefore obtain global sentence information and future word information, making video captioning more like a fill-in-the-blank task than word-by-word sentence generation. In addition, we propose an autonomous strategy and a multi-stage training scheme to optimize the model, which mitigate the problem of information leakage. Extensive experiments show that LEAD outperforms state-of-the-art methods on MSR-VTT, MSVD, and VATEX, demonstrating the effectiveness of the proposed approach for video captioning. We also publicly release the code of our proposed model.
Funder
National Natural Science Foundation of China
Natural Science Foundation of Shandong Province
Major Program of the National Natural Science Foundation of China
Quan Cheng Laboratory
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture
Cited by: 2 articles.