Video Question Answering via Knowledge-based Progressive Spatial-Temporal Attention Network
Published: 2019-08-12
Volume: 15
Issue: 2s
Pages: 1-22
ISSN: 1551-6857
Container-title: ACM Transactions on Multimedia Computing, Communications, and Applications
Language: en
Short-container-title: ACM Trans. Multimedia Comput. Commun. Appl.
Authors: Weike Jin, Zhou Zhao, Yimeng Li, Jie Li, Jun Xiao, Yueting Zhuang
Abstract
Visual Question Answering (VQA) is a challenging task that has gained increasing attention from both the computer vision and natural language processing communities in recent years. Given a question in natural language, a VQA system is designed to automatically generate the answer according to the referenced visual content. Although this topic has recently attracted much interest, existing work on visual question answering mainly focuses on a single static image, which represents only a small part of the dynamic, sequential visual data in the real world. As a natural extension, video question answering (VideoQA) remains less explored. Because of the inherent temporal structure of video, ImageQA approaches may not apply effectively to video question answering. In this article, we not only take the spatial and temporal dimensions of video content into account but also employ an external knowledge base to improve the answering ability of the network. More specifically, we propose a knowledge-based progressive spatial-temporal attention network to tackle this problem. We obtain both object and region features of the video frames from a region proposal network. The knowledge representation is generated by a word-level attention mechanism over the comment information of each object, extracted from DBpedia. We then develop a question-knowledge-guided progressive spatial-temporal attention network to learn a joint video representation for the video question answering task. We also construct a large-scale video question answering dataset. Extensive experiments on two different datasets validate the effectiveness of our method.
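The abstract's core mechanism is query-guided soft attention: a question (or knowledge) vector scores each visual feature, and a softmax over the scores produces a weighted summary of the video content. A minimal sketch of this idea is below; the additive scoring form, the weight names `W_f`, `W_q`, `w`, and all dimensions are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def question_guided_attention(frame_feats, question_vec, W_f, W_q, w):
    """Soft attention sketch (hypothetical parameterization):
    score each frame feature against the question vector, then
    return the attention-weighted sum of frame features."""
    # frame_feats: (num_frames, d_f); question_vec: (d_q,)
    # Additive attention: tanh(F W_f + q W_q) projected to a scalar per frame.
    scores = np.tanh(frame_feats @ W_f + question_vec @ W_q) @ w  # (num_frames,)
    alpha = softmax(scores)            # attention weights, sum to 1
    context = alpha @ frame_feats      # (d_f,) attended video summary
    return context, alpha

# Toy usage with random features (dimensions are arbitrary).
rng = np.random.default_rng(0)
num_frames, d_f, d_q, d_h = 8, 16, 12, 10
frames = rng.standard_normal((num_frames, d_f))
q = rng.standard_normal(d_q)
W_f = rng.standard_normal((d_f, d_h))
W_q = rng.standard_normal((d_q, d_h))
w = rng.standard_normal(d_h)
context, alpha = question_guided_attention(frames, q, W_f, W_q, w)
```

The same scoring pattern applies at the word level over DBpedia comment tokens, in the spatial dimension over region features, and in the temporal dimension over frames; the paper's "progressive" design chains such stages rather than applying one in isolation.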
Funder
Zhejiang Natural Science Foundation
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications,Hardware and Architecture
Cited by
14 articles.
1. Video question answering via traffic knowledge database and question classification;Multimedia Systems;2024-01-16
2. Hierarchical Synergy-Enhanced Multimodal Relational Network for Video Question Answering;ACM Transactions on Multimedia Computing, Communications, and Applications;2023-12-11
3. Cross-modality Multiple Relations Learning for Knowledge-based Visual Question Answering;ACM Transactions on Multimedia Computing, Communications, and Applications;2023-10-23
4. Visual Paraphrase Generation with Key Information Retained;ACM Transactions on Multimedia Computing, Communications, and Applications;2023-05-30
5. Transformer-Based Visual Grounding with Cross-Modality Interaction;ACM Transactions on Multimedia Computing, Communications, and Applications;2023-05-30