A Video Question Answering Model Based on Knowledge Distillation
Published: 2023-06-12
Volume: 14
Issue: 6
Page: 328
ISSN: 2078-2489
Container-title: Information
Short-container-title: Information
Language: en
Author: Shao Zhuang 1, Wan Jiahui 2, Zong Linlin 2,3
Affiliation:
1. China Academy of Space Technology, Beijing 100094, China
2. Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, School of Software, Dalian University of Technology, Dalian 116620, China
3. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Abstract
Video question answering (QA) is a cross-modal task that requires understanding video content in order to answer questions about it. Current techniques address this challenge with stacked modules, such as attention mechanisms and graph convolutional networks, which reason about the semantics of video features and their interaction with text-based questions and yield excellent results. However, these approaches often learn and fuse features representing different aspects of the video separately, neglecting the intra-modal interactions between them and overlooking the latent, complex correlations among the extracted features. In addition, stacking modules introduces a large number of parameters, making model training more difficult. To address these issues, we propose a novel multimodal knowledge distillation method that leverages the strengths of knowledge distillation for both model compression and feature enhancement. Specifically, the fused features of a larger teacher model are distilled into knowledge that guides the learning of the appearance and motion features in a smaller student model. By incorporating cross-modal information at an early stage, the appearance and motion features can discover their related and complementary latent relationships, improving overall model performance. Despite its simplicity, extensive experiments on the widely used video QA datasets MSVD-QA and MSRVTT-QA demonstrate clear performance improvements over prior methods, validating the effectiveness of the proposed knowledge distillation approach.
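The core idea described above — pulling the student's separate appearance and motion streams toward the teacher's fused cross-modal representation — can be sketched with a simple feature-matching objective. The sketch below is illustrative only: the dimensions, the linear projections, and the plain MSE distillation loss are assumptions for the example, not the paper's exact architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    """Mean-squared error between two feature vectors."""
    return float(np.mean((a - b) ** 2))

# Hypothetical feature dimensions (not taken from the paper).
d_app, d_mot, d_fused = 64, 64, 128

# Teacher's fused cross-modal feature for one clip (frozen during distillation).
teacher_fused = rng.standard_normal(d_fused)

# Student's separate appearance and motion features for the same clip.
student_app = rng.standard_normal(d_app)
student_mot = rng.standard_normal(d_mot)

# Learnable projections mapping each student stream into the teacher's space.
W_app = 0.1 * rng.standard_normal((d_fused, d_app))
W_mot = 0.1 * rng.standard_normal((d_fused, d_mot))

def distill_loss(w_app, w_mot):
    # Each student stream is matched against the teacher's fused feature,
    # injecting cross-modal information into the streams early on.
    return mse(w_app @ student_app, teacher_fused) + \
           mse(w_mot @ student_mot, teacher_fused)

initial_loss = distill_loss(W_app, W_mot)

# Gradient descent on the projections; the MSE gradient is analytic:
# d/dW mean((W x - t)^2) = (2 / d_fused) * outer(W x - t, x).
lr = 0.1
for _ in range(100):
    err_app = W_app @ student_app - teacher_fused
    err_mot = W_mot @ student_mot - teacher_fused
    W_app -= lr * (2.0 / d_fused) * np.outer(err_app, student_app)
    W_mot -= lr * (2.0 / d_fused) * np.outer(err_mot, student_mot)

final_loss = distill_loss(W_app, W_mot)
print(f"distillation loss: {initial_loss:.4f} -> {final_loss:.6f}")
```

In a full model, the projection weights would be trained jointly with the student's QA objective across many clips, and the distillation term would act as an auxiliary loss rather than the sole objective.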
Funder
Social Science Planning Foundation of Liaoning Province; State Key Laboratory of Novel Software Technology, Nanjing University; Dalian High-level Talent Innovation Support Plan
Subject
Information Systems
Cited by: 1 article.