Abstract
Video captioning with encoder–decoder structures is a successful approach to sentence generation. A standard way to improve performance is to use several feature extraction networks in the encoding stage to obtain multiple kinds of visual features. Such feature extraction networks are typically weight-frozen and based on convolutional neural networks (CNNs). However, these traditional feature extraction methods have two problems. First, because the feature extraction model is frozen, it cannot be trained further by backpropagating the loss obtained from video captioning training; in particular, this prevents the feature extractor from learning more about spatial information. Second, using multiple CNNs further increases model complexity. Additionally, the authors of the Vision Transformer (ViT) pointed out an inductive bias of CNNs, the local receptive field. Therefore, we propose a full-transformer structure trained end to end for video captioning to overcome these problems. We use a ViT as the feature extraction model and propose feature extraction gates (FEGs) to enrich the captioning model's input from that extractor. We also design a universal encoder attraction (UEA) that takes the outputs of all encoder layers and performs self-attention on them. Because our method uses only the appearance feature, the UEA compensates for the lack of information about the video's temporal relationships. We evaluate our model against several recent models on two benchmark datasets, MSR-VTT and MSVD, and show competitive performance. Although the proposed model performs captioning using only a single feature, in some cases it outperforms models that use several features.
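The UEA described in the abstract attends over the outputs of all encoder layers at once. A minimal sketch of that idea follows; all names, shapes, and the single-head formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def universal_encoder_attention(layer_outputs, Wq, Wk, Wv):
    """Hypothetical sketch: single-head self-attention applied to the
    concatenation of every encoder layer's token sequence, so tokens
    can mix information across both time steps and encoder depth.

    layer_outputs: list of (T, d) arrays, one per encoder layer
    Wq, Wk, Wv:    (d, d) projection matrices
    returns:       (num_layers * T, d) attended features
    """
    H = np.concatenate(layer_outputs, axis=0)       # (L*T, d)
    Q, K, V = H @ Wq, H @ Wk, H @ Wv                # project tokens
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # scaled dot-product
    return softmax(scores) @ V                      # attention-weighted sum
```

In a real model this would be a multi-head transformer block with residual connections and layer normalization; the sketch only shows how attending over all layers' outputs jointly differs from using the last layer alone.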
Funder
The National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT: Ministry of Science and ICT)
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
Cited by 5 articles.