Enhanced transformer model for video caption generation

Author:

Varma Soumya¹, Peter J. Dinesh¹

Affiliation:

1. Department of CSE, Karunya Institute of Technology and Sciences, Coimbatore, India

Abstract

An automatic video captioning system describes the content of a video by analysing its visual aspects in space and time and producing a meaningful caption that explains the video. A decade of research in this area has produced steep growth in the quality and appropriateness of generated captions relative to the expected result, and methods have progressed from the most basic techniques to advanced transformer models. A machine-generated caption must meet many expected standards: for humans this task may be trivial, but it is far harder for a machine to analyse the content and generate a semantically coherent description of it. The caption, produced in a natural language, must also adhere to that language's lexical and syntactic structure. Video captioning is thus a culmination of computer vision and natural language processing tasks. Starting from template-based conventional approaches, the field has moved through statistical methods and traditional deep learning approaches, and the current trend is the use of transformers. This work makes an extensive study of the literature and proposes an improved transformer-based architecture for video captioning. The architecture uses an encoder and a decoder with two and three sublayers respectively; multi-head self-attention and cross-attention are part of the model and bring about very beneficial results. The decoder is auto-regressive and uses a masked layer to prevent the model from foreseeing future words in the caption. An enhanced encoder-decoder transformer model with a CNN for feature extraction is used in our work; this model captures long-range dependencies and temporal relationships more effectively. The model has been evaluated on benchmark datasets and compared with state-of-the-art methods, and it performs slightly better, with scores varying slightly across BLEU, METEOR, ROUGE and CIDEr. Furthermore, we propose that incorporating curriculum learning can improve the results further.
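The masking behaviour described above (an auto-regressive decoder that cannot attend to future caption words) can be sketched in a few lines. This is an illustrative single-head NumPy implementation, not the paper's actual model: the random projection matrices `Wq`, `Wk`, `Wv` stand in for learned weights, and dimensions are arbitrary.

```python
import numpy as np

def causal_self_attention(x, seed=0):
    """Single-head masked (causal) self-attention, as used in an
    auto-regressive caption decoder: position t may attend only to
    positions <= t, so the model cannot foresee future words.
    Illustrative sketch with random weights, not the paper's model."""
    rng = np.random.default_rng(seed)
    T, d = x.shape
    # Random projections stand in for learned query/key/value weights.
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)               # scaled dot-product
    mask = np.triu(np.ones((T, T), bool), 1)    # True above the diagonal
    scores[mask] = -np.inf                      # block future positions
    # Row-wise softmax; exp(-inf) = 0, so future weights vanish.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# The resulting attention matrix is lower-triangular: each caption
# position mixes information only from itself and earlier positions.
out, w = causal_self_attention(np.random.default_rng(1).normal(size=(5, 8)))
```

In the full model, this masked layer is the decoder's first sublayer; cross-attention over the CNN-encoded video features and a feed-forward sublayer follow it.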

Publisher

Wiley

Subject

Artificial Intelligence, Computational Theory and Mathematics, Theoretical Computer Science, Control and Systems Engineering


Cited by 1 article.

1. Video Captioning Using Large Language Models;2024 3rd International Conference for Innovation in Technology (INOCON);2024-03-01
