Affiliation:
1. Department of CSE, Karunya Institute of Technology and Sciences, Coimbatore, India
Abstract
Automatic video captioning is the task of describing the content of a video by analysing its visual aspects in space and time and producing a meaningful caption that explains the video. A decade of research in this area has produced steep growth in the quality and appropriateness of generated captions relative to the expected result, progressing from very basic methods to the most advanced transformer-based approaches. A machine-generated caption for a video must adhere to many expected standards. For humans this task may be trivial, but it is not so simple for a machine to analyse the content and generate a semantically coherent description. The caption, generated in a natural language, must also respect that language's lexical and syntactic structure. Video captioning is a culmination of computer vision and natural language processing tasks. Commencing with template-based conventional approaches, the field has moved through statistical methods and traditional deep learning approaches and is now in the trend of using transformers. This work makes an extensive study of the literature and proposes an improved transformer-based architecture for video captioning. The architecture uses an encoder and a decoder with two and three sublayers respectively. Multi-head self-attention and cross-attention are part of the model and bring about very beneficial results. The decoder is auto-regressive and uses a masked layer to prevent the model from foreseeing future words in the caption. An enhanced encoder-decoder transformer model with a CNN for feature extraction has been used in our work; this model captures long-range dependencies and temporal relationships more effectively. The model has been evaluated on benchmark datasets and compared with state-of-the-art methods, and is found to perform slightly better, with scores varying slightly across BLEU, METEOR, ROUGE and CIDEr. Furthermore, we propose that incorporating curriculum learning could improve the results further.
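The masked decoder layer described in the abstract can be illustrated with a minimal causal (look-ahead) mask in plain Python. This is only a sketch of the general technique, not the authors' implementation: position i in the caption may attend only to positions up to i, so future words are hidden during training.

```python
import math

def causal_mask(size):
    # Lower-triangular boolean mask: row i allows attention
    # only to positions j <= i (no "foreseeing" future tokens).
    return [[j <= i for j in range(size)] for i in range(size)]

def masked_softmax(scores, mask):
    # Zero out (mask) future positions before normalising,
    # equivalent to setting their scores to -infinity.
    out = []
    for row, mrow in zip(scores, mask):
        exps = [math.exp(s) if keep else 0.0 for s, keep in zip(row, mrow)]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

# Toy example: three caption positions with uniform raw scores.
scores = [[0.0, 0.0, 0.0] for _ in range(3)]
attn = masked_softmax(scores, causal_mask(3))
```

With uniform scores, the first position attends only to itself, while the last position spreads its attention evenly over all three positions.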
Subject
Artificial Intelligence, Computational Theory and Mathematics, Theoretical Computer Science, Control and Systems Engineering
Cited by: 1 article.
1. Video Captioning Using Large Language Models; 2024 3rd International Conference for Innovation in Technology (INOCON); 2024-03-01