Video localized caption generation framework for industrial videos

Author:

Khurana Khushboo 1, Deshpande Umesh 2

Affiliation:

1. Department of Computer Science and Engineering, Shri Ramdeobaba College of Engineering and Management, Nagpur, India

2. Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology (VNIT), Nagpur, India

Abstract

In this information age, visual content is growing exponentially, and video captioning can address many real-life applications. Automatic generation of video captions helps in comprehending a video in a short time and assists in faster information retrieval, video analysis, indexing, report generation, etc. Captioning of industrial videos is important for obtaining a visual and textual summary of the work ongoing in the industry. The generated captioned summary of a video can assist in remote monitoring of industries, and these captions can be utilized for video question-answering, video segment extraction, productivity analysis, etc. Due to the presence of diverse events, processing of industrial videos is more challenging than in other domains. In this paper, we address the real-life application of generating descriptions for videos of a labor-intensive industry. We propose a keyframe-based approach for the generation of video captions. The framework produces a video summary by extracting keyframes, thereby reducing the video captioning task to image captioning. These keyframes are passed to an image captioning model for description generation. Utilizing these individual frame captions, multi-caption descriptions of a video are generated, with a unique start and end time for each caption. For image captioning, a merge encoder-decoder model with a stacked decoder is used. We have performed experimentation on a dataset specifically created for a small-scale industry. We also show that data augmentation on this small dataset can greatly benefit the generation of remarkably good video descriptions. Results of extensive experimentation utilizing different image encoders, language encoders, and decoders in the merge encoder-decoder model are reported. Apart from the results on domain-specific data, results on domain-independent datasets are also presented to show the general applicability of the technique. Performance comparisons on existing datasets, OVSD, Flickr8k, and Flickr30k, are reported to demonstrate the scalability of our method.
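The pipeline described above reduces video captioning to per-keyframe image captioning with time spans attached to each caption. The following is a minimal sketch of that flow, assuming a frame-difference rule for keyframe selection and a generic `image_captioner` callable standing in for the merge encoder-decoder model; the function names and threshold are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a keyframe-based video captioning pipeline.
# extract_keyframes, caption_video, and diff_threshold are hypothetical
# names; the paper's actual keyframe extraction and captioning model differ.

import cv2
import numpy as np


def extract_keyframes(video_path, diff_threshold=30.0):
    """Select keyframes where the mean absolute frame difference exceeds a threshold."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    keyframes, prev_gray, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None or np.mean(cv2.absdiff(gray, prev_gray)) > diff_threshold:
            keyframes.append((idx / fps, frame))  # (timestamp in seconds, frame)
        prev_gray = gray
        idx += 1
    cap.release()
    return keyframes


def caption_video(video_path, image_captioner):
    """Caption each keyframe and assign each caption a start/end time span."""
    keyframes = extract_keyframes(video_path)
    timed_captions = []
    for i, (start, frame) in enumerate(keyframes):
        # Each caption spans from its keyframe to the next keyframe (open-ended for the last one).
        end = keyframes[i + 1][0] if i + 1 < len(keyframes) else None
        caption = image_captioner(frame)  # e.g., a merge encoder-decoder image captioning model
        timed_captions.append({"start": start, "end": end, "caption": caption})
    return timed_captions
```

The per-caption start and end times make the output directly usable for downstream tasks mentioned in the abstract, such as video segment extraction and video question-answering.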

Publisher

IOS Press

Subject

Artificial Intelligence, General Engineering, Statistics and Probability
