Action-aware Linguistic Skeleton Optimization Network for Non-autoregressive Video Captioning

Author:

Chen Shuqin¹, Zhong Xian², Zhang Yi³, Zhu Lei⁴, Li Ping⁵, Yang Xiaokang⁶, Sheng Bin⁷

Affiliation:

1. School of Computer Science and Hubei Provincial Collaborative Innovation Center for Basic Education Information Technology Services, Hubei University of Education, China

2. Hubei Key Lab of Transportation Internet of Things, School of Computer Science and Artificial Intelligence, Wuhan University of Technology, China and Rapid-rich Object Search Lab, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

3. School of Computer Science and Artificial Intelligence, Wuhan University of Technology, China

4. ROAS Thrust, The Hong Kong University of Science and Technology (Guangzhou) and Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, China

5. Department of Computing and School of Design, The Hong Kong Polytechnic University, China

6. MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, China

7. Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, China

Abstract

Non-autoregressive video captioning methods generate visual words in parallel but often overlook the semantic correlations among them, especially for verbs, which lowers caption quality. To address this, we integrate the action information of highlighted objects to strengthen the semantic connections among visual words. Our proposed Action-aware Linguistic Skeleton Optimization network (ALSO-Net) tackles the challenge of extracting action information across frames, improving the understanding of complex, context-dependent video actions and reducing sentence inconsistencies. ALSO-Net incorporates a linguistic skeleton tag generator to refine semantic correlations and a video action predictor to improve the accuracy of verb prediction in video captions. We also address the issues of unsatisfactory caption length and quality by jointly optimizing motion prediction losses at different levels. Experimental evaluation on prominent video captioning datasets demonstrates that ALSO-Net outperforms baseline methods by a significant margin and achieves competitive performance against state-of-the-art autoregressive methods, with lower model complexity and faster inference.

Publisher

Association for Computing Machinery (ACM)

