Authors:
Xin Wang, Jiawei Wu, Da Zhang, Yu Su, William Yang Wang
Abstract
Although promising results have been achieved in video captioning, existing models are limited to the fixed inventory of activities in the training corpus and do not generalize to open-vocabulary scenarios. Here we introduce a novel task, zero-shot video captioning, which aims at describing out-of-domain videos of unseen activities. Videos of different activities usually require different captioning strategies in many aspects, e.g., word selection, semantic construction, and style of expression, which poses a great challenge to depicting novel activities without paired training data. At the same time, similar activities share some of those aspects in common. Therefore, we propose a principled Topic-Aware Mixture of Experts (TAMoE) model for zero-shot video captioning, which learns to compose different experts based on different topic embeddings, implicitly transferring the knowledge learned from seen activities to unseen ones. In addition, we leverage an external topic-related text corpus to construct the topic embedding for each activity, which embodies the most relevant semantic vectors within the topic. Empirical results not only validate the effectiveness of our method in utilizing semantic knowledge for video captioning but also show its strong generalization ability when describing novel activities.
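To make the core idea concrete, the sketch below shows one plausible reading of a topic-aware mixture-of-experts output layer: a topic embedding gates a softmax mixture over several expert projections, so captioning strategies learned for seen activities can be recombined for unseen ones. This is a minimal illustration, not the authors' implementation; the class name, dimensions, and number of experts are all hypothetical.

```python
# Minimal sketch (not the TAMoE authors' code): a decoder output layer whose
# expert mixture weights are predicted from a topic embedding. All names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAwareMoE(nn.Module):
    def __init__(self, hidden_dim, vocab_size, topic_dim, num_experts=4):
        super().__init__()
        # One output projection ("expert") per latent captioning strategy.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, vocab_size) for _ in range(num_experts)]
        )
        # Gate: topic embedding -> mixture weights over the experts.
        self.gate = nn.Linear(topic_dim, num_experts)

    def forward(self, decoder_state, topic_embedding):
        # decoder_state: (batch, hidden_dim); topic_embedding: (batch, topic_dim)
        weights = F.softmax(self.gate(topic_embedding), dim=-1)   # (batch, E)
        logits = torch.stack(
            [expert(decoder_state) for expert in self.experts], dim=1
        )                                                          # (batch, E, vocab)
        # Compose expert outputs by the topic-conditioned weights.
        return (weights.unsqueeze(-1) * logits).sum(dim=1)        # (batch, vocab)
```

Because the gate depends only on the topic embedding (which the paper builds from an external topic-related corpus), a new activity's topic vector can select a mixture of existing experts without any paired training captions for that activity.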
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
6 articles.
1. Boosting Semi-Supervised Video Captioning via Learning Candidates Adjusters;ACM Transactions on Multimedia Computing, Communications, and Applications;2024-07-11
2. Chinese Title Generation for Short Videos: Dataset, Metric and Algorithm;IEEE Transactions on Pattern Analysis and Machine Intelligence;2024-07
3. Switchable Novel Object Captioner;IEEE Transactions on Pattern Analysis and Machine Intelligence;2023-01-01
4. Video Captioning Using Deep Learning Approach-A Comprehensive Survey;Proceedings in Adaptation, Learning and Optimization;2023
5. The MSR-Video to Text dataset with clean annotations;Computer Vision and Image Understanding;2022-12