Ani-GIFs: A benchmark dataset for domain generalization of action recognition from GIFs
Published: 2022-09-26
Volume: 4
ISSN: 2624-9898
Container-title: Frontiers in Computer Science
Short-container-title: Front. Comput. Sci.
Authors: Shoumik Sovan Majumdar, Shubhangi Jain, Isidora Chara Tourni, Arsenii Mustafin, Diala Lteif, Stan Sclaroff, Kate Saenko, Sarah Adel Bargal
Abstract
Deep learning models perform remarkably well when training and test data come from the same distribution. However, this assumption is generally violated in practice, mainly due to differences in data acquisition techniques and the lack of information about the underlying source of new data. Domain generalization targets the ability to generalize to test data from an unseen domain; while this problem is well studied for images, such studies are significantly lacking for spatiotemporal visual content—videos and GIFs. This is due to (1) the challenging nature of misaligned temporal features and the varying appearance/motion of actors and actions across domains, and (2) spatiotemporal datasets being laborious to collect and annotate for multiple domains. We collect and present the first synthetic video dataset of animated GIFs for domain generalization, Ani-GIFs, which is used to study the domain gap of videos vs. GIFs, and animated vs. real GIFs, for the task of action recognition. We provide a training and testing setting for Ani-GIFs, and extend two domain generalization baseline approaches, based on data augmentation and explainability, to the spatiotemporal domain to catalyze research in this direction.
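The abstract mentions extending a data-augmentation baseline for domain generalization to the spatiotemporal setting. The paper's actual pipeline is not reproduced here; the sketch below is only a minimal illustration of the key idea, assuming clips are NumPy arrays of shape (T, H, W, C): appearance augmentations are sampled once per clip and applied identically to every frame, so the augmentation shifts the appearance domain without breaking temporal coherence. The function name and parameter ranges are hypothetical.

```python
import numpy as np

def augment_clip(clip, rng):
    """Apply one randomly sampled appearance augmentation consistently
    across all frames of a clip, preserving temporal coherence.

    clip: float array of shape (T, H, W, C) with values in [0, 1].
    rng:  a numpy random Generator.
    """
    # Sample augmentation parameters once per clip, not per frame,
    # so every frame receives the same appearance shift.
    brightness = rng.uniform(0.7, 1.3)
    channel_shift = rng.uniform(-0.1, 0.1, size=clip.shape[-1])

    # Broadcast over the (T, H, W) axes.
    out = clip * brightness + channel_shift

    # Flip the whole clip horizontally half the time (every frame together).
    if rng.random() < 0.5:
        out = out[:, :, ::-1, :]

    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
clip = rng.random((8, 32, 32, 3))  # a toy 8-frame RGB clip
aug = augment_clip(clip, rng)
print(aug.shape)  # (8, 32, 32, 3)
```

Sampling the parameters per clip rather than per frame is the essential spatiotemporal twist: per-frame sampling would introduce flicker that no real domain shift exhibits.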
Publisher
Frontiers Media SA
Subject
Computer Science Applications, Computer Vision and Pattern Recognition, Human-Computer Interaction, Computer Science (miscellaneous)
Cited by: 1 article.