Authors:
Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray
Abstract
This paper introduces the pipeline used to extend the largest dataset in egocentric vision, EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, and 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (Damen et al. in Scaling egocentric vision, ECCV, 2018), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete (+128% more action segments) annotations of fine-grained actions. This collection enables new challenges such as action detection and evaluating the "test of time", i.e. whether models trained on data collected in 2018 generalise to new footage collected two years later. The dataset is aligned with six challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), and unsupervised domain adaptation for action recognition. For each challenge, we define the task and provide baselines and evaluation metrics.
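The abstract quantifies annotation density in actions per minute. As a rough illustration of how such a figure can be derived from segment annotations (this is a minimal sketch, not the authors' pipeline), the Python snippet below computes a per-video density. The file name EPIC_100_train.csv and the columns video_id, start_timestamp and stop_timestamp mirror the publicly released EPIC-KITCHENS-100 annotation CSVs, but are assumptions here rather than a documented API.

# Minimal sketch: per-video action density (actions per minute) from
# segment annotations. Assumes a CSV named EPIC_100_train.csv with
# columns video_id, start_timestamp, stop_timestamp (HH:MM:SS.ff);
# these names follow the public annotation files but are assumptions.
import csv
from collections import defaultdict
from datetime import datetime

def to_seconds(ts: str) -> float:
    # Parse an HH:MM:SS.ff timestamp into seconds.
    t = datetime.strptime(ts, "%H:%M:%S.%f")
    return t.hour * 3600 + t.minute * 60 + t.second + t.microsecond / 1e6

segments = defaultdict(list)  # video_id -> list of (start, stop) in seconds
with open("EPIC_100_train.csv", newline="") as f:
    for row in csv.DictReader(f):
        segments[row["video_id"]].append(
            (to_seconds(row["start_timestamp"]),
             to_seconds(row["stop_timestamp"])))

for video_id, segs in sorted(segments.items()):
    # Approximate the video's length by the stop time of its last action.
    duration_min = max(stop for _, stop in segs) / 60.0
    print(f"{video_id}: {len(segs) / duration_min:.1f} actions/min")

Averaging such per-video densities over the 2018 and 2020 annotation sets is one way a relative figure like the reported 54% increase could be checked.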
Funder
Engineering and Physical Sciences Research Council
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
References: 131 articles (first five listed).
1. Bearman A, Russakovsky O, Ferrari V, & Fei-Fei L (2016) What’s the point: semantic segmentation with point supervision. In ECCV
2. Bhattacharyya A, Fritz M, & Schiele B (2019) Bayesian prediction of future street scenes using synthetic likelihoods. In ICLR
3. Bojanowski P, Lajugie R, Bach F, Laptev I, Ponce J, Schmid C, & Sivic J (2014) Weakly supervised action labeling in videos under ordering constraints. In ECCV
4. Caesar H, Bankiti V, Lang AH, Vora S, Liong VE, Xu Q, Krishnan A, Pan Y, Baldan G, & Beijbom O (2019) nuScenes: a multimodal dataset for autonomous driving. arXiv preprint
5. Cao Y, Long M, Wang J, & Yu P (2017) Correlation hashing network for efficient cross-modal retrieval. In BMVC
Cited by
145 articles.