Object Priors for Classifying and Localizing Unseen Actions
Published: 2021-04-19
Issue: 6
Volume: 129
Pages: 1954-1971
ISSN: 0920-5691
Container-title: International Journal of Computer Vision
Language: en
Short-container-title: Int J Comput Vis
Authors: Mettes Pascal, Thong William, Snoek Cees G. M.
Abstract
This work strives for the classification and localization of human actions in videos, without the need for any labeled video training examples. Where existing work relies on transferring global attribute or object information from seen to unseen action videos, we seek to classify and spatio-temporally localize unseen actions in videos from image-based object information only. We propose three spatial object priors, which encode local person and object detectors along with their spatial relations. On top we introduce three semantic object priors, which extend semantic matching through word embeddings with three simple functions that tackle semantic ambiguity, object discrimination, and object naming. A video embedding combines the spatial and semantic object priors. It enables us to introduce a new video retrieval task that retrieves action tubes in video collections based on user-specified objects, spatial relations, and object size. Experimental evaluation on five action datasets shows the importance of spatial and semantic object priors for unseen actions. We find that persons and objects have preferred spatial relations that benefit unseen action localization, while using multiple languages and simple object filtering directly improves semantic matching, leading to state-of-the-art results for both unseen action classification and localization.
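The abstract describes scoring unseen actions by semantically matching detected object names to action names through word embeddings, combined with simple object filtering. The Python sketch below is a rough, hypothetical illustration of that idea only, not the authors' implementation: the `embed` callable, the per-video object scores, the top-k filtering, and the score combination are all assumptions made for this example.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's code): score an unseen action
# by matching detected object names to the action name via word embeddings.
# `embed` is assumed to map a word to a vector (e.g. a pretrained word
# embedding); `object_scores` is assumed to hold per-video detection scores.

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def score_action(action_name, object_scores, embed, top_k=5):
    """Combine detector confidence with word-embedding relevance.

    object_scores: dict of object class name -> average detection score
    embed: callable mapping a word to a np.ndarray embedding
    top_k: simple object filtering, keep only the k most action-relevant objects
    """
    action_vec = embed(action_name)
    # Semantic relevance of each detected object to the action name.
    relevance = {obj: cosine(embed(obj), action_vec) for obj in object_scores}
    # Keep only the most relevant objects (a crude stand-in for object filtering).
    kept = sorted(relevance, key=relevance.get, reverse=True)[:top_k]
    return sum(object_scores[obj] * relevance[obj] for obj in kept)

def classify_zero_shot(object_scores, action_names, embed):
    """Return the unseen action with the highest object-based score."""
    scores = {a: score_action(a, object_scores, embed) for a in action_names}
    return max(scores, key=scores.get)
```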
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by
14 articles.