Abstract
Efficient action prediction is of central importance for a fluent workflow between humans and, equally so, for human-robot interaction. To achieve prediction, actions can be algorithmically encoded by a series of events, where every event corresponds to a change in a (static or dynamic) relation between some of the objects in the scene. These structures are similar to a context-free grammar and, importantly, within this framework the actual objects are irrelevant for prediction; only their relational changes matter. Manipulation actions, among others, can be uniquely encoded this way. Using a virtual reality setup and testing several different manipulation actions, we show here that humans predict actions in an event-based manner, following the sequence of relational changes. Testing this with chained actions, we measure the percentage of predictive temporal gain for humans and compare it to action chains performed by robots, showing that the gain is approximately equal. Event-based and, thus, object-independent action recognition and prediction may be important for cognitively deducing properties of unknown objects seen in action, helping to address the bootstrapping of object knowledge, especially in infants.
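To illustrate the idea of event-based, object-independent encoding described above, the following is a minimal sketch in Python. It is not the paper's actual formalism (which the abstract likens to a context-free grammar); the Event class, the relation names, the ACTIONS library, and the predict function are all hypothetical, and the "gain" is computed per event rather than per unit time, purely as a stand-in for the predictive temporal gain discussed in the abstract.

```python
# Hypothetical illustration of event-based action encoding and prediction.
# Each action is modeled as an ordered list of relational-change events,
# where an event records that a relation between two abstract object roles
# changed (e.g., hand/main_object: not touching -> touching).
# Object identities are irrelevant; only the relational changes matter.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    role_a: str       # abstract role, e.g. "hand"
    role_b: str       # abstract role, e.g. "main_object"
    relation: str     # relation that changed, e.g. "touching"
    new_value: bool   # value of the relation after the change

# Hypothetical library of encoded action chains (illustrative only).
ACTIONS = {
    "pick_and_place": [
        Event("hand", "main_object", "touching", True),
        Event("main_object", "support", "touching", False),
        Event("main_object", "target", "touching", True),
        Event("hand", "main_object", "touching", False),
    ],
    "push": [
        Event("hand", "main_object", "touching", True),
        Event("main_object", "support", "moving_along", True),
        Event("hand", "main_object", "touching", False),
    ],
}

def predict(observed: list[Event]) -> dict[str, float]:
    """For each known action whose event chain starts with the observed
    prefix, return the fraction of events that no longer need to be
    observed (an event-based stand-in for predictive temporal gain)."""
    gains = {}
    for name, chain in ACTIONS.items():
        if chain[: len(observed)] == observed:
            gains[name] = 1.0 - len(observed) / len(chain)
    return gains

if __name__ == "__main__":
    # After observing only the first two relational changes,
    # "pick_and_place" is already uniquely identified and half of
    # its event chain remains unobserved.
    seen = ACTIONS["pick_and_place"][:2]
    print(predict(seen))   # {'pick_and_place': 0.5}
```

In this toy version, prediction amounts to prefix matching against known event chains; the earlier the observed prefix becomes unique, the larger the gain, which mirrors the intuition behind the human and robot comparison in the abstract.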
Publisher
Springer Science and Business Media LLC
References (35 articles; first 5 shown)
1. Kappeler, P. M. & van Schaik, C. P. (eds) Cooperation in Primates and Humans: Mechanisms and Evolution (Springer, 2006).
2. Cao, Y. et al. Recognize human activities from partially observed videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2658–2665 (2013).
3. Lan, T., Chen, T.-C. & Savarese, S. A hierarchical representation for future action prediction. In European Conference on Computer Vision, pp. 689–704 (Springer, 2014).
4. Ryoo, M. S. Human activity prediction: Early recognition of ongoing activities from streaming videos. In 2011 IEEE International Conference on Computer Vision (ICCV), pp. 1036–1043 (2011).
5. Gupta, A., Kembhavi, A. & Davis, L. S. Observing human-object interactions: Using spatial and functional compatibility for recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(10), 1775–1789 (2009).
Cited by (7 articles; first 5 shown)
1. Action Segmentation in the Brain: The Role of Object–Action Associations. Journal of Cognitive Neuroscience (2024).
2. Para-functional engineering: cognitive challenges. International Journal of Parallel, Emergent and Distributed Systems (2022-03-21).
3. Learning for action-based scene understanding. In Advanced Methods and Deep Learning in Computer Vision (2022).
4. Predicting Human Actions in the Assembly Process for Industry 4.0. In 16th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2021) (2021-09-23).
5. Deep Embedding Features for Action Recognition on Raw Depth Maps. In Computational Science – ICCS 2021 (2021).