Author:
Fuchs Stefan, Belardinelli Anna
Abstract
Shared autonomy aims to combine robotic and human control in the execution of remote, teleoperated tasks. Such cooperative interaction requires the robot to first recognize the current human intention quickly and reliably, so that a suitable assisting plan can be instantiated and executed. Eye movements have long been known to be highly predictive of the cognitive agenda unfolding during manual tasks and hence constitute the earliest and most reliable behavioral cues for intention estimation. In this study, we present an experiment aimed at analyzing human behavior in simple teleoperated pick-and-place tasks in a simulated scenario and at devising a suitable model for early estimation of the current proximal intention. We show that scan paths are, as expected, heavily shaped by the current intention, and that two types of Gaussian Hidden Markov Models, one more scene-specific and one more action-specific, achieve very good prediction performance while also generalizing to new users and spatial arrangements. We finally discuss how the behavioral and model results suggest that eye movements reflect, to some extent, the invariance and generality of higher-level planning across object configurations, which cooperative robotic systems can leverage.
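The paper's two GHMM variants are not reproduced here, but the general technique they instantiate, likelihood-based intention classification over gaze sequences, can be sketched as follows. This is a minimal illustration only: the hmmlearn library, the intention labels, and all function and variable names are assumptions for the example, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the authors' code): fit one Gaussian HMM
# per intention class over 2-D gaze coordinates, then classify a (possibly
# partial) scan path by maximum log-likelihood, enabling early estimation.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package is installed

def train_intention_models(sequences_by_intention, n_states=4, seed=0):
    """Fit one GaussianHMM per intention label.

    sequences_by_intention: dict mapping an intention label (e.g. "pick",
    "place") to a list of gaze sequences, each a (T_i, 2) array of
    fixation x/y coordinates.
    """
    models = {}
    for intention, seqs in sequences_by_intention.items():
        X = np.concatenate(seqs)          # stack sequences into one array
        lengths = [len(s) for s in seqs]  # per-sequence lengths for fitting
        m = GaussianHMM(n_components=n_states, covariance_type="full",
                        n_iter=100, random_state=seed)
        m.fit(X, lengths)
        models[intention] = m
    return models

def predict_intention(models, gaze_prefix):
    """Score a scan-path prefix under each intention model and return the
    label with the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(gaze_prefix))
```

Because the per-class likelihoods can be evaluated on an incomplete scan path, this scheme supports the early, online intention estimation the abstract describes; the choice of state count and gaze features here is arbitrary.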
Subject
Artificial Intelligence, Biomedical Engineering
Cited by 14 articles.