Affiliation:
1. Georgia Institute of Technology, USA
2. University of Texas at Austin, USA
Abstract
When transferring a learned task to an environment containing new objects, a core problem is identifying the mapping between objects in the old and new environments. This object mapping is dependent on the task being performed and the roles objects play in that task. Prior work assumes (i) the robot has access to multiple new demonstrations of the task or (ii) the primary features for object mapping have been specified. We introduce an approach that is not constrained by either assumption but rather uses structured interaction with a human teacher to infer an object mapping for task transfer. We describe three experiments: an extensive evaluation of assisted object mapping in simulation, an interactive evaluation incorporating demonstration and assistance data from a user study involving 10 participants, and an offline evaluation of the robot’s confidence during object mapping. Our results indicate that human-guided object mapping provided a balance between mapping performance and autonomy, resulting in (i) up to 2.25× as many correct object mappings as mapping without human interaction, and (ii) more efficient transfer than requiring the human teacher to re-demonstrate the task in the new environment, correctly inferring the object mapping across 93.3% of the tasks and requiring at most one interactive assist in the typical case.
Funder
National Science Foundation
Office of Naval Research
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Human-Computer Interaction
Cited by
13 articles.