Authors:
Rainer Kartmann, Tamim Asfour
Abstract
Humans use semantic concepts such as spatial relations between objects to describe scenes and communicate tasks such as “Put the tea to the right of the cup” or “Move the plate between the fork and the spoon.” Like children, assistive robots must be able to learn the sub-symbolic meaning of such concepts from human demonstrations and instructions. We address the problem of incrementally learning geometric models of spatial relations from a few demonstrations collected online during interaction with a human. Such models enable a robot to manipulate objects in order to fulfill desired spatial relations specified by verbal instructions. At the start, we assume the robot has no geometric model of spatial relations. Given a task as above, the robot asks the user to demonstrate the task once in order to create a model from a single demonstration, leveraging a cylindrical probability distribution as a generative representation of spatial relations. We show how this model can be updated incrementally and sample-efficiently with each new demonstration, without access to past examples, using incremental maximum likelihood estimation, and we demonstrate the approach on a real humanoid robot.
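The abstract describes fitting a cylindrical probability distribution by incremental maximum likelihood estimation, so that each new demonstration updates the model without storing past examples. The following minimal Python sketch illustrates one way such an update could look; the class name, the decomposition into independent radius, height, and azimuth components, and the running sufficient-statistics bookkeeping are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


class CylindricalRelationModel:
    """Sketch of an incrementally updated cylindrical distribution.

    Assumed decomposition (not the paper's exact formulation): Gaussian
    components over radius r and height z, plus a circular component over
    azimuth phi, each estimated by maximum likelihood from running
    sufficient statistics so past demonstrations need not be stored.
    """

    def __init__(self):
        self.n = 0
        self.sum_r = self.sum_r2 = 0.0      # radius statistics
        self.sum_z = self.sum_z2 = 0.0      # height statistics
        self.sum_cos = self.sum_sin = 0.0   # azimuth statistics

    def update(self, relative_position):
        """Incorporate one demonstration: the target object's position
        relative to the reference object, as Cartesian (x, y, z)."""
        x, y, z = relative_position
        r, phi = np.hypot(x, y), np.arctan2(y, x)
        self.n += 1
        self.sum_r += r
        self.sum_r2 += r * r
        self.sum_z += z
        self.sum_z2 += z * z
        self.sum_cos += np.cos(phi)
        self.sum_sin += np.sin(phi)

    def mle_parameters(self):
        """Return the current maximum likelihood estimates."""
        n = max(self.n, 1)
        mu_r, mu_z = self.sum_r / n, self.sum_z / n
        var_r = max(self.sum_r2 / n - mu_r ** 2, 1e-6)
        var_z = max(self.sum_z2 / n - mu_z ** 2, 1e-6)
        mu_phi = np.arctan2(self.sum_sin, self.sum_cos)  # circular mean
        # Mean resultant length; an angular concentration parameter
        # could be derived from it if a von Mises component is used.
        resultant = np.hypot(self.sum_cos, self.sum_sin) / n
        return dict(mu_r=mu_r, var_r=var_r, mu_z=mu_z, var_z=var_z,
                    mu_phi=mu_phi, resultant_length=resultant)


# Usage: build the model from a single demonstration, then refine it
# online as further demonstrations arrive.
model = CylindricalRelationModel()
model.update((0.20, 0.05, 0.0))   # first demonstration
model.update((0.18, 0.02, 0.01))  # later demonstration
print(model.mle_parameters())
```

Because the update only touches constant-size sufficient statistics, the model can be refined after every interaction at negligible cost, which matches the sample-efficient, online setting the abstract describes.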
Funder
Carl-Zeiss-Stiftung
Bundesministerium für Bildung und Forschung
Subject
Artificial Intelligence, Computer Science Applications