Authors:
Giorgi Ioanna, Cangelosi Angelo, Masala Giovanni L.
Abstract
Endowing robots with the ability to view the world the way humans do, to understand natural language, and to learn novel semantic meanings when deployed in the physical world is a compelling problem. A further challenge is linking language to action in artificial agents, in particular for utterances involving abstract words. In this work, we propose a novel methodology, using a brain-inspired architecture, to model an appropriate mapping of language onto the perceptual and internal motor representations of humanoid robots. This research presents the first robotic instantiation of a complex architecture based on Baddeley's Working Memory (WM) model. Our proposed method provides a scalable knowledge representation of verbal and non-verbal signals in the cognitive architecture, which supports incremental open-ended learning. Human spoken utterances about the workspace and the task are combined with the internal knowledge map of the robot to achieve task-accomplishment goals. We train the robot to understand instructions involving higher-order (abstract) linguistic concepts of developmental complexity, which cannot be directly hooked onto the physical world and are not pre-defined in the robot's static self-representation. Our interactive learning method allows flexible run-time acquisition of novel linguistic forms and real-world information without retraining the cognitive model. Hence, the robot can adapt to new workspaces that include novel objects and task outcomes. We assess the potential of the proposed methodology in verification experiments with a humanoid robot. The obtained results suggest robust capabilities of the model to link language bi-directionally with the physical environment and to solve a variety of manipulation tasks, starting with limited knowledge and gradually learning from run-time interaction with the tutor, beyond the pre-trained stage.
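To illustrate the kind of mechanism the abstract describes, the following is a minimal, hypothetical sketch (not the authors' implementation, and all names in it are assumptions): a toy grounded lexicon in which concrete words are linked to percept features or motor primitives, abstract words are grounded indirectly through composition of already-known words, and novel entries can be acquired at run time without retraining any model.

```python
# Hypothetical sketch of open-ended word grounding; not the paper's WM architecture.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Grounding:
    """A word's link to non-verbal knowledge: percept features and/or a motor primitive."""
    percept: Optional[List[float]] = None          # e.g. a colour/shape feature vector
    motor: Optional[str] = None                    # e.g. the name of a motor primitive
    composed_of: List[str] = field(default_factory=list)  # indirect grounding via other words


class GroundedLexicon:
    def __init__(self) -> None:
        self._entries: Dict[str, Grounding] = {}

    def acquire(self, word: str, grounding: Grounding) -> None:
        """Add or update a word at run time; no global retraining is needed."""
        self._entries[word] = grounding

    def resolve(self, word: str) -> List[Grounding]:
        """Recursively expand a word down to its directly grounded entries."""
        entry = self._entries.get(word)
        if entry is None:
            return []
        if not entry.composed_of:
            return [entry]
        grounded: List[Grounding] = []
        for part in entry.composed_of:
            grounded.extend(self.resolve(part))
        return grounded


if __name__ == "__main__":
    lexicon = GroundedLexicon()
    # Concrete words grounded directly in percept features or motor primitives.
    lexicon.acquire("red", Grounding(percept=[1.0, 0.0, 0.0]))
    lexicon.acquire("ball", Grounding(percept=[0.2, 0.9, 0.9]))
    lexicon.acquire("grasp", Grounding(motor="close_hand"))
    # An abstract word grounded indirectly, by composition of known words.
    lexicon.acquire("take", Grounding(composed_of=["grasp"]))
    print([g.motor or g.percept for g in lexicon.resolve("take")])
```

The composition-based entries stand in, very loosely, for the abstract/higher-order concepts mentioned in the abstract, which are not tied to a single percept but can still be unpacked into grounded elements when an instruction must be executed.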
Subject
Artificial Intelligence, Biomedical Engineering
References: 71 articles.
1. Alomari (2017). Natural language acquisition and grounding for embodied robotic systems. Proceedings of the AAAI Conference on Artificial Intelligence.
2. Araki (2012). Online learning of concepts and words using multimodal LDA and hierarchical Pitman-Yor language model.
3. Arandjelovic (2017). Look, listen and learn.
4. Baddeley (2012). Working memory: theories, models, and controversies. Annual Review of Psychology.
5. Beetz (2011). Robotic roommates making pancakes.
Cited by
4 articles.
1. Towards Robot Learning from Spoken Language. Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023-03-13.
2. Video Relationship Detection Using Mixture of Experts. IEEE Access, 2023.
3. Safe Distance Monitoring for COVID-19 Using YOLOv3 Object Recognition Paradigm. Proceedings of the NIELIT's International Conference on Communication, Electronics and Digital Technology, 2023.
4. Simulations of working memory spiking networks driven by short-term plasticity. Frontiers in Integrative Neuroscience, 2022-10-03.