Author:
Van de Maele Toon, Verbelen Tim, Çatal Ozan, Dhoedt Bart
Abstract
Scene understanding and decomposition is a crucial challenge for intelligent systems, whether for object manipulation, navigation, or any other task. Although current machine and deep learning approaches for object detection and classification obtain high accuracy, they typically do not leverage interaction with the world and are limited to the set of objects seen during training. Humans, on the other hand, learn to recognize and classify different objects by actively engaging with them on first encounter. Moreover, recent theories in neuroscience suggest that cortical columns in the neocortex play an important role in this process by building predictive models of objects in their own reference frames. In this article, we present an enactive, embodied agent that implements such a generative model for object interaction. For each object category, our system instantiates a deep neural network, called a Cortical Column Network (CCN), that represents the object in its own reference frame by learning a generative model that predicts the expected transform in pixel space, given an action. The model parameters are optimized through the active inference paradigm, i.e., by minimizing the variational free energy. When provided with a visual observation, an ensemble of CCNs each vote on their belief of observing that specific object category, yielding a potential object classification. If the likelihood of the selected category is too low, the object is detected as an unknown category, and the agent can instantiate a novel CCN for this category. We validate our system in a simulated environment, where it needs to learn to discern multiple objects from the YCB dataset. We show that classification accuracy improves as the embodied agent gathers more evidence, and that it is able to learn about novel, previously unseen objects. Finally, we show that an agent driven by active inference can choose its actions to reach a preferred observation.
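To make the voting and novelty-detection mechanism described in the abstract concrete, the following is a minimal sketch, not the authors' implementation. The class names (`CCN`, `CCNEnsemble`), the unit-variance Gaussian likelihood, the toy next-observation predictor standing in for the paper's pixel-space transform model, and the threshold value are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): an ensemble of
# per-category "Cortical Column Network"-style models that vote on object
# identity via predictive likelihood, with a threshold for unknown objects.
import torch
import torch.nn as nn


class CCN(nn.Module):
    """Toy per-category generative model: predicts the next observation
    (flattened pixels) from the current observation and an action."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def log_likelihood(self, obs, action, next_obs):
        # Gaussian likelihood with unit variance around the predicted pixels.
        pred = self.net(torch.cat([obs, action], dim=-1))
        return -0.5 * ((next_obs - pred) ** 2).sum(dim=-1)


class CCNEnsemble:
    """One CCN per known object category; each model votes with its
    likelihood, and a low best score flags an unknown category."""

    def __init__(self, obs_dim: int, act_dim: int, threshold: float = -500.0):
        self.obs_dim, self.act_dim = obs_dim, act_dim
        self.threshold = threshold          # assumed value: below this -> "unknown"
        self.models: list[CCN] = []

    def classify(self, obs, action, next_obs):
        if not self.models:
            return None
        scores = torch.stack(
            [m.log_likelihood(obs, action, next_obs) for m in self.models]
        )
        best = int(scores.argmax())
        if scores[best] < self.threshold:   # no model explains the data well
            return None                     # treat as a novel object category
        return best

    def add_category(self) -> int:
        # Instantiate a fresh CCN for a previously unseen object category.
        self.models.append(CCN(self.obs_dim, self.act_dim))
        return len(self.models) - 1
```

In use, `classify` returning `None` would trigger `add_category`, after which the newly instantiated CCN would be trained on interactions with the unknown object, in the paper's framing by minimizing variational free energy.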
Funder
Agentschap Innoveren en Ondernemen
Fonds Wetenschappelijk Onderzoek
Subject
Artificial Intelligence, Biomedical Engineering
Cited by
4 articles.