Authors:
Polykretis Ioannis, Danielescu Andreea
Abstract
Navigation of mobile agents in unknown, unmapped environments is a critical task for achieving general autonomy. Recent advancements in combining Reinforcement Learning with Deep Neural Networks have shown promising results in addressing this challenge. However, the inherent complexity of these approaches, characterized by multi-layer networks and intricate reward objectives, limits their autonomy, increases their memory footprint, and complicates adaptation to energy-efficient edge hardware. To overcome these challenges, we propose a brain-inspired method that employs a shallow architecture trained by a local learning rule for self-supervised navigation in uncharted environments. Our approach achieves performance comparable to a state-of-the-art Deep Q Network (DQN) method with respect to goal-reaching accuracy and path length, with a similar (slightly lower) number of parameters, operations, and training iterations. Notably, our self-supervised approach combines novelty-based and random walks to alleviate the need for explicit reward definition and to enhance agent autonomy. At the same time, the shallow architecture and local learning rule eliminate the need for error backpropagation, decreasing the memory overhead and enabling implementation on edge neuromorphic processors. These results demonstrate the potential of embodied neuromorphic agents that use minimal resources while effectively handling environmental variability.
Cited by 1 article.