Affiliation:
1. Department of Computer Science, University of Tabriz, Tabriz 51666-16471, Iran
Abstract
In recent years, implementing reinforcement learning on autonomous mobile robots (AMRs) has remained challenging: traditional methods require extensive trial-and-error, converge slowly, and demand substantial computation. This paper introduces a strategy that uses a customized spiking neural network (SNN) for autonomous learning and control of an AMR in unknown environments. The model combines spike-timing-dependent plasticity (STDP) with dopamine modulation for learning and employs the Izhikevich neuron model, yielding a biologically inspired, computationally efficient control system that adapts to changing environments. Performance is evaluated in a simulated environment that replicates real-world scenarios with obstacles. In the initial training phase, the model faces significant challenges: integrating brain-inspired learning, dopamine modulation, and the Izhikevich neuron model adds complexity, and the robot reaches its target in only 33% of trials while colliding with obstacles in the remaining 67%, reflecting its difficulty adapting to complex obstacle layouts. However, performance improves markedly in the testing phase, after the robot has learned: target-reaching accuracy rises to 94%, and collisions with obstacles fall to 6%. This shift demonstrates the adaptability and problem-solving capability of the model in the simulated environment, making it more viable for real-world applications.
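The two ingredients named in the abstract, the Izhikevich neuron and dopamine-gated plasticity, can be illustrated with a minimal sketch. The neuron dynamics below are the standard Izhikevich (2003) equations with regular-spiking parameters; the weight-update helper follows the common eligibility-trace formulation of dopamine-modulated STDP. All parameter values, function names, and the learning rate are illustrative assumptions, not the paper's actual implementation.

```python
def simulate_izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                        T=1000.0, dt=0.25):
    """Euler simulation of one Izhikevich neuron for T ms; returns spike times.

    Dynamics (Izhikevich 2003):
        v' = 0.04 v^2 + 5 v + 140 - u + I
        u' = a (b v - u)
        if v >= 30 mV: emit spike, then v <- c, u <- u + d
    Parameters here are the regular-spiking preset; I is a constant input
    current (an assumption for this sketch).
    """
    v, u = c, b * c            # start at rest
    spikes = []
    steps = int(T / dt)
    for step in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike threshold crossed: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes


def update_weight(w, eligibility, dopamine, lr=0.01, w_max=1.0):
    """Dopamine-gated weight change (hypothetical helper).

    In eligibility-trace formulations of reward-modulated STDP, pre/post
    spike pairings build an eligibility trace; the actual synaptic change
    is that trace gated by the dopamine signal.
    """
    w += lr * eligibility * dopamine
    return min(max(w, 0.0), w_max)   # clamp to [0, w_max]


if __name__ == "__main__":
    spike_times = simulate_izhikevich()
    print(f"spikes in 1 s: {len(spike_times)}")
```

With a constant suprathreshold input the regular-spiking neuron fires tonically, and the weight update only takes effect when a nonzero dopamine signal arrives, which is what lets delayed rewards shape earlier spike pairings.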
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science