Author:
Cheng Nuo, Wang Peng, Zhang Guangyuan, Ni Cui, Nematov Erkin
Abstract
Introduction: Deep deterministic policy gradient (DDPG)-based path planning algorithms for intelligent robots struggle to discern the value of experience transitions during training because they rely on random experience replay. This can lead to inappropriate sampling of experience transitions and overemphasis on edge experience transitions. As a result, the algorithm converges more slowly and the success rate of path planning diminishes.

Methods: We comprehensively examine the impacts of the immediate reward, the temporal-difference error (TD-error), and the Actor network's loss function on the training process, and calculate an experience transition priority from each of these three factors. Then, using information entropy as a weight, the three calculated priorities are fused to determine the final priority of the experience transition. In addition, we introduce a method that adaptively adjusts the priority of positive experience transitions, so that the algorithm focuses on positive experience transitions while maintaining a balanced sampling distribution. Finally, the sampling probability of each experience transition is derived from its priority.

Results: The experimental results show that our method requires less test time than the PER algorithm and incurs fewer collisions with obstacles. This indicates that the computed experience transition priorities accurately gauge the significance of individual experience transitions for training the path planning algorithm.

Discussion: This method enhances the utilization rate of experience transitions, increases the convergence speed of the algorithm, and improves the success rate of path planning.
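To make the Methods description concrete, the sketch below shows one plausible way to fuse three per-transition priorities with information-entropy weights and turn the fused priority into a sampling probability. The function names, the normalized-entropy weighting formula, and the priority exponent `alpha` are illustrative assumptions, not the paper's exact definitions.

```python
import math

def entropy_weights(factors):
    """Weight each priority factor by how much it discriminates
    between transitions: lower normalized entropy -> larger weight.
    factors: list of lists; each inner list holds one factor's
    positive priority values (reward, TD-error, Actor loss) for a batch."""
    n = len(factors[0])
    divergences = []
    for vals in factors:
        total = sum(vals)
        probs = [v / total for v in vals]
        # Normalized Shannon entropy in [0, 1].
        h = -sum(p * math.log(p) for p in probs if p > 0) / math.log(n)
        divergences.append(1.0 - h)
    s = sum(divergences)
    return [d / s for d in divergences]

def fused_priorities(reward_p, td_p, loss_p):
    """Combine the three factor priorities into one final priority
    per transition, using entropy-derived weights."""
    w = entropy_weights([reward_p, td_p, loss_p])
    return [w[0] * r + w[1] * t + w[2] * l
            for r, t, l in zip(reward_p, td_p, loss_p)]

def sampling_probs(priorities, alpha=0.6):
    """Map priorities to sampling probabilities, PER-style:
    P(i) = p_i^alpha / sum_k p_k^alpha (alpha is an assumed default)."""
    scaled = [p ** alpha for p in priorities]
    z = sum(scaled)
    return [p / z for p in scaled]
```

Under this scheme, a factor whose values are nearly uniform across the batch carries little information (entropy close to 1) and is down-weighted, while a factor that sharply separates transitions dominates the fused priority.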
Subject
Artificial Intelligence, Biomedical Engineering