Abstract
Path planning is a central problem in artificial intelligence and robotics. This paper proposes a map-optimization approach to Q-learning-based path planning that addresses shortcomings of classic Q-learning, such as slow convergence and low efficiency. First, the training environment was improved, extending a simple map into a more complex one. Second, rewards were designed so that each step constitutes optimal exploration. With up, down, left, and right actions available simultaneously, the resulting optimal path is the globally optimal path. Finally, the approach was verified through MATLAB simulation. Compared with the original training environment, the improved map raises learning efficiency in the more complicated environment, increases the algorithm's convergence rate, and enables the robot to quickly find a collision-free path and complete its task in a complex environment. The rationality of the improvement is verified, providing useful data and a theoretical basis for subsequent research on Q-learning.
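The abstract describes tabular Q-learning on a grid map with four actions (up, down, left, right) and a reward design that penalizes collisions and rewards reaching the goal. The paper's exact environment and reward values are not given here, so the following is only a minimal sketch under assumed values: a hypothetical 4x4 grid, and illustrative rewards of -10 for hitting an obstacle or wall, -1 per step, and +100 at the goal.

```python
import random

# Sketch of tabular Q-learning on a small grid map (assumed layout and
# rewards, not the paper's exact setup). Cells: 0 = free, 1 = obstacle.
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
]
ROWS, COLS = len(GRID), len(GRID[0])
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, a):
    """Apply action a; return (next_state, reward, done)."""
    r, c = state
    nr, nc = r + ACTIONS[a][0], c + ACTIONS[a][1]
    if not (0 <= nr < ROWS and 0 <= nc < COLS) or GRID[nr][nc] == 1:
        return state, -10.0, False      # wall/obstacle: stay put, penalty
    if (nr, nc) == GOAL:
        return (nr, nc), 100.0, True    # goal reached, episode ends
    return (nr, nc), -1.0, False        # small step cost favors short paths

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning over all grid states."""
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * 4 for r in range(ROWS) for c in range(COLS)}
    for _ in range(episodes):
        s, done = START, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(4)                        # explore
            else:
                a = max(range(4), key=lambda i: Q[s][i])    # exploit
            s2, reward, done = step(s, a)
            # Standard Q-learning update rule.
            Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, max_len=50):
    """Follow the greedy policy from START; stop at GOAL or max_len."""
    s, path = START, [START]
    while s != GOAL and len(path) < max_len:
        s, _, _ = step(s, max(range(4), key=lambda i: Q[s][i]))
        path.append(s)
    return path
```

After training, `greedy_path(train())` traces a collision-free route from start to goal; the step cost of -1 is what drives convergence toward the shortest such route.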
Publisher
Darcy & Roy Press Co. Ltd.