Affiliation:
1. College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
2. Hunan Key Laboratory of Intelligent Planning and Simulation for Aerospace Mission, Changsha 410073, China
Abstract
Traditional unmanned aerial vehicle (UAV) path planning methods are designed for static scenes, struggle to balance optimality with real-time performance, and are prone to local optima. In this paper, we propose an improved deep reinforcement learning approach for UAV path planning in dynamic scenarios. First, we establish a task scenario that includes an obstacle assessment model and formulate the UAV path planning problem as a Markov Decision Process (MDP). We translate the MDP into the reinforcement learning framework, designing the state space, action space, and reward function, and incorporate heuristic rules into the action exploration policy. Second, we approximate the Q function with an enhanced dueling double deep Q-network (D3QN) equipped with a prioritized experience replay mechanism, and design the algorithm's network structure on the TensorFlow framework. Through extensive training, we obtain reinforcement learning path planning policies for both static and dynamic scenes and employ a novel visualized action field to analyze their planning effectiveness. Simulations demonstrate that the proposed algorithm accomplishes UAV path planning in dynamic scenes and outperforms classical methods such as A*, RRT, and DQN in planning effectiveness.
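The core training mechanism the abstract names (a D3QN sampling from a prioritized experience replay buffer) can be sketched independently of the paper's TensorFlow implementation. Below is a minimal proportional-prioritization replay buffer in plain Python/NumPy; the class name and the `alpha`/`beta` parameter choices are illustrative assumptions, not the authors' code.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly TD error skews sampling
        self.data = []              # (state, action, reward, next_state, done) tuples
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                # next write position (ring buffer)

    def add(self, transition):
        # New transitions get the current max priority so each is replayed at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # Sampling probability proportional to priority^alpha.
        p = self.priorities[:len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # After a learning step, priorities are refreshed with the new |TD errors|.
        self.priorities[idx] = np.abs(td_errors) + eps
```

In a D3QN training loop, each gradient step would sample a batch, scale the per-transition loss by `weights`, and write the resulting TD errors back via `update_priorities`.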
Funder
National Natural Science Foundation of China
References (34 articles)
1. Bulka. Automatic control for aerobatic maneuvering of agile fixed-wing UAVs. J. Intell. Robot. Syst., 2019.
2. Chen. A review of research on unmanned aerial vehicle path planning algorithms. Aerodyn. Missile J., 2020.
3. Chen. Application of improved A* algorithm in robot path planning. Electron. Des. Eng., 2014.
4. Liu. Research of path planning algorithm based on improved artificial potential field. J. Shenyang Ligong Univ., 2017.
5. LaValle, S. Rapidly-exploring random trees: A new tool for path planning. Res. Rep. 9811, 1998, 293–308.
Cited by 5 articles.