Affiliation:
1. School of Aeronautics, Chongqing Jiaotong University, Chongqing 404100, China
2. School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing 404100, China
Abstract
Unmanned aerial vehicles (UAVs) are increasingly deployed to enhance the operational efficiency of city services. However, finding optimal solutions for the gather–return task pattern in dynamic environments and under the energy constraints of UAVs remains challenging, particularly in areas dense with high-rise buildings. This paper investigates the multi-UAV path planning problem, aiming to improve data gathering rates by refining exploration strategies. First, a reinforcement learning (RL) technique equipped with an environment reset strategy is adopted for the path planning problem, and data gathering is formulated as a maximization problem. Second, to address the limitation of the stationary distribution, which fails to capture the short-term behavioral patterns of agents, a Time-Adaptive Distribution is proposed that evaluates and optimizes the policy by combining the behavioral characteristics of agents across different time scales; this approach is particularly suitable for the early stages of learning. Furthermore, the paper defines the “Narrow-Elongated Path” Problem (NEP-Problem), a special spatial configuration in RL environments that prevents agents from finding optimal solutions through random exploration. To address it, a Robust-Optimization Exploration Strategy is introduced that leverages expert knowledge and robust optimization to ensure UAVs can deterministically reach and thoroughly explore any target area. Finally, extensive simulation experiments validate the effectiveness of the proposed path planning algorithms and comprehensively analyze the impact of different exploration strategies on data gathering efficiency.
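To make the distinction concrete, one standard formalization (an illustrative sketch only; the abstract does not give the paper's exact definition, and the geometric weighting below is an assumption) contrasts the stationary distribution of the policy-induced Markov chain, $d_\pi(s) = \lim_{t \to \infty} \Pr(s_t = s \mid \pi)$, which discards transient behavior, with a time-weighted mixture of the $t$-step visitation distributions $d_\pi^{(t)}(s) = \Pr(s_t = s \mid \pi)$:

\[
d_\pi^{\lambda}(s) \;=\; (1-\lambda)\sum_{t=0}^{\infty} \lambda^{t}\, d_\pi^{(t)}(s), \qquad 0 \le \lambda < 1.
\]

Smaller values of $\lambda$ weight early time steps more heavily, capturing the short-term behavioral patterns relevant to the early stages of learning; as $\lambda \to 1$, the mixture approaches the long-run behavior described by the stationary distribution.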
Funder
National Natural Science Foundation of China