Authors:
KOU Kai, YANG Gang, ZHANG Wenqi, LIU Xincheng, YAO Yuan, ZHOU Xingshe
Abstract
Existing deep reinforcement learning algorithms can only observe the local environment and therefore receive insufficient perceptual information in UAV autonomous navigation tasks. This paper investigates UAV autonomous navigation in unknown environments based on the stochastic-policy soft actor-critic (SAC) reinforcement learning model. Specifically, the paper proposes a policy network with a memory-enhancement mechanism that fuses historical memory information with the current observation to extract the temporal dependencies of the states, thereby improving state estimation under partially observable conditions and preventing the learning algorithm from falling into locally optimal solutions. In addition, a non-sparse reward function is designed to alleviate the difficulty of policy convergence under sparse rewards. Finally, several complex scenarios are trained and validated on the AirSim + UE4 simulation platform. The experimental results show that the proposed method achieves a navigation success rate 10% higher than that of the benchmark algorithm and an average flight distance 21% shorter, which effectively enhances the stability and convergence of the UAV autonomous navigation algorithm.
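The abstract describes two concrete mechanisms: a memory-augmented SAC policy that conditions on a window of past observations, and a dense (non-sparse) reward shaped by progress toward the goal. The following is a minimal sketch of both ideas, not the authors' implementation: it assumes a PyTorch SAC setup, and all layer sizes, coefficient values, and the weighting inside `dense_reward` are illustrative assumptions.

```python
# A minimal sketch (assumptions, not the paper's code): an LSTM-based SAC
# actor over an observation history, plus a non-sparse reward built from
# per-step progress toward the goal.
import torch
import torch.nn as nn


class MemoryAugmentedPolicy(nn.Module):
    """Gaussian SAC actor that conditions on a window of past observations."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # The LSTM fuses historical observations with the current one,
        # approximating the belief state under partial observability.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq: torch.Tensor):
        # obs_seq: (batch, seq_len, obs_dim) -- the memory window
        feats, _ = self.lstm(self.encoder(obs_seq))
        last = feats[:, -1]                    # summary of the history
        log_std = self.log_std(last).clamp(-20, 2)
        return self.mu(last), log_std          # stochastic-policy parameters


def dense_reward(prev_dist, curr_dist, collided, reached,
                 progress_w=1.0, step_cost=0.01,
                 crash_penalty=10.0, goal_bonus=10.0):
    """Non-sparse reward: a learning signal at every step, not only at the goal."""
    if collided:
        return -crash_penalty
    if reached:
        return goal_bonus
    # Positive when the UAV moves closer to the goal, minus a small
    # per-step cost that discourages long, wandering trajectories.
    return progress_w * (prev_dist - curr_dist) - step_cost
```

The progress term rewards every step that shortens the distance to the goal, which is one common way to avoid the sparse-reward convergence problem the abstract mentions; the exact shaping used in the paper may differ.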