Abstract
Autonomous, collision-free navigation has become a critical challenge for unmanned surface vehicles (USVs) seeking to expand their application scenarios. Conventional methods for automatic USV navigation typically rely on fine-grained modeling of the environment and therefore generalize poorly. Methods based on deep reinforcement learning possess powerful learning abilities and have achieved promising results in USV automatic navigation tasks. However, the growing complexity of network model structures has led to instability during the training process. Generating more robust navigation strategies, that is, ensuring stable reward-score trends during training and smoother action trajectories of the USV, is therefore crucial for automatic navigation and constitutes the main research question of this study. In this paper, an improved deep deterministic policy gradient (DDPG) algorithm is proposed for stable automatic navigation of USVs in complex environments. First, we construct a stable training framework that incorporates a stable feature-sharing module with constrained gradient backpropagation, which bolsters the USV's scene memorization capacity, reduces model training fluctuations during navigation policy learning, and improves the training stability of the navigation model. Second, we ensure the decision adaptability of the USV by constraining the extent of action change between adjacent time steps through the reward function, which improves the smoothness of the USV's actions. Finally, we design typical USV automatic navigation scenarios to validate the performance of the algorithm. Experimental results confirm that our algorithm achieves collision-free navigation and outperforms the traditional DDPG algorithm in convergence speed, effective sailing distance, and rudder angle maneuver consumption, among other performance metrics.
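The action-smoothness constraint described above can be illustrated with a minimal sketch. This is not the paper's exact formulation: the L1 penalty form and the weight `lam` are assumptions for illustration, showing only how a reward term can penalize large rudder-angle changes between adjacent time steps.

```python
def smoothness_penalty(action, prev_action, lam=0.1):
    """Negative reward term proportional to the action change magnitude
    between adjacent time steps (illustrative L1 form)."""
    return -lam * sum(abs(a - b) for a, b in zip(action, prev_action))

def shaped_reward(base_reward, action, prev_action, lam=0.1):
    """Base navigation reward plus the action-smoothness term."""
    return base_reward + smoothness_penalty(action, prev_action, lam)

# An unchanged rudder command incurs no penalty; a large swing is penalized.
print(shaped_reward(1.0, [0.5], [0.5]))   # no change   -> 1.0
print(shaped_reward(1.0, [0.5], [-0.5]))  # change of 1 -> 0.9
```

Under such shaping, the learned policy is discouraged from oscillating rudder commands, which is one way to obtain the smoother action trajectories the abstract refers to.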
Funder
Anhui Polytechnic University
Natural Science Foundation of Anhui Province
Anhui University