Affiliation:
1. National Key Laboratory of Transient Physics, Nanjing University of Science and Technology, China
2. Naval Research Academy, China
Abstract
This paper proposes a novel guidance law for intercepting high-speed maneuvering targets based on deep reinforcement learning. The approach comprises an interceptor–target relative motion model and a value-function approximation model based on a deep Q-network (DQN) with prioritized experience replay. First, prioritized experience replay is applied to extract more informative samples and reduce training time. Second, to cope with the discrete action space of DQN, the normal acceleration is introduced into the state space and its rate of change is chosen as the action; the continuous normal-acceleration command is then obtained by numerical integration. Third, to make the line-of-sight (LOS) rate converge rapidly, a reward function whose absolute value tends to zero is constructed. Finally, simulation experiments intercepting high-speed maneuvering targets under different acceleration policies compare the proposed method with proportional navigation guidance (PNG) and a Q-learning-based guidance law (QLG). The results demonstrate that the proposed DQN-based guidance law (DQNG) produces a continuous acceleration command, drives the LOS rate to zero rapidly, and hits maneuvering targets using only the LOS rate as feedback. DQNG thereby realizes a parallel-like approach and improves the interception performance of the interceptor against high-speed maneuvering targets, while avoiding the complicated formula derivation of traditional guidance laws and eliminating acceleration buffeting.
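The abstract's central trick, choosing the normal-acceleration *rate* as the discrete DQN action and integrating it into a continuous acceleration command, together with a reward that shrinks toward zero with the LOS rate, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the action set, step size, and reward shape are all assumptions for the sake of the example.

```python
import numpy as np

# Discrete action set: candidate normal-acceleration rates (m/s^3).
# These values are hypothetical; the paper's actual set may differ.
ACTION_RATES = np.array([-50.0, 0.0, 50.0])
DT = 0.01  # integration step (s), also an assumption

def acceleration_command(a_prev, action_idx, dt=DT):
    """Euler-integrate the chosen acceleration rate into a
    continuous acceleration command, as described in the abstract."""
    return a_prev + ACTION_RATES[action_idx] * dt

def reward(los_rate):
    """A reward whose absolute value tends to zero as the LOS rate
    converges to zero (one simple choice: negative absolute LOS rate)."""
    return -abs(los_rate)

# Repeatedly selecting the largest positive rate ramps the command
# up smoothly instead of jumping between discrete levels, which is
# how the scheme avoids acceleration buffeting.
a = 0.0
for _ in range(100):
    a = acceleration_command(a, 2)
```

Because the network only ever switches between a few rate values, the integrated command itself stays continuous, which is what lets a discrete-action DQN emit a smooth acceleration profile.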
Funder
Postgraduate Research and Practice Innovation Program of Jiangsu Province
Cited by
6 articles.