Author:
Wang Yang, Fang Yilin, Lou Ping, Yan Junwei, Liu Nianyun
Abstract
As labor costs rise, there is a trend for robots to replace humans in industrial settings. Mobile robots are widely used to execute tasks in harsh industrial environments, and planning a path in an unknown environment is an important problem for them. The deep Q-network (DQN), an effective reinforcement learning method, has been applied to mobile robot path planning in unknown environments, but it generally converges slowly. This paper presents a method based on Double DQN (DDQN) with prioritized experience replay (PER) for mobile robot path planning in unknown environments. By sensing local information about its surroundings, the mobile robot plans its path with this method in an unknown environment. The experimental results show that the proposed method achieves a higher convergence speed and success rate than the standard DQN method in the same experimental environment.
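The two ingredients named in the abstract can be illustrated compactly. Below is a minimal sketch, not the authors' implementation: a proportional prioritized replay buffer (transitions are sampled with probability proportional to their TD error raised to a power `alpha`, with importance-sampling weights controlled by `beta`) and the Double DQN target, where the online network selects the next action and the target network evaluates it. All class and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch (hypothetical, list-based; real
    implementations use a sum-tree for O(log n) sampling)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly TD error biases sampling
        self.data = []
        self.priorities = []

    def add(self, transition, td_error=1.0):
        # Evict the oldest transition once the buffer is full.
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        # Small epsilon keeps zero-error transitions sampleable.
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

def double_dqn_target(q_online_next, q_target_next, reward, done, gamma=0.99):
    """Double DQN target: the online net picks the greedy next action,
    the target net evaluates it, which reduces overestimation bias."""
    a_star = int(np.argmax(q_online_next))
    return reward + (0.0 if done else gamma * q_target_next[a_star])
```

In training, each sampled transition's TD error would also be written back as its new priority, so surprising transitions are replayed more often; this re-prioritization step is omitted above for brevity.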
Subject
General Physics and Astronomy
Cited by
22 articles.