Affiliation:
1. College of Electrical Engineering, Sichuan University, Chengdu 610065, China
Abstract
To address the problems of safety, high repetition rates, and numerous constraints in multi-robot coverage path planning (MCPP) in unknown environments, this paper adopts the Deep Q-Network (DQN) as a core component of the method, given its powerful ability to approximate the optimal action-value function. A deduction method and several environment-handling methods are then proposed to improve the decision-making stage. The deduction method hypothesizes a movement direction for each robot, tallies the reward each robot would obtain by moving that way, and then determines the actual movement directions in combination with DQN. Accordingly, the whole algorithm is divided into two parts: offline training and online decision-making. Online decision-making relies on a sliding-view method and probability statistics to handle unknown environments of nonstandard size, and on the deduction method to improve coverage efficiency. Simulation results show that the performance of the proposed online method approaches that of an offline algorithm requiring lengthy optimization, while also being more stable. This study thus ameliorates several performance defects of current MCPP methods in unknown environments.
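The deduction step described in the abstract can be sketched as follows. This is a minimal illustrative assumption of the idea, not the paper's implementation: for each robot, every candidate direction is simulated one step ahead on a coverage grid, scored by the reward that move would earn (covering a new cell versus revisiting one), and ties are broken by a stand-in for the trained DQN's Q-value estimate. All names, grid encodings, and reward values are hypothetical.

```python
# Hypothetical sketch of a "deduction" step for coverage path planning.
# Grid cells: 0 = uncovered, 1 = covered, -1 = obstacle.
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step_reward(grid, pos, action):
    """Reward for a simulated move: +1 for covering a new cell,
    -1 for revisiting a covered cell, None if the move is invalid."""
    r, c = pos[0] + ACTIONS[action][0], pos[1] + ACTIONS[action][1]
    if not (0 <= r < len(grid) and 0 <= c < len(grid[0])) or grid[r][c] == -1:
        return None
    return 1 if grid[r][c] == 0 else -1

def deduce_action(grid, pos, q_values):
    """Pick the action with the best deduced one-step reward; use a
    stand-in Q-value (as if from a trained DQN) to break ties."""
    best = None
    for a in ACTIONS:
        reward = step_reward(grid, pos, a)
        if reward is None:
            continue  # skip moves that leave the grid or hit obstacles
        key = (reward, q_values.get(a, 0.0))
        if best is None or key > best[0]:
            best = (key, a)
    return best[1] if best else None

grid = [[1, 0, -1],
        [1, 0, 0],
        [0, 0, 0]]
# From (0, 0): "right" covers a new cell (+1), "down" revisits one (-1).
print(deduce_action(grid, (0, 0), {"down": 0.2, "right": 0.9}))  # right
```

In the paper's full method, the simulated reward would be combined with Q-values produced by the offline-trained DQN rather than the fixed dictionary used here.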
Funder
Sichuan Province Science and Technology Support Program
Subject
General Computer Science, Control and Systems Engineering
Cited by
2 articles.