A Method for Multi-AUV Cooperative Area Search in Unknown Environment Based on Reinforcement Learning

Authors:

Li Yueming 1, Ma Mingquan 1, Cao Jian 1, Luo Guobin 1, Wang Depeng 1, Chen Weiqiang 1

Affiliation:

1. National Key Laboratory of Autonomous Marine Vehicle Technology, Harbin Engineering University, Harbin 150001, China

Abstract

As an emerging direction of multi-agent collaborative control technology, multiple autonomous underwater vehicle (multi-AUV) cooperative area search plays an important role in civilian fields such as marine resource exploration and development, maritime rescue, and marine scientific expeditions, as well as in military fields such as mine countermeasures and underwater reconnaissance. As ocean exploration advances, the environments in which AUVs perform search tasks are mostly unknown and contain many uncertainties, such as obstacles, which places high demands on the autonomous decision-making capabilities of AUVs. Moreover, a single AUV has limited detection capability underwater: as the searched area expands, it cannot obtain global state information in real time and must make behavioral decisions based only on local observations, which degrades coordination between AUVs and the search efficiency of multi-AUV systems. To address these increasingly challenging search tasks, we apply multi-agent reinforcement learning (MARL) to the multi-AUV cooperative area search problem, with the aim of improving both the autonomous decision-making of individual AUVs and the collaboration among them. First, we model the search task as a decentralized partially observable Markov decision process (Dec-POMDP) and establish a search information map. Each AUV updates this map using its sonar detections and information fused from other AUVs, and makes real-time decisions based on the map, which mitigates the insufficient observations caused by the weak perception ability of AUVs in underwater environments. Second, we establish a multi-AUV cooperative area search system (MACASS) that employs a MARL-based search strategy. The system combines the AUVs into a unified entity through a distributed control approach. During the execution of search tasks, each AUV makes action decisions with the MARL-based search strategy, using its sonar detections and the information exchanged among the AUVs in the system. As a result, the AUVs gain greater decision-making autonomy and can better handle challenges such as limited detection capability and insufficient observational information.
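The abstract describes each AUV maintaining a search information map that is updated from its own sonar detections and fused with the maps received from other AUVs. The paper's actual map representation is not given here; the sketch below is a minimal, assumed grid-based version in Python, in which the grid size, the circular sonar footprint, and the element-wise-max fusion rule are illustrative choices rather than the authors' method.

    # Minimal sketch (illustrative, not the authors' implementation) of a per-AUV
    # search information map: a grid whose cells record whether they have been
    # covered, updated from sonar detections and fused between AUVs.
    import numpy as np

    class SearchInfoMap:
        def __init__(self, width: int, height: int, cell_size: float = 1.0):
            self.cell_size = cell_size
            # 0.0 = unsearched, 1.0 = searched.
            self.grid = np.zeros((height, width), dtype=np.float32)

        def update_from_sonar(self, auv_xy, sonar_range: float) -> None:
            """Mark every cell whose centre lies inside the sonar footprint as searched."""
            ys, xs = np.indices(self.grid.shape)
            cx = (xs + 0.5) * self.cell_size
            cy = (ys + 0.5) * self.cell_size
            dist = np.hypot(cx - auv_xy[0], cy - auv_xy[1])
            self.grid[dist <= sonar_range] = 1.0

        def fuse(self, other: "SearchInfoMap") -> None:
            """Element-wise max fusion: a cell counts as searched if any AUV has searched it."""
            self.grid = np.maximum(self.grid, other.grid)

        def coverage(self) -> float:
            return float(self.grid.mean())

    # Example: two AUVs search locally, then exchange and fuse their maps.
    map_a, map_b = SearchInfoMap(50, 50), SearchInfoMap(50, 50)
    map_a.update_from_sonar((10.0, 10.0), sonar_range=5.0)
    map_b.update_from_sonar((30.0, 25.0), sonar_range=5.0)
    map_a.fuse(map_b)
    print(f"coverage after fusion: {map_a.coverage():.3f}")

In such a scheme, the fused map would serve as part of each AUV's local observation for the MARL-based search policy; the policy itself (network structure, reward design) is beyond what the abstract specifies.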

Funder

National Natural Science Foundation of China

National Key Laboratory of Autonomous Marine Vehicle Technology

Publisher

MDPI AG
