Authors:
Botteghi N., Sirmacek B., Schulte R., Poel M., Brune C.
Abstract
In this research, we investigate the use of Reinforcement Learning (RL) as an effective and robust solution for exploring unknown indoor environments and reconstructing their maps. We benefit from a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in environments of growing complexity. The performance of the three RL-based path planners is assessed not only on the training environments, but also on an a priori unseen environment, to test the generalization properties of the policies. The results indicate that RL-based planners trained to maximize the coverage of the map are able to consistently explore and construct the maps of different indoor environments.
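The abstract does not give the reward formulas the paper compares, but the coverage-maximizing idea it highlights can be sketched minimally: reward the agent for each occupancy-grid cell that becomes observed between consecutive SLAM map updates. The function name, grid encoding (unknown cells marked `-1`), and scaling below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def coverage_reward(prev_map, curr_map, unknown=-1, scale=1.0):
    """Hypothetical coverage-based reward: count cells that were
    unknown in the previous map and are now observed (free or occupied)."""
    newly_seen = np.logical_and(prev_map == unknown, curr_map != unknown)
    return scale * float(np.sum(newly_seen))

# Toy example: a 3x3 grid, fully unknown, then the top row is observed.
prev = np.full((3, 3), -1)
curr = prev.copy()
curr[0, :] = 0  # three cells become known free space
```

Here `coverage_reward(prev, curr)` returns 3.0, while a step that reveals nothing new (e.g. `coverage_reward(curr, curr)`) returns 0.0, so the agent is only rewarded for enlarging the explored map.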
Cited by
7 articles.
1. AcTExplore: Active Tactile Exploration on Unknown Objects;2024 IEEE International Conference on Robotics and Automation (ICRA);2024-05-13
2. Coupling Effect of Exploration Rate and Learning Rate for Optimized Scaled Reinforcement Learning;SN Computer Science;2023-08-25
3. A Reinforcement Learning (RL)-Based Hybrid Search Method for Hidden Object Discovery using GPR;2023 IEEE International Conference on Advanced Systems and Emergent Technologies (IC_ASET);2023-04-29
4. Artificial Intelligence in Smart Logistics Cyber-Physical Systems: State-of-The-Arts and Potential Applications;IEEE Transactions on Industrial Cyber-Physical Systems;2023
5. DA-SLAM: Deep Active SLAM based on Deep Reinforcement Learning;2022 Latin American Robotics Symposium (LARS), 2022 Brazilian Symposium on Robotics (SBR), and 2022 Workshop on Robotics in Education (WRE);2022-10-18