Affiliation:
1. The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing 100190, China
Abstract
Active mapping is an important technique for mobile robots to autonomously explore and recognize indoor environments. View planning, the core of active mapping, determines both the quality of the map and the efficiency of exploration. However, most current view-planning methods focus on low-level geometric information such as point clouds and neglect the indoor objects that are important for human–robot interaction. We propose a novel View-Planning method for indoor active Sparse Object Mapping (VP-SOM). VP-SOM is the first to take into account the properties of object clusters in environments where humans and robots coexist. To balance exploration efficiency and mapping accuracy, we categorize views into global views and local views based on the object cluster. We develop a new view-evaluation function, based on objects’ information abundance and observation continuity, to select the Next-Best View (NBV). In particular, to compute the uncertainty of the sparse object model, we build an object surface occupancy probability map. Experimental results demonstrate that our view-planning method explores indoor environments and builds object maps more accurately, efficiently, and robustly.
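The abstract describes scoring candidate views by information abundance and observation continuity, with model uncertainty derived from a surface occupancy probability map. A minimal illustrative sketch of this kind of NBV pipeline is shown below; the weights, field names, and helper functions are assumptions for illustration, not the paper's actual formulation.

```python
import math

def cell_entropy(p):
    """Shannon entropy of one occupancy cell; maximal at p = 0.5,
    i.e., when the cell's surface state is most uncertain."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def map_uncertainty(occupancy_probs):
    """Total uncertainty of a surface occupancy probability map,
    summed over its cells (hypothetical aggregate, not the paper's)."""
    return sum(cell_entropy(p) for p in occupancy_probs)

def score_view(view, w_info=0.7, w_cont=0.3):
    """Hypothetical view-evaluation function: a weighted sum of
    information abundance (e.g., expected uncertainty reduction) and
    observation continuity (e.g., overlap with surfaces already seen).
    The weights are illustrative assumptions."""
    return w_info * view["info_abundance"] + w_cont * view["continuity"]

def next_best_view(candidate_views):
    """Select the candidate view with the highest evaluation score."""
    return max(candidate_views, key=score_view)
```

A view whose frustum covers many half-observed cells (occupancy probabilities near 0.5) would score high on information abundance, while continuity penalizes jumps away from the current object cluster.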
Funder
National Key Research and Development Program of China
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics and Optics, Analytical Chemistry