Abstract
Generating robust global maps of an unknown, cluttered environment through a collaborative robotic framework is challenging. We present a collaborative SLAM framework, CORB2I-SLAM, in which each participating robot carries a camera (monocular/stereo/RGB-D) and an inertial sensor to run odometry. A centralized server stores all the maps and executes processor-intensive tasks, e.g., loop closing, map merging, and global optimization. The proposed framework uses well-established Visual-Inertial Odometry (VIO) and can fall back to Visual Odometry (VO) when the measurements from the inertial sensors are noisy. The proposed system mitigates key drawbacks of odometry-based systems, such as erroneous pose estimation due to incorrect feature selection or loss of tracking under abrupt camera motion, and yields more accurate results. We perform feasibility tests on real robots and extensively validate the accuracy of CORB2I-SLAM on benchmark data sequences. We also evaluate its scalability in terms of the number of participating robots and its applicability in terms of network requirements.
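The abstract describes a client/server division of labor: each robot runs lightweight odometry on board, while the centralized server stores the maps and runs the processor-intensive stages. A minimal sketch of that split is shown below; all class and method names here are illustrative assumptions, not the actual CORB2I-SLAM API.

```python
# Hypothetical sketch of the client/server task split described above.
# Names (Keyframe, RobotClient, CentralServer) are illustrative only.
from dataclasses import dataclass


@dataclass
class Keyframe:
    robot_id: int
    pose: tuple      # pose estimated by the robot's local VIO/VO
    features: list   # feature descriptors, later used for loop detection


class RobotClient:
    """Runs lightweight visual-inertial odometry on board the robot."""

    def __init__(self, robot_id, server):
        self.robot_id = robot_id
        self.server = server

    def track_frame(self, pose, features):
        # Local odometry produces a pose estimate; the heavy work
        # (loop closing, merging, optimization) is deferred to the server.
        self.server.submit(Keyframe(self.robot_id, pose, features))


class CentralServer:
    """Stores all maps; hosts loop closing, map merging, and optimization."""

    def __init__(self):
        self.maps = {}  # robot_id -> list of received keyframes

    def submit(self, kf):
        self.maps.setdefault(kf.robot_id, []).append(kf)
        # Placeholder hooks for the processor-intensive stages:
        # detect_loop_closures(kf); merge_maps(); global_optimization()


server = CentralServer()
client = RobotClient(robot_id=0, server=server)
client.track_frame(pose=(0.0, 0.0, 0.0), features=["f1", "f2"])
```

The design point this sketch illustrates is that robots only ship compact keyframe summaries over the network, which is why the paper evaluates applicability in terms of network requirements.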
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by 9 articles.