LD-SLAM: A Robust and Accurate GNSS-Aided Multi-Map Method for Long-Distance Visual SLAM
Published: 2023-09-09
Issue: 18
Volume: 15
Page: 4442
ISSN: 2072-4292
Container-title: Remote Sensing
Short-container-title: Remote Sensing
Language: en
Author:
Li Dongdong 1, Zhang Fangbing 1, Feng Jiaxiao 1, Wang Zhijun 1, Fan Jinghui 1, Li Ye 1, Li Jing 2, Yang Tao 1
Affiliation:
1. National Engineering Laboratory for Integrated Aerospace-Ground-Ocean Big Data Application Technology, Shaanxi Key Laboratory of Speech and Image Information Processing, School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
2. School of Telecommunications Engineering, Xidian University, Xi’an 710126, China
Abstract
Continuous, robust, and precise localization is pivotal for the autonomous operation of robots and aircraft in complex environments, particularly when GNSS (global navigation satellite system) signals are unavailable. However, commonly used approaches such as visual odometry and inertial navigation systems suffer from error accumulation, which limits their navigation and positioning performance. Deploying these systems on unmanned aerial vehicle platforms also raises the challenge of creating and exploring large-scale maps. This study introduces a system for long-range, multi-map visual SLAM (simultaneous localization and mapping) with monocular cameras using either pinhole or fisheye lens models. We formulate a graph optimization model that fuses GNSS data with visual information through multi-sensor navigation and positioning technology. To improve accuracy and robustness in large-scale map generation, we partition SLAM maps according to their health status. Because an excessive number of discrete maps wastes resources and slows map switching, we propose a multi-map matching and fusion algorithm that combines geographic positioning with visual data. Furthermore, we present a multi-map visual SLAM online localization algorithm that manages and coordinates geographic maps built in different temporal and spatial domains. We built a quadcopter-based test system and collected an aerial image dataset spanning several kilometers. Our experiments demonstrate the framework’s robustness and accuracy in long-distance navigation; for instance, our GNSS-assisted multi-map SLAM achieves an average accuracy of 1.5 m within a 20 km range during unmanned aerial vehicle flights.
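As a rough, illustrative companion to the GNSS-visual graph optimization described in the abstract, the short Python sketch below shows one common way a GNSS position fix can enter the same least-squares cost as visual reprojection residuals. It is not the authors' implementation: the function names (gnss_residual, fused_cost), the assumption of an ENU-aligned world frame, and the per-axis sigma values are illustrative assumptions only.

    # Hypothetical sketch, not the paper's code: a GNSS fix added as an extra
    # least-squares term next to visual reprojection residuals, assuming the
    # SLAM world frame is already aligned to an ENU geographic frame.
    import numpy as np

    def gnss_residual(t_wc, p_gnss, sigma):
        # Whitened difference between the estimated camera position t_wc and
        # the GNSS fix p_gnss (both 3-vectors in the same ENU frame),
        # weighted by the per-axis standard deviation sigma.
        return (t_wc - p_gnss) / sigma

    def fused_cost(positions, gnss_fixes, visual_residuals,
                   sigma=np.array([1.5, 1.5, 3.0])):
        # Sum of squared visual residuals plus squared GNSS residuals.
        # positions: frame id -> estimated 3D camera position
        # gnss_fixes: frame id -> GNSS position measurement
        # visual_residuals: iterable of precomputed reprojection residuals
        cost = sum(float(r @ r) for r in visual_residuals)
        for fid, p_gnss in gnss_fixes.items():
            r = gnss_residual(positions[fid], p_gnss, sigma)
            cost += float(r @ r)
        return cost

    # Toy usage: two keyframes, one of which has a GNSS fix.
    positions = {0: np.array([0.0, 0.0, 100.0]), 1: np.array([10.0, 0.2, 100.5])}
    gnss = {1: np.array([10.5, 0.0, 101.0])}
    vis = [np.array([0.3, -0.1]), np.array([0.05, 0.2])]
    print(fused_cost(positions, gnss, vis))

In a full system the two kinds of terms would be minimized jointly over poses and landmarks with a nonlinear least-squares solver, rather than merely evaluated as in this toy example; the GNSS terms anchor the trajectory geographically and bound the drift of the purely visual constraints.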
Funder
National Natural Science Foundation of China
Subject
General Earth and Planetary Sciences
Cited by
2 articles.