Authors:
Zheng Zengrui, Su Kainan, Lin Shifeng, Fu Zhiquan, Yang Chenguang
Abstract
Purpose
Visual simultaneous localization and mapping (SLAM) suffers from limitations such as sensitivity to lighting changes and limited measurement accuracy. Effectively fusing information from multiple modalities to address these limitations has therefore become a key research focus. This study aims to provide a comprehensive review of the development of vision-based SLAM (including purely visual SLAM) for navigation and pose estimation, with a specific focus on techniques for integrating multiple modalities.
Design/methodology/approach
This paper first introduces the mathematical models and framework development of visual SLAM. It then presents methods for improving accuracy in visual SLAM by fusing different spatial and semantic features, and examines research advances in multi-sensor fusion for vision-based SLAM under both loosely coupled and tightly coupled schemes. Finally, it analyzes the limitations of current vision-based SLAM and offers predictions for future advancements.
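The mathematical model referred to above is conventionally the state-space formulation of SLAM; a sketch in common notation (the symbols below follow the usual textbook convention, not necessarily this paper's exact notation):

```latex
\begin{aligned}
x_k &= f(x_{k-1}, u_k) + w_k &&\text{(motion model)} \\
z_{k,j} &= h(y_j, x_k) + v_{k,j} &&\text{(observation model)}
\end{aligned}
```

Here $x_k$ is the robot pose at time $k$, $u_k$ the control or odometry input, $y_j$ the $j$-th landmark, $z_{k,j}$ the observation of landmark $j$ from pose $x_k$, and $w_k$, $v_{k,j}$ are process and measurement noise. SLAM then amounts to estimating $x_k$ and $y_j$ jointly from the $u_k$ and $z_{k,j}$.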
Findings
The combination of vision-based SLAM and deep learning has significant development potential. Loosely coupled and tightly coupled multi-sensor fusion each have advantages and disadvantages, and the most suitable algorithm should be chosen based on the specific application scenario. In the future, vision-based SLAM will evolve to better address challenges such as resource-constrained platforms and long-term mapping.
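To make the loose/tight distinction concrete: in a loosely coupled design, each sensor pipeline produces its own state estimate and the estimates are fused afterward, whereas a tightly coupled design feeds raw measurements into one joint estimator. A minimal, hypothetical sketch of the loosely coupled case is covariance-weighted fusion of two independent pose estimates (the function name and the 2D example values are illustrative, not from the paper):

```python
import numpy as np

def loosely_coupled_fuse(pose_vis, cov_vis, pose_imu, cov_imu):
    """Fuse two independent pose estimates (e.g., from a visual SLAM
    front end and IMU dead reckoning) by covariance-weighted averaging.
    Each subsystem estimates a full pose before fusion, which is what
    distinguishes a loosely coupled design from a tightly coupled one,
    where raw measurements enter a single joint estimator."""
    # Gain pulling the IMU estimate toward the visual estimate,
    # weighted by the relative uncertainties.
    K = cov_imu @ np.linalg.inv(cov_vis + cov_imu)
    fused_pose = pose_imu + K @ (pose_vis - pose_imu)
    fused_cov = (np.eye(len(pose_imu)) - K) @ cov_imu
    return fused_pose, fused_cov

# Illustrative 2D position estimates: the IMU is less certain along x,
# so the fused x leans toward the visual estimate.
p_vis = np.array([1.0, 2.0]); P_vis = np.diag([0.04, 0.04])
p_imu = np.array([1.2, 1.9]); P_imu = np.diag([0.16, 0.04])
p, P = loosely_coupled_fuse(p_vis, P_vis, p_imu, P_imu)
# p → [1.04, 1.95]; fused covariance is smaller than either input's.
```

A tightly coupled system would instead put visual feature observations and IMU preintegration terms into one optimization or filter, which is more accurate but more complex and computationally demanding, consistent with the trade-off noted above.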
Originality/value
This review traces the development of vision-based SLAM with a focus on advances in multimodal fusion, allowing readers to quickly grasp the progress and current status of research in this field.