Abstract
Purpose
This study aims to present a visual-inertial simultaneous localization and mapping (SLAM) method for accurate positioning and navigation of mobile robots when global positioning system (GPS) signals fail because of buildings, trees and other obstacles.
Design/methodology/approach
In this framework, a feature extraction method distributes features evenly across the image in texture-less scenes. The constant-brightness assumption of optical flow is refined, and features are tracked by optical flow to enhance the stability of the system. The camera data and inertial measurement unit (IMU) data are tightly coupled to estimate the pose through nonlinear optimization.
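As an illustration of the feature-handling step described above, the following minimal sketch (not the authors' implementation) spreads corner detection over a regular grid so that low-texture regions still receive features, then tracks them with pyramidal Lucas-Kanade optical flow via OpenCV. The grid size, per-cell budget and tracker window are assumed values chosen for demonstration, and the standard constant-brightness assumption is used here, whereas the paper refines it.

```python
# Illustrative sketch only: grid-bucketed corner detection plus pyramidal
# Lucas-Kanade tracking. Parameters (grid, per_cell, winSize, maxLevel) are
# assumptions for demonstration, not values from the paper.
import cv2
import numpy as np

def detect_uniform_features(gray, grid=(8, 6), per_cell=10):
    """Detect corners cell by cell so no single textured region uses the whole budget."""
    h, w = gray.shape
    ch, cw = h // grid[1], w // grid[0]
    points = []
    for gy in range(grid[1]):
        for gx in range(grid[0]):
            cell = gray[gy * ch:(gy + 1) * ch, gx * cw:(gx + 1) * cw]
            corners = cv2.goodFeaturesToTrack(cell, per_cell, 0.01, 7)
            if corners is None:
                continue
            corners = corners.reshape(-1, 2)
            corners[:, 0] += gx * cw  # shift back to full-image coordinates
            corners[:, 1] += gy * ch
            points.append(corners)
    if not points:
        return np.empty((0, 2), np.float32)
    return np.concatenate(points).astype(np.float32)

def track_features(prev_gray, cur_gray, prev_pts):
    """Track features with pyramidal LK and drop points whose status flag is 0."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts.reshape(-1, 1, 2), None,
        winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1) == 1
    return prev_pts[ok], cur_pts.reshape(-1, 2)[ok]
```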
Findings
The method runs successfully on a mobile robot, reliably extracting and tracking features in low-texture environments. The end-to-end error is 1.375 m over a total trajectory length of 762 m. The authors achieve better relative pose error, scale accuracy and CPU load than ORB-SLAM2 on the EuRoC data sets.
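For scale, the reported end-to-end error corresponds to a relative drift of roughly

$\frac{1.375\,\text{m}}{762\,\text{m}} \approx 0.18\%$

of the trajectory length.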
Originality/value
The main contribution of this study is the theoretical derivation and experimental application of a new visual-inertial SLAM method with excellent accuracy and stability in weakly textured scenes.
Subject
Industrial and Manufacturing Engineering, Computer Science Applications, Control and Systems Engineering
References (27 articles)
1. Lucas-Kanade 20 years on: a unifying framework; International Journal of Computer Vision, 2004
2. The EuRoC micro aerial vehicle datasets; International Journal of Robotics Research, 2016
3. Visual-inertial direct SLAM, 2016
4. Large-scale direct SLAM with stereo cameras, 2015
5. Inertial aided dense & semi-dense methods for robust direct visual odometry, 2016
Cited by 10 articles.