Affiliations:
1. School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
2. Shaanxi Province Key Laboratory of Flight Control and Simulation Technology, Xi’an 710072, China
Abstract
This paper proposes a deep learning-based visual-inertial SLAM technique for accurate autonomous localization of mobile robots in environments containing dynamic objects, addressing both the limited real-time performance of deep learning algorithms and the poor robustness of purely visual geometric methods. First, a non-blocking model is designed to extract semantic information from images. A motion probability hierarchy model then assigns prior motion probabilities to feature points; for image frames without semantic information, a motion probability propagation model determines these priors instead. Furthermore, because inertial measurements are unaffected by dynamic objects, inertial measurement information is integrated to improve the estimation accuracy of feature point motion probabilities. Finally, an adaptive threshold-based motion probability estimation method is proposed, and localization accuracy is improved by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance.
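The final steps of the pipeline, propagating prior motion probabilities to frames that lack semantic output and rejecting feature points above an adaptive threshold, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold rule (mean plus one standard deviation), the neutral default prior of 0.5, and the function names are all assumptions chosen for clarity.

```python
import numpy as np

def propagate_motion_probs(prev_probs, matched_ids, default=0.5):
    """For a frame without semantic information, inherit each matched
    feature's prior motion probability from the previous frame.
    Unmatched features receive a neutral default (hypothetical scheme)."""
    return np.array([prev_probs.get(fid, default) for fid in matched_ids])

def adaptive_threshold_filter(motion_probs, k=1.0):
    """Keep feature points whose motion probability does not exceed an
    adaptive threshold; here the threshold is mean + k * std of the
    current frame's probabilities (an illustrative choice, not the
    paper's exact formulation). Returns a boolean keep-mask and the
    threshold used."""
    probs = np.asarray(motion_probs, dtype=float)
    threshold = probs.mean() + k * probs.std()
    return probs <= threshold, threshold

# Example: three likely-static features and one likely-dynamic feature.
priors = {1: 0.1, 2: 0.2, 3: 0.15, 4: 0.9}
probs = propagate_motion_probs(priors, matched_ids=[1, 2, 3, 4])
keep, thr = adaptive_threshold_filter(probs)
# Only the static features survive; the high-probability point is culled.
```

In the paper's full system these probabilities would additionally be refined with inertial measurements before filtering; the sketch covers only the propagation and thresholding stages.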
Funders:
National Natural Science Foundation of China
Aeronautical Science Foundation of China
Cited by: 2 articles.