Abstract
The problem of simultaneous localization and mapping (SLAM) in mobile robotics remains a crucial issue for ensuring the safe navigation of autonomous vehicles. One approach to the SLAM problem and to odometry estimation relies on perception sensors, leading to visual SLAM (V-SLAM) and visual odometry solutions. Camera-based computer vision approaches are widespread for these purposes, but LiDAR is a more reliable technology for obstacle detection, and its application could be broadened. However, in most cases, definitive results are not achieved, or the methods suffer from a high computational load that limits real-time operation. Deep learning techniques have proven their validity in many fields, including environment perception for autonomous vehicles. This paper proposes an approach to estimating the ego-vehicle position from 3D LiDAR data, exploiting the capabilities of a system based on machine learning models and analyzing its possible limitations. The models were evaluated on two real datasets. The results lead to the conclusion that CNN-based odometry can guarantee local consistency, but it loses accuracy due to cumulative errors when evaluating the global trajectory, so global consistency is not guaranteed.
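The loss of global consistency mentioned above stems from how odometry works: frame-to-frame pose estimates are chained, so small per-frame errors compound along the trajectory even when each individual estimate is accurate. A minimal sketch of this effect, assuming hypothetical SE(2) relative poses with Gaussian per-frame noise (illustrative values only, not the paper's actual model or data):

```python
import numpy as np

def se2(dx, dy, dtheta):
    """Homogeneous 2D transform for a relative pose (dx, dy, dtheta)."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

def integrate(rel_poses):
    """Chain frame-to-frame estimates into a global trajectory (list of x, y)."""
    T = np.eye(3)
    traj = [T[:2, 2].copy()]
    for dx, dy, dth in rel_poses:
        T = T @ se2(dx, dy, dth)  # cumulative composition: errors never cancel out
        traj.append(T[:2, 2].copy())
    return np.array(traj)

n = 100
# Ground-truth motion: straight line, 1 m forward per frame.
gt = [(1.0, 0.0, 0.0)] * n

# Hypothetical "CNN-like" estimates: locally accurate (cm-level noise per frame).
rng = np.random.default_rng(0)
noisy = [(1.0 + rng.normal(0, 0.01),
          rng.normal(0, 0.01),
          rng.normal(0, 0.002)) for _ in range(n)]

# End-point drift grows with trajectory length, although each step is near-correct:
drift = np.linalg.norm(integrate(gt)[-1] - integrate(noisy)[-1])
```

Each relative estimate is locally consistent (errors on the order of a centimeter), yet the composed global trajectory drifts without bound, which is the behavior the abstract attributes to CNN-based odometry in the absence of loop closure or a global correction step.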
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)
Cited by 1 article.