Abstract
Mapping the environment is necessary for navigation, planning and manipulation. In this paper, a data-in, decision-out fusion framework is introduced for a 2D LIDAR and a 3D ultrasonic sensor to achieve three-dimensional mapping without an expensive 3D LiDAR scanner or visual processing. Sensor models are proposed for the two sensors and used for map updating. Furthermore, 2D/3D map representations are discussed for our fusion approach. We also compare different probabilistic fusion methods and discuss criteria for choosing an appropriate method. Experiments are carried out with a real ground-robot platform in an indoor environment. The resulting 2D and 3D maps demonstrate that our approach captures the surroundings in greater detail. Sensor fusion provides a better estimate of the environment and the ego-pose while lowering the required resources: a single additional low-cost 3D ultrasonic sensor enriches the robot’s perception of the environment. This is especially important for robust, lightweight robots with limited resources.
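The probabilistic fusion the abstract describes is commonly realized as an occupancy-grid update in log-odds form, where independent evidence from each sensor model is summed per cell. A minimal sketch of this idea follows; the inverse-sensor probabilities used here are hypothetical placeholders for illustration, not values or methods taken from the paper.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def fuse_cell(l_prev, p_lidar, p_ultra):
    """Fuse independent occupancy evidence from a 2D LIDAR and a 3D
    ultrasonic sensor into one grid cell's log-odds, assuming a
    uniform prior (log-odds 0) and conditionally independent sensors."""
    return l_prev + logit(p_lidar) + logit(p_ultra)

# Example: both (hypothetical) inverse sensor models report the cell
# as likely occupied for the current measurement.
l = 0.0                        # uniform prior
l = fuse_cell(l, 0.7, 0.6)     # one fused measurement step
p = 1.0 - 1.0 / (1.0 + math.exp(l))  # back to occupancy probability
```

Working in log-odds keeps the per-cell update a cheap addition, which matches the paper's emphasis on lightweight platforms with limited resources.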
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry