Abstract
Over the past few years, many impressive lidar-inertial SLAM systems have been developed and perform well in static scenes. However, most real-world tasks take place in dynamic environments, and improving accuracy and robustness under such conditions remains a challenge. In this paper, we propose a semantic lidar-inertial SLAM approach for dynamic scenes that combines a point cloud semantic segmentation network with the lidar-inertial SLAM system LIO-mapping. We introduce an attention mechanism into the PointConv network to build an attention weight function that improves its ability to predict fine details. The semantic segmentation results of the lidar point clouds provide point-wise labels for each lidar frame. After the dynamic objects are filtered out, the refined global map of the lidar-inertial SLAM system is clearer, and the estimated trajectory achieves higher precision. We conduct experiments on the UrbanNav dataset, whose challenging highway sequences contain a large number of moving cars and pedestrians. The results demonstrate that, compared with other SLAM systems, the trajectory accuracy is improved to varying degrees.
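To make the dynamic-object filtering step concrete, the following is a minimal sketch in Python, assuming NumPy and a hypothetical set of dynamic class IDs; the function name, label values, and array layout are illustrative assumptions, not the paper's actual implementation.

import numpy as np

# Assumed IDs for dynamic classes (e.g., car, pedestrian); the real label set
# depends on the semantic segmentation network's training data.
DYNAMIC_CLASSES = {10, 30}

def filter_dynamic_points(points, labels):
    """Keep only the points whose per-point semantic label is static.

    points: (N, 4) array of x, y, z, intensity for one lidar frame
    labels: (N,) array of per-point class IDs from the segmentation network
    """
    static_mask = ~np.isin(labels, list(DYNAMIC_CLASSES))
    return points[static_mask]

# Usage: a frame with three points, the second labeled as a car (dynamic).
frame = np.array([[1.0, 2.0, 0.1, 0.5],
                  [5.0, 1.0, 0.2, 0.7],
                  [9.0, 3.0, 0.0, 0.4]])
labels = np.array([40, 10, 40])  # 40 = assumed "road" (static) label
static_points = filter_dynamic_points(frame, labels)  # two points remain

Only the retained static points are passed on to the lidar-inertial mapping back end, which is what yields the cleaner global map described above.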
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science