An Onboard Point Cloud Semantic Segmentation System for Robotic Platforms
Author:
Wang Fei 1, Yang Yujie 1, Zhou Jingchun 1, Zhang Weishi 1
Affiliation:
1. College of Information Science and Technology, Dalian Maritime University, Dalian 116000, China
Abstract
Point clouds are an important modality for robots to perceive their environments, and can be acquired by mobile robots with LiDAR sensors or by underwater robots with sonar sensors. Real-time semantic segmentation of point clouds on onboard edge devices is therefore essential for robots to understand their surroundings. In this paper, we propose an onboard point cloud semantic segmentation system for robotic platforms that resolves the conflict between achieving high segmentation accuracy and the limited computational resources of onboard devices. Our system takes a raw sequence of point clouds as input and outputs semantic segmentation results for each frame as well as a reconstructed semantic map of the environment. At the core of our system are a transformer-based hierarchical feature extraction module and a fusion module, both implemented with sparse tensor techniques to speed up inference. The per-frame predictions are accumulated according to Bayes' rule to generate a global semantic map. Experimental results on the SemanticKITTI dataset show that our system achieves a +2.2% mIoU improvement and an 18× speed-up compared with state-of-the-art methods. Our system is able to process 2.2 M points per second on a Jetson AGX Xavier (NVIDIA, Santa Clara, USA), demonstrating its applicability to various robotic platforms.
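To make the Bayes'-rule accumulation step concrete, the following is a minimal sketch of how per-frame class probabilities could be fused into a per-voxel global semantic map. It is not the authors' implementation: the voxel resolution, class count, dictionary-based voxel hashing, and log-space update under a uniform prior are assumptions introduced here for illustration.

```python
import numpy as np

NUM_CLASSES = 20   # assumed; SemanticKITTI defines 19 semantic classes plus "unlabeled"
VOXEL_SIZE = 0.1   # assumed voxel resolution in metres

# Global map: voxel index -> accumulated per-class log-probabilities.
global_map = {}

def voxel_key(point):
    """Quantise a 3D point (in the global frame) to an integer voxel index."""
    return tuple(np.floor(point / VOXEL_SIZE).astype(int))

def fuse_frame(points_world, class_probs, eps=1e-6):
    """Recursive Bayes update: multiply per-frame class likelihoods into the map.

    points_world: (N, 3) points already transformed into the global frame.
    class_probs:  (N, NUM_CLASSES) softmax outputs of the segmentation network.
    Working in log space turns the product of likelihoods into a sum.
    """
    log_probs = np.log(class_probs + eps)
    for p, lp in zip(points_world, log_probs):
        k = voxel_key(p)
        if k in global_map:
            global_map[k] += lp        # Bayes update, assuming a uniform prior
        else:
            global_map[k] = lp.copy()  # first observation of this voxel

def map_labels():
    """Read out the most probable class per voxel after all frames are fused."""
    return {k: int(np.argmax(v)) for k, v in global_map.items()}
```

Normalising the accumulated log-probabilities of a voxel recovers its posterior over classes; taking the argmax yields the semantic label assigned to that voxel in the reconstructed global map.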
Funder
National Natural Science Foundation of China; Dalian Excellent Youth Talent Fund Project; Fundamental Research Funds for the Central Universities
Subject
Electrical and Electronic Engineering, Industrial and Manufacturing Engineering, Control and Optimization, Mechanical Engineering, Computer Science (miscellaneous), Control and Systems Engineering