Abstract
3D LiDAR has become an indispensable sensor in autonomous driving vehicles. In LiDAR-based 3D point cloud semantic segmentation, most voxel-based 3D segmentors cannot efficiently capture large amounts of context information, which results in limited receptive fields and limits their performance. To address this problem, a sparse voxel-based attention network is introduced for 3D LiDAR point cloud semantic segmentation, termed SVASeg, which captures rich context information between voxels through sparse voxel-based multi-head attention (SMHA). Traditional multi-head attention cannot be applied directly to the non-empty sparse voxels. To this end, a hash table is built according to the incremental order of voxel coordinates to look up the non-empty neighboring voxels of each sparse voxel. The sparse voxels are then partitioned into groups, each corresponding to a local region. Afterwards, position embedding, multi-head attention, and feature fusion are performed within each group to capture and aggregate context information. Based on the SMHA module, SVASeg operates directly on the non-empty voxels while maintaining a computational overhead comparable to convolutional methods. Extensive experiments on the SemanticKITTI and nuScenes datasets demonstrate the superiority of SVASeg.
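The grouping step described in the abstract can be sketched in pure Python: a hash table maps each non-empty voxel coordinate to its index so neighbors can be looked up without touching empty space, and voxels falling in the same coarse window form one group for local attention. The function names and the window size are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def build_hash_table(coords):
    """Map each non-empty voxel coordinate (x, y, z) to its row index.

    Empty voxels never appear in the table, so lookups only ever
    return non-empty neighbors.
    """
    return {tuple(c): i for i, c in enumerate(coords)}

def group_by_window(coords, window=4):
    """Partition non-empty voxels into local regions (coarse windows).

    Each group collects the indices of non-empty voxels whose
    coordinates fall in the same window; position embedding and
    multi-head attention would then be computed within each group.
    """
    groups = defaultdict(list)
    for i, (x, y, z) in enumerate(coords):
        key = (x // window, y // window, z // window)
        groups[key].append(i)
    return dict(groups)

# Toy example: four non-empty voxels in a sparse grid.
coords = [(0, 0, 0), (1, 2, 3), (5, 5, 5), (6, 4, 7)]
table = build_hash_table(coords)     # e.g. table[(5, 5, 5)] == 2
groups = group_by_window(coords)     # two windows, two voxels each
```

Because only non-empty voxels enter the hash table and the groups, the attention cost scales with the number of occupied voxels rather than the full voxel grid, which is what keeps the overhead comparable to sparse convolution.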
Funder
National Natural Science Foundation of China
Subject
General Earth and Planetary Sciences
Cited by 18 articles.