Fast Context-Awareness Encoder for LiDAR Point Semantic Segmentation
Published: 2023-07-26
Journal: Electronics
Volume: 12, Issue: 15, Page: 3228
ISSN: 2079-9292
Language: en
Authors:
Du Tingyu 1,2 (ORCID), Ni Jingxiu 3, Wang Dongxing 4 (ORCID)
Affiliations:
1. School of Mechanical Electronic & Information Engineering, China University of Mining & Technology-Beijing, Beijing 100083, China
2. LU’AN Chemical Group, Changzhi 046204, China
3. Comprehensive Experimental Teaching Demonstration Center of Engineering, Beijing Union University, Beijing 100101, China
4. Department of Computer and Information Security Management, Fujian Police College, Fuzhou 350000, China
Abstract
A LiDAR sensor is a valuable tool for environmental perception, as it generates 3D point cloud data with reflectivity and position information by reflecting laser beams. However, it cannot provide the meaning of each point cloud cluster, so many studies focus on identifying semantic information about point clouds. This paper explores point cloud segmentation and presents a lightweight convolutional network called the Fast Context-Awareness Encoder (FCAE), which obtains semantic information about point cloud clusters at different levels. The features surrounding each point are extracted as local features through a local context awareness network, then combined with global features, which are highly abstracted from the local features, to obtain more accurate semantic segmentation of discrete points in space. The proposed algorithm has been compared and verified against other algorithms on the SemanticKITTI benchmark and achieves state-of-the-art performance. Because it captures fine-grained features along the z-axis, the algorithm shows higher prediction accuracy for certain types of objects. Moreover, training and validation times are short, and the algorithm can meet the high real-time requirements of 3D perception tasks.
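The article itself provides no code; as an illustration only, here is a minimal NumPy sketch of the general local/global feature-fusion pattern the abstract describes — per-point local features, a global feature abstracted from them by symmetric pooling, and the two concatenated for per-point classification. All layer sizes, weights, and the single-MLP "local branch" are hypothetical stand-ins, not the authors' FCAE architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w, b):
    """Shared per-point linear layer with ReLU."""
    return np.maximum(x @ w + b, 0.0)

# Toy point cloud: N points with (x, y, z, reflectivity).
N, C_in, C_local, n_classes = 128, 4, 16, 5
points = rng.standard_normal((N, C_in)).astype(np.float32)

# "Local" branch: per-point features (a stand-in for a real
# neighborhood-aware local context encoder).
w1 = 0.1 * rng.standard_normal((C_in, C_local)).astype(np.float32)
b1 = np.zeros(C_local, dtype=np.float32)
local_feats = mlp(points, w1, b1)            # (N, C_local)

# "Global" branch: abstract the local features with order-invariant
# max-pooling over all points.
global_feat = local_feats.max(axis=0)        # (C_local,)

# Fuse: attach the global context to every point's local feature.
fused = np.concatenate(
    [local_feats, np.broadcast_to(global_feat, (N, C_local))], axis=1
)                                            # (N, 2 * C_local)

# Per-point classification head producing one semantic label per point.
w2 = 0.1 * rng.standard_normal((2 * C_local, n_classes)).astype(np.float32)
b2 = np.zeros(n_classes, dtype=np.float32)
labels = (fused @ w2 + b2).argmax(axis=1)    # (N,)
print(fused.shape, labels.shape)
```

The max-pooling step is the usual trick for making the global feature invariant to point ordering; concatenating it back onto each point is what lets a per-point classifier see both fine local structure and scene-level context.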
Funder
National Key Research and Development Program of China
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
References (33 articles):
1. Farsoni, S., Rizzi, J., Ufondu, G.N., and Bonfe, M. (2022). Planning Collision-Free Robot Motions in a Human-Robot Shared Workspace via Mixed Reality and Sensor-Fusion Skeleton Tracking. Electronics, 11.
2. Lawin, F.J., Danelljan, M., Tosteberg, P., Bhat, G., Khan, F.S., and Felsberg, M. (2017). International Conference on Computer Analysis of Images and Patterns, Springer.
3. Qi, C.R., Su, H., Mo, K., and Guibas, L.J. (2017, January 21–26). PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.
4. Qi, C.R., Yi, L., Su, H., and Guibas, L.J. (2017, January 4–9). PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA.
5. Zhou, Y., and Tuzel, O. (2018, January 18–23). VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.
Cited by: 2 articles.