RAAFNet: Reverse Attention Adaptive Fusion Network for Large-Scale Point Cloud Semantic Segmentation
Published: 2024-08-12
Volume: 12
Issue: 16
Page: 2485
ISSN: 2227-7390
Container-title: Mathematics
Language: en
Author:
Wang Kai 1, Zhang Huanhuan 1,2
Affiliation:
1. School of Electronics and Information, Xi’an Polytechnic University, Xi’an 710048, China
2. School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
Abstract
Point cloud semantic segmentation is essential for comprehending and analyzing scenes. However, performing semantic segmentation on large-scale point clouds presents challenges, including high memory demands, a lack of structured data, and the absence of topological information. This paper presents a novel method, the Reverse Attention Adaptive Fusion network (RAAFNet), for segmenting large-scale point clouds. RAAFNet consists of a reverse attention encoder–decoder module, an adaptive fusion module, and a local feature aggregation module. The reverse attention encoder–decoder module extracts point cloud features at different scales. The adaptive fusion module enhances fine-grained representation within multi-resolution feature maps. Furthermore, a local aggregation classifier is introduced, which aggregates the features of neighboring points onto the center point in order to leverage contextual information and enhance the classifier’s perceptual capability. Finally, the predicted labels are generated. Notably, the method extracts point cloud features across different dimensions and produces highly accurate segmentation results. Experiments on the Semantic3D dataset achieve an overall accuracy of 89.9% and a mIoU of 74.4%.
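The local aggregation described in the abstract — pooling the features of neighboring points onto each center point — can be sketched as a k-nearest-neighbor gather followed by a pooling step. The NumPy function below is a minimal illustration of this general pattern, not the paper's implementation; the function name, the brute-force distance computation, and the choice of max-pooling are all assumptions for the sketch.

```python
import numpy as np

def local_feature_aggregation(points, features, k=4):
    """Aggregate each point's k-nearest-neighbor features onto the
    center point via max-pooling over the neighborhood.

    points:   (n, 3) or (n, d) array of point coordinates
    features: (n, c) array of per-point features
    returns:  (n, c) array of aggregated features
    """
    # Pairwise squared distances between all points (brute force;
    # real large-scale pipelines would use a KD-tree or random sampling).
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    # Indices of the k nearest neighbors (the point itself is included).
    knn_idx = np.argsort(d2, axis=1)[:, :k]
    # Gather neighbor features: shape (n, k, c).
    neighbor_feats = features[knn_idx]
    # Max-pool the neighborhood onto the center point: shape (n, c).
    return neighbor_feats.max(axis=1)
```

With two well-separated pairs of points and k=2, each point's aggregated feature is the maximum over itself and its one close neighbor, which is how such a pooling step injects local context into the per-point representation before classification.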
Funder
Key Research and Development Program of Shaanxi Province
Xi’an Beilin District Science and Technology Plan Project
Preferential Funding for Post Doctoral Research Program in Zhejiang Province
Science and Technology Foundation of Xi’an for Program of University Science and Technology Scholar Serving Enterprise
Innovation Capability Support Program of Shaanxi