Language-Level Semantics-Conditioned 3D Point Cloud Segmentation
Published: 2024-06-28
Volume: 16
Issue: 13
Page: 2376
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Short-container-title: Remote Sensing
Author: Liu Bo 1, Zeng Hui 2,3 (ORCID), Dong Qiulei 1 (ORCID), Hu Zhanyi 1
Affiliation:
1. Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2. Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
3. Shunde Innovation School, University of Science and Technology Beijing, Foshan 528399, China
Abstract
In this work, a language-level Semantics-Conditioned framework for 3D Point cloud segmentation, called SeCondPoint, is proposed. Language-level semantics are introduced to condition both the modeling of the point feature distribution and the generation of pseudo-features, and a feature–geometry-based Mixup approach is further proposed to facilitate the distribution learning. Since a large number of point features can be generated from the learned distribution thanks to the semantics-conditioned modeling, any existing segmentation network can be embedded into the proposed framework to boost its performance. In addition, the proposed framework has the inherent advantage of handling novel classes, which existing segmentation networks cannot do. Extensive experimental results on two public datasets demonstrate that three typical segmentation networks achieve significant improvements over their original performances after enhancement by the proposed framework in the conventional 3D segmentation task. Two benchmarks are further introduced for a new zero-shot 3D segmentation task, and the results on them also validate the proposed framework.
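To make the idea of semantics-conditioned pseudo-feature generation more concrete, the sketch below shows one way such a component could look in PyTorch. This is not the authors' implementation: the class names, layer sizes, the Gaussian noise input, and the simple feature-level Mixup pairing are all assumptions for illustration; the paper's geometry-aware Mixup pairing is not reproduced here.

```python
# Minimal sketch (assumed architecture, not the SeCondPoint implementation):
# a generator that produces pseudo point features conditioned on class word
# embeddings, plus an illustrative feature-level Mixup helper.
import torch
import torch.nn as nn


class SemanticsConditionedGenerator(nn.Module):
    """Generates pseudo point features conditioned on language-level semantics."""

    def __init__(self, embed_dim=300, noise_dim=128, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + noise_dim, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, feat_dim),
        )

    def forward(self, class_embeddings, noise):
        # Concatenate class word embeddings with noise: different noise vectors
        # yield diverse pseudo-features for the same class, including classes
        # with no labeled points (the zero-shot setting).
        return self.net(torch.cat([class_embeddings, noise], dim=-1))


def feature_mixup(feat_a, feat_b, alpha=0.4):
    """Convexly combines two point features with a Beta-sampled coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * feat_a + (1.0 - lam) * feat_b


# Usage sketch: sample pseudo-features for a batch of class embeddings
# (e.g., word2vec/GloVe vectors of class names), which could then be used
# to train the classifier head of any existing segmentation network.
gen = SemanticsConditionedGenerator()
class_emb = torch.randn(8, 300)
noise = torch.randn(8, 128)
pseudo_feats = gen(class_emb, noise)  # shape (8, 256)
```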
Funder
National Natural Science Foundation of China; Scientific and Technological Innovation Foundation of Foshan