Affiliation:
1. Visual Analysis and Perception Lab, Aalborg University, 9000 Aalborg, Denmark
Abstract
We propose a fully automatic annotation scheme that takes a raw 3D point cloud with a set of fitted CAD models as input and outputs convincing point-wise labels that can be used as cheap training data for point cloud segmentation. Compared with manual annotations, we show that our automatic labels are accurate while drastically reducing the annotation time and eliminating the need for manual intervention or dataset-specific parameters. Our labeling pipeline outputs semantic classes and soft point-wise object scores, which can either be binarized into standard one-hot-encoded labels, thresholded into weak labels with ambiguous points left unlabeled, or used directly as soft labels during training. We evaluate the label quality and segmentation performance of PointNet++ on a dataset of real industrial point clouds and Scan2CAD, a public dataset of indoor scenes. Our results indicate that reducing supervision in areas that are more difficult to label automatically is beneficial compared with the conventional approach of naively assigning a hard “best guess” label to every point.
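The three ways of consuming the soft point-wise object scores described above (binarizing into one-hot labels, thresholding into weak labels with ambiguous points left unlabeled, or using them directly as soft targets) can be illustrated with a minimal sketch. This is an assumed, hypothetical illustration, not the authors' implementation; the threshold value and ignore index are placeholders.

```python
import numpy as np

def hard_labels(scores):
    """Binarize soft scores into one-hot 'best guess' labels via argmax."""
    return np.argmax(scores, axis=1)

def weak_labels(scores, threshold=0.8, ignore_index=-1):
    """Keep only confident points; ambiguous points get ignore_index
    so a segmentation loss can skip them during training."""
    labels = np.argmax(scores, axis=1)
    confident = np.max(scores, axis=1) >= threshold
    labels[~confident] = ignore_index
    return labels

# Toy example: 3 points, 2 classes, each row a soft class distribution.
scores = np.array([[0.95, 0.05],
                   [0.55, 0.45],   # ambiguous point
                   [0.10, 0.90]])

print(hard_labels(scores))  # -> [0 0 1], every point gets a class
print(weak_labels(scores))  # -> [0 -1 1], ambiguous point left unlabeled
# Alternatively, `scores` can be passed unchanged as soft targets
# to a cross-entropy loss that accepts class probabilities.
```

The weak-label variant is what motivates the paper's conclusion: points that are hard to label automatically are excluded from supervision rather than given a possibly wrong hard label.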