Affiliation:
1. Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences, Changchun 130033, China
2. University of Chinese Academy of Sciences, Beijing 101408, China
Abstract
Point cloud registration is a critical problem because it underpins many 3D vision tasks. With the rise of deep learning, many researchers have leveraged deep neural networks to address point cloud registration. However, many of these methods remain sensitive to partial overlap and to differences in density distribution. For this reason, we propose a robust point cloud registration method based on rotation-invariant features and a sparse-to-dense matching strategy. First, raw points are encoded as superpoints with a network combining KPConv and FPN, and their associated features are extracted. Point pair features of these superpoints are then computed and embedded into a transformer to learn hybrid features, which makes the approach invariant to rigid transformations. Subsequently, a sparse-to-dense matching strategy is designed to solve the registration problem: superpoint correspondences are obtained via sparse matching and then propagated to local dense points and, further, to global dense points, yielding a series of candidate transformation parameters. Finally, features enhanced by spatial consistency are repeatedly fed into the sparse-to-dense matching module to rebuild reliable correspondences, and the optimal transformation is re-estimated for the final alignment. Our experiments show that the proposed method effectively improves the inlier ratio and registration recall, and that it outperforms other point cloud registration methods on 3DMatch and ModelNet40.
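To illustrate the rotation-invariant ingredient mentioned in the abstract, the following minimal Python sketch computes the standard 4D point pair feature (PPF) between two oriented points. This is the generic PPF definition, not the authors' released code; the function names, the use of NumPy, and the toy usage at the end are illustrative assumptions, and the paper's actual feature construction and transformer embedding may differ in detail.

# Minimal sketch (assumed, not the authors' implementation): standard point pair
# feature (PPF) between two points with normals. All components depend only on
# relative geometry, hence are invariant to rigid transformations.
import numpy as np

def angle(a, b, eps=1e-8):
    # Unsigned angle between two vectors, guarded against zero-length inputs.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def point_pair_feature(p_i, n_i, p_j, n_j):
    # PPF = (||d||, angle(n_i, d), angle(n_j, d), angle(n_i, n_j)), d = p_j - p_i.
    d = p_j - p_i
    return np.array([np.linalg.norm(d),
                     angle(n_i, d),
                     angle(n_j, d),
                     angle(n_i, n_j)])

# Toy check: the feature is unchanged when both points and normals are rotated.
rng = np.random.default_rng(0)
p_i, p_j = rng.normal(size=3), rng.normal(size=3)
n_i, n_j = rng.normal(size=3), rng.normal(size=3)
theta = np.pi / 5
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
f1 = point_pair_feature(p_i, n_i, p_j, n_j)
f2 = point_pair_feature(R @ p_i, R @ n_i, R @ p_j, R @ n_j)
assert np.allclose(f1, f2)  # rotation invariance holds

Because such features are unaffected by the unknown rigid transformation between the two scans, attaching them to superpoints gives the transformer geometry cues that remain stable under rotation, which is what the hybrid features in the abstract rely on.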
Funder
Department of Science and Technology of Jilin Province