Abstract
Acquiring high-fidelity 3D models from real-world scans is challenging. Existing shape-completion methods fail to recover fine object details or to learn complex point distributions. To address this problem, we propose two transformer-based point-cloud completion networks, which extract object shape features via self-attention (SA) and multi-resolution (MR) encoding respectively, within a coarse-to-fine strategy. Specifically, in the first stage, the model extracts features of the incomplete point cloud with the self-attention and multi-resolution encoders and predicts the missing part with a set of parametric surface elements. In the second stage, it merges the coarse-grained prediction with the input point cloud by iterative farthest point sampling (IFPS) to obtain a complete but coarse-grained point cloud. Finally, in the third stage, the distribution of the complete but coarse point cloud is improved by a point-refiner network based on a point-cloud transformer (PCT). Comparisons with state-of-the-art methods and ablation experiments on the ShapeNet-Part dataset both verify the effectiveness of our method.
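The merging step above relies on farthest point sampling, a standard greedy subsampling routine for point clouds. The following is a minimal NumPy sketch of plain farthest point sampling for illustration only; the paper's IFPS merging procedure and network architecture are not reproduced here, and the function name is our own.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k points, each farthest from those already chosen.

    points: (n, 3) array of 3D coordinates.
    Returns a (k, 3) array of selected points.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    selected = [int(rng.integers(n))]  # random starting point
    # Distance from every point to its nearest already-selected point.
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))  # farthest remaining point
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[selected]
```

Because each new point maximizes its distance to the current sample, the selected subset covers the shape more evenly than uniform random sampling, which is why it is commonly used to downsample merged point clouds.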
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering