A Light Multi-View Stereo Method with Patch-Uncertainty Awareness
Authors:
Liu Zhen 1, Wu Guangzheng 1, Xie Tao 1, Li Shilong 1, Wu Chao 1, Zhang Zhiming 2, Zhou Jiali 1
Affiliations:
1. College of Science, Zhejiang University of Technology, Hangzhou 310023, China
2. Rept Battero, Wenzhou 325058, China
Abstract
Multi-view stereo (MVS) methods use image sequences captured from different viewpoints to generate a 3D point cloud of a scene. However, existing approaches often overlook coarse-stage features, which degrades the final reconstruction accuracy, and using a fixed inverse-depth sampling range for all pixels can further harm depth estimation. To address these challenges, we present a novel learning-based MVS method that incorporates attention mechanisms and an adaptive depth sampling strategy. First, in the feature extraction stage, we propose a lightweight feature pyramid network augmented with a coarse-feature-enhancement module; this module fuses features through channel and spatial attention, enriching the contextual features that are crucial for the initial depth estimation. Second, for depth refinement, we introduce a patch-uncertainty-based depth sampling strategy that dynamically sets per-pixel depth sampling ranges within the GRU-based optimization process. Furthermore, we apply an edge detection operator to the reference image's feature map and integrate the resulting edge features into the iterative cost volume construction, improving reconstruction accuracy. Finally, we evaluate our method on the DTU and Tanks and Temples benchmarks, demonstrating low GPU memory consumption and reconstruction quality competitive with other learning-based MVS methods.
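The adaptive, patch-uncertainty-based depth sampling described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the use of local patch standard deviation as the uncertainty proxy, the patch size, the scaling factor `k`, and the minimum half-range are all assumptions made for the example.

```python
import numpy as np

def patch_uncertainty_ranges(depth, patch=3, k=2.0, min_half_range=0.01):
    """Compute per-pixel depth sampling ranges from patch uncertainty.

    depth: (H, W) array of current depth estimates.
    Returns (lower, upper) arrays bounding the next sampling interval.
    """
    H, W = depth.shape
    pad = patch // 2
    padded = np.pad(depth, pad, mode="edge")

    # Local standard deviation over each patch serves as an
    # uncertainty proxy: flat regions -> low std, edges -> high std.
    std = np.empty_like(depth)
    for i in range(H):
        for j in range(W):
            std[i, j] = padded[i:i + patch, j:j + patch].std()

    # Scale the uncertainty into a sampling half-range, clamped from
    # below so confident pixels still get a small refinement interval.
    half = np.maximum(k * std, min_half_range)
    return depth - half, depth + half
```

In this sketch, pixels in smooth regions receive narrow sampling intervals (cheap, precise refinement), while pixels near depth discontinuities receive wider intervals, rather than applying one fixed range to every pixel.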
Funder
“Pioneer” and “Leading Goose” R&D Program of Zhejiang Province