MVP-Stereo: A Parallel Multi-View Patchmatch Stereo Method with Dilation Matching for Photogrammetric Application
Published: 2024-03-09
Issue: 6
Volume: 16
Page: 964
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Short-container-title: Remote Sensing
Author:
Yan Qingsong 1, Kang Junhua 2, Xiao Teng 3,4, Liu Haibing 1, Deng Fei 1,4
Affiliation:
1. School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
2. School of Geological Engineering and Geomatics, Chang’an University, Xi’an 710064, China
3. School of Computer Science, Hubei University of Technology, Wuhan 430068, China
4. Wuhan Tianjihang Information Technology Co., Ltd., Wuhan 430010, China
Abstract
Multi-view stereo plays an important role in 3D reconstruction, but it suffers from low reconstruction efficiency and struggles to reconstruct areas with low or repeated texture. To address this, we propose MVP-Stereo, a novel multi-view parallel patchmatch stereo method. MVP-Stereo employs two key techniques. First, it uses a multi-view dilated ZNCC to handle low and repeated texture: the matching window size is adjusted dynamically based on image variance, and only a subset of pixels is used to compute matching costs, so the larger window does not increase computational complexity. Second, it leverages multi-scale parallel patchmatch to reconstruct the depth map of each image efficiently, implemented in CUDA with random initialization, multi-scale parallel spatial propagation, random refinement, and a coarse-to-fine strategy. Experiments on the Strecha dataset, the ETH3D benchmark, and a UAV dataset demonstrate that MVP-Stereo achieves reconstruction quality competitive with state-of-the-art methods at the highest reconstruction efficiency. For example, MVP-Stereo outperforms COLMAP in reconstruction quality while taking only around 30% of its reconstruction time, and it reaches around 90% of the quality of ACMMP and SD-MVS in only around 20% of their time. In summary, MVP-Stereo efficiently reconstructs high-quality point clouds and meets the requirements of several photogrammetric applications, such as emergency relief, infrastructure inspection, and environmental monitoring.
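The dilated-ZNCC idea in the abstract can be illustrated with a minimal NumPy sketch: sampling every d-th pixel enlarges the window's spatial coverage without adding samples, and a simple variance test decides when to widen it. This is an assumption-laden illustration, not the paper's CUDA implementation; the helper names (`dilated_zncc`, `pick_dilation`) and the threshold `var_thresh` are hypothetical.

```python
import numpy as np

def dilated_zncc(ref, src, cx, cy, half=3, dilation=2):
    """ZNCC between two patches centered at (cx, cy), sampling every
    `dilation`-th pixel: wider coverage, same number of samples."""
    offsets = range(-half * dilation, half * dilation + 1, dilation)
    a = np.array([ref[cy + dy, cx + dx] for dy in offsets for dx in offsets], dtype=np.float64)
    b = np.array([src[cy + dy, cx + dx] for dy in offsets for dx in offsets], dtype=np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    # Zero variance (textureless patch) gives an undefined score; return 0.
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def pick_dilation(ref, cx, cy, half=3, var_thresh=25.0):
    """Hypothetical variance rule: widen the window (dilation 2) in
    low-texture regions, keep it compact (dilation 1) elsewhere."""
    patch = ref[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(np.float64)
    return 1 if patch.var() >= var_thresh else 2
```

Identical patches score 1.0 and flat (textureless) patches fall back to 0.0; in a full patchmatch loop the score would be aggregated across views and maximized over candidate depth/normal hypotheses.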
Funder
National Natural Science Foundation of China
Hubei Key Research and Development Project
Postdoctoral Fellowship Program of CPSF
Natural Science Basic Research Program of Shaanxi
References: 67 articles