Abstract
Neural radiance field (NeRF) is an emerging view synthesis method that samples points in a three-dimensional (3D) space and estimates their existence and color probabilities. A disadvantage of NeRF is its long training time, since it samples many 3D points. Moreover, if points are sampled from occluded regions or from space where an object is unlikely to exist, the rendering quality of NeRF degrades. These issues can be resolved by estimating the geometry of the 3D scene. This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF. To this end, the proposed method estimates the surface of a 3D object using the depth images of the training set and performs sampling only near the estimated surface. To obtain depth information for a novel view, the paper proposes a 3D point cloud generation method and a simple method for refining the depth projected from the point cloud. Experimental results show that the proposed near-surface sampling NeRF framework significantly improves rendering quality compared with the original NeRF and three state-of-the-art NeRF methods. In addition, the proposed framework can significantly accelerate the training of a NeRF model.
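The core idea of near-surface sampling can be sketched as restricting each ray's sample distances to a narrow band around a per-pixel surface depth, rather than spreading them across the whole ray. A minimal NumPy sketch, assuming a projected depth map is already available; the function name, `margin` parameter, and uniform in-band spacing are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def near_surface_samples(depth, n_samples=16, margin=0.05):
    """Draw sample distances only within a band around the estimated
    surface depth for each pixel's ray.

    depth     : (H, W) array of per-pixel surface depths, e.g. projected
                from a point cloud (hypothetical input here)
    n_samples : number of samples per ray
    margin    : half-width of the band around the surface
    Returns an (H, W, n_samples) array of sample depths.
    """
    lo = depth[..., None] - margin               # near edge of band, (H, W, 1)
    hi = depth[..., None] + margin               # far edge of band, (H, W, 1)
    u = np.linspace(0.0, 1.0, n_samples)         # evenly spaced positions in [0, 1]
    return lo + (hi - lo) * u                    # samples confined to the band

# Example: a flat surface 2.0 units away, sampled in a ±0.05 band
d = np.full((4, 4), 2.0)
t = near_surface_samples(d)
assert t.min() >= 1.95 and t.max() <= 2.05
```

Because all `n_samples` points land near the surface, the same per-ray sample budget covers the region that actually matters, which is what allows both the quality gain and the training speed-up described above.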
Funder
Ministry of Science and ICT, South Korea
BK21 FOUR Project
Institute for Basic Science
Ministry of Trade, Industry and Energy
SKKU-SMC and SKKU-KBSMC Future Convergence Research Program
SKKU seed
Publisher
Springer Science and Business Media LLC
References (29 articles)
1. Alan W (1993) 3D Computer Graphics. Addison-Wesley, Boston
2. Boss M, Braun R, Jampani V, et al (2021) NeRD: neural reflectance decomposition from image collections. In: IEEE/CVF international conference on computer vision, pp 12664–12674. https://doi.org/10.1109/ICCV48922.2021.01245
3. Cernea D (2020) OpenMVS: multi-view stereo reconstruction library. https://cdcseacave.github.io/openMVS
4. Chen SE, Williams L (1993) View interpolation for image synthesis. In: Proceedings of the conference on computer graphics and interactive techniques, pp 279–288. https://doi.org/10.1145/166117.166153
5. Deng K, Liu A, Zhu JY, et al (2022) Depth-supervised NeRF: fewer views and faster training for free. In: IEEE/CVF conference on computer vision and pattern recognition, pp 12872–12881. https://doi.org/10.1109/CVPR52688.2022.01254