Affiliation:
1. Gwangju Institute of Science and Technology, Gwangju, South Korea
2. Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
Abstract
Physically based differentiable rendering allows an accurate light transport simulation to be differentiated with respect to its input, i.e., scene parameters, and it enables inferring scene parameters from target images, e.g., photos or synthetic images, via iterative optimization. However, this inverse Monte Carlo rendering inherits the fundamental problem of Monte Carlo integration, i.e., noise, resulting in slow optimization convergence. An appealing approach to addressing such noise is exploiting an image denoiser to improve convergence. Unfortunately, directly adopting existing image denoisers designed for ordinary rendering scenarios can drive the optimization into undesirable local minima due to denoising bias. This motivates us to design a new image denoiser specialized for inverse rendering. Unlike existing image denoisers, we conduct our denoising by considering the target images, i.e., information specific to inverse rendering. For our target-aware denoising, we determine the denoising weights via a linear regression technique using the target. We demonstrate through a diverse set of tests that our denoiser enables inverse rendering optimization to infer scene parameters robustly.
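The abstract only states that denoising weights are obtained by linear regression against the target image; the exact estimator is not given here. The following minimal NumPy sketch is therefore an assumed instantiation, not the authors' method: for each pixel window it fits a per-channel affine model mapping the noisy rendering to the target by least squares, then evaluates the model at the window center. The function name, window size, and the small ridge term eps are illustrative placeholders.

import numpy as np

def target_aware_denoise(noisy, target, win=5, eps=1e-4):
    # Illustrative sketch only (assumption, not the paper's estimator):
    # per-window affine regression of the target onto the noisy image.
    # Inputs are assumed to be float arrays of shape (H, W, C).
    assert noisy.shape == target.shape
    h, w, c = noisy.shape
    r = win // 2
    pn = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode="reflect")
    pt = np.pad(target, ((r, r), (r, r), (0, 0)), mode="reflect")
    out = np.empty_like(noisy)
    for y in range(h):
        for x in range(w):
            n = pn[y:y + win, x:x + win].reshape(-1, c)  # noisy window pixels
            t = pt[y:y + win, x:x + win].reshape(-1, c)  # target window pixels
            mn, mt = n.mean(0), t.mean(0)
            var = ((n - mn) ** 2).mean(0) + eps          # per-channel variance (ridge-regularized)
            cov = ((n - mn) * (t - mt)).mean(0)          # per-channel covariance with the target
            a = cov / var                                 # regression slope
            b = mt - a * mn                               # regression intercept
            out[y, x] = a * noisy[y, x] + b               # evaluate model at the center pixel
    return out

Under this assumed setup, the denoised image (rather than the raw noisy estimate) would be compared against the target to form the optimization loss; how the denoiser is actually integrated into the differentiable rendering loop is likewise a detail the abstract does not specify.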
Publisher
Association for Computing Machinery (ACM)