Affiliation:
1. Nanjing University, State Key Lab for Novel Software Technology, China
Abstract
The problem of reconstructing spatially‐varying BRDFs from RGB images has been studied for decades. Researchers have faced a dilemma: either higher quality at the cost of cumbersome camera and light calibration, or greater convenience without complex setups at the expense of quality. We address this challenge by introducing a two‐branch network that learns the lighting effects in images. The two branches, referred to as Light‐known and Light‐aware, differ in their need for light information. The Light‐aware branch is guided by the Light‐known branch to learn to discern light effects and surface reflectance properties, but without relying on light positions. Both branches are trained on a synthetic dataset; during testing on real‐world cases without calibration, only the Light‐aware branch is used. To make more effective use of varied light conditions, we employ gated recurrent units (GRUs) to fuse the features extracted from different images. The two branches mutually benefit when multiple inputs are provided. We present reconstruction results on both synthetic and real‐world examples, demonstrating high quality while remaining lightweight in comparison to previous methods.
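To make the GRU‐based fusion concrete, the sketch below shows one plausible way to sequentially merge features extracted from several images of the same surface captured under different lights. This is an illustrative assumption, not the authors' code: the module name, feature dimensions, and the use of flat feature vectors (rather than spatial feature maps) are hypothetical choices for brevity.

```python
# Hypothetical sketch (not the paper's implementation): fusing per-image
# features with a GRU, assuming a PyTorch setup and flat feature vectors.
import torch
import torch.nn as nn


class GRUFeatureFusion(nn.Module):
    """Fuse features extracted from images taken under different lights."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 256):
        super().__init__()
        # One GRU step per input image; the hidden state accumulates
        # lighting cues across the inputs.
        self.gru = nn.GRUCell(feat_dim, hidden_dim)

    def forward(self, per_image_feats: torch.Tensor) -> torch.Tensor:
        # per_image_feats: (num_images, batch, feat_dim),
        # one slice per lighting condition.
        num_images, batch, _ = per_image_feats.shape
        h = per_image_feats.new_zeros(batch, self.gru.hidden_size)
        for i in range(num_images):
            # Sequentially absorb each image's features into the hidden state.
            h = self.gru(per_image_feats[i], h)
        return h  # fused feature, e.g. fed to an SVBRDF decoder


# Usage: three images of the same surface under different, uncalibrated lights.
feats = torch.randn(3, 2, 256)       # (images, batch, feature dim)
fused = GRUFeatureFusion()(feats)    # (batch, hidden dim)
print(fused.shape)                   # torch.Size([2, 256])
```

Because the GRU consumes one image at a time, the same module handles a variable number of inputs, which matches the abstract's claim that the branches benefit from additional images when they are available.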
Subject
Computer Graphics and Computer-Aided Design