Affiliation:
1. State Key Laboratory of CAD&CG, Zhejiang University, Hangzhou, China
2. Zhejiang Lab, Hangzhou, China
3. University of Hong Kong, Hong Kong, China
4. State Key Laboratory of CAD&CG, Zhejiang University, Hangzhou, China
Abstract
The generation of global illumination in real time has been a long-standing challenge in the graphics community, particularly in dynamic scenes with complex illumination. Recent neural rendering techniques have shown great promise by utilizing neural networks to represent the illumination of scenes and then decoding the final radiance. However, incorporating object parameters into the representation may limit their effectiveness in handling fully dynamic scenes. This work presents a neural rendering approach, dubbed LightFormer, which can generate realistic global illumination for fully dynamic scenes, including dynamic lighting, materials, cameras, and animated objects, in real time. Inspired by classic many-lights methods, the proposed approach focuses on the neural representation of light sources in the scene rather than the entire scene, leading to better overall generalizability. The neural prediction is achieved by leveraging the virtual point lights and shading clues for each light. Specifically, two stages are explored. In the light encoding stage, each light generates a set of virtual point lights in the scene, which are then encoded into an implicit neural light representation, along with screen-space shading clues such as visibility. In the light gathering stage, a pixel-light attention mechanism composites all light representations for each shading point. Given the geometry and material representation, in tandem with the composed light representations of all lights, a lightweight neural network predicts the final radiance. Experimental results demonstrate that the proposed LightFormer can yield reasonable and realistic global illumination in fully dynamic scenes with real-time performance.
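To make the two-stage pipeline described in the abstract concrete, the sketch below illustrates, under assumed shapes and toy components, how per-light representations could be composited for each shading point with a pixel-light attention and then decoded into radiance by a small network. The embedding width, the random two-layer decoder, and all variable names are illustrative assumptions, not the paper's actual architecture.

# A minimal NumPy sketch of the two-stage idea: per-light neural
# representations are composited per shading point via pixel-light
# attention, and a small network maps the result to radiance.
# All dimensions and the tiny decoder below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Stage 1 (stand-in): suppose each of L lights has spawned virtual point
# lights that an encoder has already turned into a D-dimensional
# implicit light representation.
L, D = 8, 32                              # number of lights, embedding width (assumed)
light_repr = rng.normal(size=(L, D))

# Stage 2: light gathering via pixel-light attention. Each shading point
# carries a query built from its geometry/material features plus
# screen-space cues such as visibility.
P = 4                                     # shading points in this toy batch
pixel_query = rng.normal(size=(P, D))

scores = pixel_query @ light_repr.T / np.sqrt(D)   # (P, L) pixel-light affinities
weights = softmax(scores, axis=-1)                 # attention over lights
gathered = weights @ light_repr                    # (P, D) composited representation

# A lightweight decoder (here a random two-layer MLP) predicts RGB radiance.
W1, b1 = rng.normal(size=(D, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 3)) * 0.1, np.zeros(3)
hidden = np.maximum(gathered @ W1 + b1, 0.0)       # ReLU
radiance = hidden @ W2 + b2                        # (P, 3) predicted radiance

print(radiance.shape)                              # (4, 3)

In the actual method, the light representations would be produced by the light encoding stage from the virtual point lights and screen-space cues, and both the attention and the decoder would be learned rather than random.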
Funder
National Key R&D Program of China
NSFC
Key R&D Program of Zhejiang Province
Publisher
Association for Computing Machinery (ACM)