LightFormer: Light-Oriented Global Neural Rendering in Dynamic Scene

Authors:

Ren Haocheng (1), Huo Yuchi (1,2), Peng Yifan (3), Sheng Hongtao (1), Xue Weidong (1), Huang Hongxiang (1), Lan Jingzhen (1), Wang Rui (1), Bao Hujun (4)

Affiliations:

1. State Key Laboratory of CAD&CG, Zhejiang University, Hangzhou, China

2. Zhejiang Lab, Hangzhou, China

3. University of Hong Kong, Hong Kong, China

4. State Key Laboratory of CAD&CG, Zhejiang University, Hangzhou, China

Abstract

The generation of global illumination in real time has been a long-standing challenge in the graphics community, particularly in dynamic scenes with complex illumination. Recent neural rendering techniques have shown great promise by utilizing neural networks to represent the illumination of scenes and then decoding the final radiance. However, incorporating object parameters into the representation may limit their effectiveness in handling fully dynamic scenes. This work presents a neural rendering approach, dubbed LightFormer, that can generate realistic global illumination for fully dynamic scenes, including dynamic lighting, materials, cameras, and animated objects, in real time. Inspired by classic many-lights methods, the proposed approach focuses on the neural representation of light sources in the scene rather than the entire scene, leading to better overall generalizability. The neural prediction is achieved by leveraging virtual point lights and shading clues for each light. Specifically, two stages are explored. In the light encoding stage, each light generates a set of virtual point lights in the scene, which are then encoded into an implicit neural light representation, along with screen-space shading clues such as visibility. In the light gathering stage, a pixel-light attention mechanism composites the representations of all lights for each shading point. Given the geometry and material representation, in tandem with the composited light representations, a lightweight neural network predicts the final radiance. Experimental results demonstrate that the proposed LightFormer can yield reasonable and realistic global illumination in fully dynamic scenes with real-time performance.
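
To make the two-stage pipeline described in the abstract concrete, below is a minimal, self-contained PyTorch sketch. It is not the authors' implementation: the module names (LightEncoder, PixelLightAttention), all feature dimensions and tensor layouts, the mean pooling over virtual point lights, and the use of standard multi-head attention are assumptions chosen only to illustrate how a light encoding stage and a light gathering stage could fit together.

```python
# Illustrative sketch of the two-stage idea only; sizes and layouts are assumed.
import torch
import torch.nn as nn


class LightEncoder(nn.Module):
    """Stage 1 (assumed): encode the VPLs spawned by one light source, together
    with per-pixel screen-space shading clues (e.g. visibility), into an
    implicit neural light representation."""

    def __init__(self, vpl_dim=9, clue_dim=4, feat_dim=64):
        super().__init__()
        self.vpl_mlp = nn.Sequential(
            nn.Linear(vpl_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        self.clue_mlp = nn.Linear(clue_dim, feat_dim)

    def forward(self, vpls, clues):
        # vpls:  (L, V, vpl_dim)   L lights, V virtual point lights per light
        # clues: (L, P, clue_dim)  per-pixel shading clues for each light
        light_feat = self.vpl_mlp(vpls).mean(dim=1)             # (L, F), pooled over VPLs
        return light_feat.unsqueeze(1) + self.clue_mlp(clues)   # (L, P, F)


class PixelLightAttention(nn.Module):
    """Stage 2 (assumed): composite the per-light representations for every
    shading point; the query comes from per-pixel G-buffer features."""

    def __init__(self, gbuf_dim=12, feat_dim=64):
        super().__init__()
        self.to_q = nn.Linear(gbuf_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.radiance_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 3)
        )

    def forward(self, gbuffer, light_feats):
        # gbuffer:     (P, gbuf_dim)  geometry/material features per shading point
        # light_feats: (L, P, F)      per-light representations from stage 1
        q = self.to_q(gbuffer).unsqueeze(1)       # (P, 1, F) one query per pixel
        kv = light_feats.permute(1, 0, 2)         # (P, L, F) lights as keys/values
        fused, _ = self.attn(q, kv, kv)           # (P, 1, F) composited light feature
        return self.radiance_head(fused.squeeze(1))  # (P, 3) predicted RGB radiance


if __name__ == "__main__":
    L, V, P = 3, 16, 1024  # lights, VPLs per light, shading points (assumed)
    enc, gather = LightEncoder(), PixelLightAttention()
    radiance = gather(torch.randn(P, 12),
                      enc(torch.randn(L, V, 9), torch.randn(L, P, 4)))
    print(radiance.shape)  # torch.Size([1024, 3])
```

The point of the sketch is the factorization the abstract describes: keys and values are per-light representations, so an arbitrary and changing number of dynamic lights can be composited per shading point, while a lightweight head decodes the final radiance from the fused feature.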

Funder

National Key R&D Program of China

NSFC

Key R&D Program of Zhejiang Province

Publisher

Association for Computing Machinery (ACM)

