Affiliation:
1. State Key Lab for Novel Software Technology, Nanjing University, China
2. Huawei Cloud Computing Technologies Co., Ltd.
Abstract
Real‐time global illumination is a highly desirable yet challenging task in computer graphics. Existing methods that solve this problem well are mostly built on some form of precomputed data (caches), and the final results depend significantly on the quality of those caches. In this paper, we propose a learning‐based pipeline that can reproduce a wide range of complex light transport phenomena, including high‐frequency glossy interreflection, at any viewpoint in real time (> 90 frames per second), using information from imperfect caches stored at the barycentre of every triangle in a 3D scene. These caches are generated in a precomputation stage by a physically based offline renderer at a low sampling rate (e.g., 32 samples per pixel) and a low image resolution (e.g., 64×16). At runtime, a deep radiance reconstruction method based on a dedicated neural network is invoked to reconstruct a high‐quality radiance map with full global illumination at any viewpoint from these imperfect caches, without introducing noise or aliasing artifacts. To further improve reconstruction accuracy, a new feature fusion strategy is designed in the network to better exploit useful content from cheap G‐buffers generated at runtime. The proposed framework ensures high‐quality rendering of moderate‐sized scenes with full global illumination effects, at the cost of reasonable precomputation time. We demonstrate the effectiveness and efficiency of the proposed pipeline by comparing it with alternative strategies, including real‐time path tracing and precomputed radiance transfer.
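To make the described feature fusion concrete, the sketch below shows one plausible way to combine features decoded from the per‐triangle radiance caches with cheap runtime G‐buffers (albedo, normal, depth) before radiance reconstruction. This is not the authors' released code; the module name FeatureFusionBlock, the channel counts, and the concatenation‐plus‐convolution fusion are illustrative assumptions.

import torch
import torch.nn as nn

class FeatureFusionBlock(nn.Module):
    """Fuse cheap G-buffer features with features decoded from the
    per-triangle radiance caches (illustrative sketch, not the paper's code)."""
    def __init__(self, cache_channels: int = 32, gbuffer_channels: int = 7,
                 out_channels: int = 32):
        super().__init__()
        # Encode the raw G-buffer into the same feature space as the cache features.
        self.gbuffer_encoder = nn.Sequential(
            nn.Conv2d(gbuffer_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Fuse by concatenation followed by a 1x1 convolution (assumed strategy).
        self.fuse = nn.Sequential(
            nn.Conv2d(cache_channels + out_channels, out_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, cache_feat: torch.Tensor, gbuffer: torch.Tensor) -> torch.Tensor:
        g = self.gbuffer_encoder(gbuffer)
        return self.fuse(torch.cat([cache_feat, g], dim=1))

if __name__ == "__main__":
    # Toy shapes: 256x256 image, 7 G-buffer channels (3 albedo + 3 normal + 1 depth),
    # 32 cache feature channels. All values are hypothetical.
    fusion = FeatureFusionBlock()
    cache_feat = torch.randn(1, 32, 256, 256)
    gbuffer = torch.randn(1, 7, 256, 256)
    fused = fusion(cache_feat, gbuffer)
    print(fused.shape)  # torch.Size([1, 32, 256, 256])

In such a design, the fused features would then feed the reconstruction network that predicts the final radiance map; the actual architecture in the paper may differ.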
Funder
National Natural Science Foundation of China
Natural Science Foundation of Jiangsu Province
Subject
Computer Graphics and Computer-Aided Design
Cited by
1 article.