Affiliations:
1. State Key Laboratory of CAD&CG, Zhejiang University, China
2. City University of Hong Kong, China
3. The Chinese University of Hong Kong, Shenzhen, and Point Spread Technology, China
Abstract
Immersive user experiences in live VR/AR performances require fast and accurate free-view rendering of the performers. Existing methods are mainly based on Pixel-aligned Implicit Functions (PIFu) or Neural Radiance Fields (NeRF). However, PIFu-based methods usually fail to produce photorealistic view-dependent textures, while NeRF-based methods typically lack local geometric accuracy and are computationally heavy (e.g., requiring dense sampling of 3D points, additional fine-tuning, or pose estimation). In this work, we propose a novel generalizable method, named SAILOR, to create high-quality human free-view videos from very sparse RGBD live streams. To produce view-dependent textures while preserving locally accurate geometry, we integrate PIFu and NeRF so that they work synergistically: the PIFu is conditioned on depth, and view-dependent textures are then rendered through NeRF. Specifically, we propose a novel network, named SRONet, for this hybrid representation. SRONet can handle unseen performers without fine-tuning. In addition, a neural blending-based ray interpolation approach, a tree-based voxel-denoising scheme, and a parallel computing pipeline are incorporated to reconstruct and render live free-view videos at 10 fps on average. To evaluate rendering performance, we construct a real-captured RGBD benchmark of 40 performers. Experimental results show that SAILOR outperforms existing human reconstruction and performance capture methods.
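As a rough illustration of the depth-conditioned occupancy/radiance hybrid the abstract describes, the PyTorch sketch below queries a PIFu-style occupancy head and a NeRF-style view-dependent color head from shared pixel-aligned RGBD features. The module names, feature dimensions, and fusion scheme here are illustrative assumptions, not the actual SRONet architecture.

```python
# Minimal sketch of a depth-conditioned hybrid occupancy/radiance query.
# All module names, feature sizes, and the fusion scheme are illustrative
# assumptions; the real SRONet architecture is defined in the paper itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridOccupancyRadianceField(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Shared 2D encoder over RGBD (4 channels): the depth channel gives the
        # pixel-aligned features metric-scale geometry cues (PIFu conditioned on depth).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # PIFu-style geometry head: pixel-aligned feature + point depth -> occupancy.
        self.occ_head = nn.Sequential(
            nn.Linear(feat_dim + 1, 128), nn.ReLU(), nn.Linear(128, 1),
        )
        # NeRF-style appearance head: feature + view direction -> view-dependent RGB.
        self.rgb_head = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(), nn.Linear(128, 3),
        )

    def forward(self, rgbd, pts_uv, pts_z, view_dirs):
        # rgbd: (B,4,H,W); pts_uv: (B,N,2) projections in [-1,1];
        # pts_z: (B,N,1) point depths; view_dirs: (B,N,3) unit ray directions.
        feat_map = self.encoder(rgbd)                                  # (B,C,H,W)
        # Bilinearly sample pixel-aligned features at each point's projection.
        feats = F.grid_sample(feat_map, pts_uv.unsqueeze(2),
                              align_corners=True).squeeze(-1)          # (B,C,N)
        feats = feats.permute(0, 2, 1)                                 # (B,N,C)
        occ = torch.sigmoid(self.occ_head(torch.cat([feats, pts_z], -1)))
        rgb = torch.sigmoid(self.rgb_head(torch.cat([feats, view_dirs], -1)))
        # A volume renderer can treat occupancy as opacity and alpha-composite
        # rgb along each ray, concentrating samples near the occupied surface.
        return occ, rgb

# Example query: one RGBD view, 1024 sampled points along camera rays.
net = HybridOccupancyRadianceField()
occ, rgb = net(torch.rand(1, 4, 256, 256),
               torch.rand(1, 1024, 2) * 2 - 1,
               torch.rand(1, 1024, 1),
               F.normalize(torch.randn(1, 1024, 3), dim=-1))
```

The division of labor mirrors the abstract: the depth-conditioned occupancy branch supplies locally accurate geometry, while the direction-conditioned color branch supplies view-dependent appearance.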
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
1 article.
1. Rip-NeRF: Anti-aliasing Radiance Fields with Ripmap-Encoded Platonic Solids. In ACM SIGGRAPH Conference Papers '24, 2024-07-13.