SAILOR: Synergizing Radiance and Occupancy Fields for Live Human Performance Capture

Author:

Zheng Dong1, Ke Xu2, Yaoan Gao1, Qilin Sun3, Hujun Bao1, Weiwei Xu1, Rynson W. H. Lau2

Affiliation:

1. State Key Laboratory of CAD&CG, Zhejiang University, China

2. City University of Hong Kong, China

3. The Chinese University of Hong Kong, Shenzhen and Point Spread Technology, China

Abstract

Immersive user experiences in live VR/AR performances require fast and accurate free-view rendering of the performers. Existing methods are mainly based on Pixel-aligned Implicit Functions (PIFu) or Neural Radiance Fields (NeRF). However, while PIFu-based methods usually fail to produce photorealistic view-dependent textures, NeRF-based methods typically lack local geometry accuracy and are computationally heavy (e.g., dense sampling of 3D points, additional fine-tuning, or pose estimation). In this work, we propose a novel generalizable method, named SAILOR, to create high-quality human free-view videos from very sparse RGBD live streams. To produce view-dependent textures while preserving locally accurate geometry, we integrate PIFu and NeRF such that they work synergistically, by conditioning the PIFu on depth and then rendering view-dependent textures through NeRF. Specifically, we propose a novel network, named SRONet, for this hybrid representation. SRONet can handle unseen performers without fine-tuning. In addition, a neural blending-based ray interpolation approach, a tree-based voxel-denoising scheme, and a parallel computing pipeline are incorporated to reconstruct and render live free-view videos at 10 fps on average. To evaluate the rendering performance, we construct a real-captured RGBD benchmark from 40 performers. Experimental results show that SAILOR outperforms existing human reconstruction and performance capture methods.
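The abstract's hybrid representation pairs an occupancy field (geometry, in the PIFu tradition) with a radiance field (view-dependent color, in the NeRF tradition). The paper's SRONet and neural-blending details are not given here, but the alpha-compositing step shared by such occupancy/radiance hybrids can be sketched generically. This is a minimal illustration, not SAILOR's implementation: the function name and the use of per-sample occupancy directly as opacity are assumptions.

```python
import numpy as np

def volume_render(occupancy, colors):
    """Alpha-composite color samples along one camera ray.

    occupancy: (N,) per-sample occupancy in [0, 1], treated here as opacity
    colors:    (N, 3) per-sample RGB, e.g. predicted by a radiance field
    Returns the composited pixel color and the per-sample blend weights.
    """
    alphas = np.asarray(occupancy, dtype=float)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    rgb = (weights[:, None] * np.asarray(colors, dtype=float)).sum(axis=0)
    return rgb, weights

# Example: the second sample is fully solid, so it occludes the third;
# the composited color is exactly the second sample's green.
rgb, w = volume_render([0.0, 1.0, 0.5], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

Because occupancy is already a probability in [0, 1], no step-size term is needed, unlike density-based NeRF compositing where alpha is derived as 1 - exp(-sigma * delta).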

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design

References (100 articles; first 5 shown).

1. Tex2Shape: Detailed Full Human Body Geometry From a Single Image

2. Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction

3. Hydra Attention: Efficient Attention with Many Heads

4. Yukang Cao, Guanying Chen, Kai Han, Wenqi Yang, and Kwan-Yee K. Wong. 2022. JIFF: Jointly-aligned Implicit Face Function for High Quality Single View Clothed Human Reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog.

5. Kennard Chan, Guosheng Lin, Haiyu Zhao, and Weisi Lin. 2022a. S-PIFu: Integrating Parametric Human Models with PIFu for Single-view Clothed Human Reconstruction. In Adv. Neural Inform. Process. Syst.

Cited by 1 article.

1. Rip-NeRF: Anti-aliasing Radiance Fields with Ripmap-Encoded Platonic Solids; ACM SIGGRAPH 2024 Conference Papers; 2024-07-13
