Affiliation:
1. University of Southern California
2. Columbia University
3. Artec Group
4. Adobe Research
5. UC Berkeley
Abstract
We develop an automatic pipeline that allows ordinary users to capture complete and fully textured 3D models of themselves in minutes, using only a single Kinect sensor, in the uncontrolled lighting environment of their own home. Our method requires neither a turntable nor a second operator, and is robust to the small deformations and changes of pose that inevitably arise during scanning. After the user rotates in place, holding the same pose while a few scans are captured from different views, our system stitches the scans together using multi-view non-rigid registration and produces a watertight final model. To ensure consistent texturing, we recover the underlying albedo from each scanned texture and generate seamless global textures using Poisson blending. Despite the minimal requirements we place on the hardware and users, our method is suitable for full body capture of challenging scenes that cannot be handled well using previous methods, such as those involving loose clothing, complex poses, and props.
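The abstract names gradient-domain (Poisson) blending as the mechanism for producing seamless global textures. The sketch below is a minimal 2D grayscale illustration of that general technique, not the paper's implementation: the function name `poisson_blend`, the 4-neighbour Laplacian, and the assumption that the mask does not touch the image border are all illustrative choices.

```python
# Minimal sketch of gradient-domain (Poisson) blending: inside the mask,
# the result's Laplacian matches the source's, with boundary values taken
# from the target. Assumes the mask does not touch the image border.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_blend(source, target, mask):
    """Blend `source` into `target` where `mask` is True by solving the
    discrete Poisson equation with Dirichlet boundary values from `target`."""
    h, w = target.shape
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))

    n = len(ys)
    A = lil_matrix((n, n))
    b = np.zeros(n)
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        b[k] = 4.0 * source[y, x]
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] -= source[ny, nx]           # source Laplacian (guidance field)
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1.0     # unknown neighbour pixel
            else:
                b[k] += target[ny, nx]       # boundary condition from target

    result = target.astype(float).copy()
    result[ys, xs] = spsolve(A.tocsr(), b)
    return result
```

In the paper's setting the same idea would operate on texture charts stitched from multiple scans rather than on a single grayscale image, so seams between per-scan textures are absorbed into smooth low-frequency corrections.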
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
155 articles.
1. Audio-Driven Lips and Expression on 3D Human Face;Advances in Computer Graphics;2023-12-29
2. Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics;Proceedings of the 31st ACM International Conference on Multimedia;2023-10-26
3. Kairos: Exploring a Virtual Botanical Garden through Point Clouds;Electronics;2023-10-11
4. Model-Driven Compression for Digital Human Using Multi-Granularity Representations;2023 IEEE International Conference on Multimedia and Expo (ICME);2023-07
5. Research on 3D Driveable Digital Human Generation System;2023 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB);2023-06-14