Affiliation:
1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
2. Computer Technology Application Key Laboratory of Yunnan Province, Kunming, China
3. School of Computer Science and Engineering, Sun Yat‐sen University, Guangzhou, China
Abstract
Detailed and accurate feature representation is essential for high‐resolution reconstruction of clothed humans. Herein, we introduce a unified feature representation for clothed human reconstruction that can adapt to changeable postures and various clothing details. The method comprises two parts: the human shape feature representation and the details feature representation. Specifically, we first combine the voxel feature learned from a semantic voxel with the pixel feature from the input image as an implicit representation of human shape. Then, the details feature, mixing the clothed layer feature with the normal feature, guides a multi‐layer perceptron to capture geometric surface details. The key difference from existing methods is that we use clothing semantics to infer clothed layer information and further restore layer details with geometric height. Qualitative and quantitative experimental results demonstrate that the proposed method outperforms existing methods in handling limb swing and clothing details. Our method provides a new solution for clothed human reconstruction with high‐resolution details (style, wrinkles and clothed layers), and has good potential in three‐dimensional virtual try‐on and digital characters.
Funder
National Natural Science Foundation of China
Subject
Computer Graphics and Computer-Aided Design