Implicit 3D Human Reconstruction Guided by Parametric Models and Normal Maps
Published: 2024-05-29
Journal: Journal of Imaging (J. Imaging), Volume 10, Issue 6, Page 133
ISSN: 2313-433X
Language: en
Authors:
Ren Yong 1,2, Zhou Mingquan 1,2, Wang Yifan 1,2, Feng Long 1,2, Zhu Qiuquan 1,2, Li Kang 1,2, Geng Guohua 1,2
Affiliations:
1. School of Information Science and Technology, Northwest University, Xi’an 710127, China
2. National and Local Joint Engineering Research Center for Cultural Heritage Digitization, Xi’an 710127, China
Abstract
Accurate and robust 3D human modeling from a single image remains a significant challenge. Existing methods show promise but often fail to reconstruct the level of detail present in the input image, and they struggle in particular with loose clothing. They typically employ parameterized human models to constrain the reconstruction so that the result does not deviate too far from the model and produce anomalies; however, this constraint also limits the recovery of loose clothing. To address this issue, we propose IHRPN, an end-to-end method for reconstructing clothed humans from a single 2D image. The method includes an image semantic feature extraction module designed to achieve consistency between pixel space and model space and to improve robustness to loose clothing. We extract features from the input image to infer and recover an SMPL-X mesh, then combine it with a normal map to guide an implicit function that reconstructs the complete clothed human. Unlike traditional methods, we use local features for implicit surface regression. Our experiments show that IHRPN performs well on the CAPE and AGORA datasets, and that its reconstruction of loose clothing is noticeably more accurate and robust.
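The abstract's core mechanism — sampling a local, pixel-aligned feature at the projection of each 3D query point and conditioning an implicit occupancy function on it — can be sketched as below. This is a minimal illustration in the spirit of pixel-aligned implicit functions, not the paper's actual network: the orthographic projection, the feature-map shape, and the tiny MLP are all illustrative assumptions.

```python
import numpy as np

def bilinear_sample(feat, u, v):
    """Bilinearly sample a (C, H, W) feature map at continuous pixel coords (u, v)."""
    C, H, W = feat.shape
    u = np.clip(u, 0.0, W - 1 - 1e-6)
    v = np.clip(v, 0.0, H - 1 - 1e-6)
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    top = (1 - du) * feat[:, v0, u0] + du * feat[:, v0, u0 + 1]
    bot = (1 - du) * feat[:, v0 + 1, u0] + du * feat[:, v0 + 1, u0 + 1]
    return (1 - dv) * top + dv * bot

def query_occupancy(point, feat_map, mlp_weights):
    """Illustrative pixel-aligned implicit query: project a 3D point onto the
    image, sample the local feature there, and run a small MLP conditioned on
    the point's depth. Returns an inside/outside probability in (0, 1)."""
    x, y, z = point
    C, H, W = feat_map.shape
    # Assumed orthographic projection from normalized [-1, 1] coords to pixels.
    u = (x * 0.5 + 0.5) * (W - 1)
    v = (y * 0.5 + 0.5) * (H - 1)
    local_feat = bilinear_sample(feat_map, u, v)      # pixel-aligned local feature
    inp = np.concatenate([local_feat, [z]])           # condition on depth along the ray
    W1, b1, W2, b2 = mlp_weights                      # toy 2-layer MLP (assumed shapes)
    h = np.maximum(W1 @ inp + b1, 0.0)                # ReLU hidden layer
    logit = W2 @ h + b2
    return 1.0 / (1.0 + np.exp(-logit))               # sigmoid -> occupancy probability
```

Querying this function on a dense 3D grid and extracting the 0.5 level set (e.g. via marching cubes) is the usual route from such an occupancy field to a clothed-human mesh; in IHRPN the SMPL-X mesh and normal map would additionally condition the query, which this toy sketch omits.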
Funders:
National Natural Science Foundation of China; Key Laboratory Project of the Ministry of Culture and Tourism; Science and Technology Plan Project of Xi’an City; Key Research and Development Program of Shaanxi Province; China Postdoctoral Science Foundation