Deep Deformation Detail Synthesis for Thin Shell Models

Authors:

Chen Lan 1,2; Gao Lin 1,3; Yang Jie 1,3; Xu Shibiao 4; Ye Juntao 2; Zhang Xiaopeng 2; Lai Yu-Kun 5

Affiliations:

1. University of Chinese Academy of Sciences, China

2. Institute of Automation, Chinese Academy of Sciences, China

3. Institute of Computing Technology, Chinese Academy of Sciences, China

4. Beijing University of Posts and Telecommunications, China

5. Cardiff University, United Kingdom

Abstract

In physics-based cloth animation, rich folds and detailed wrinkles are achieved at the cost of expensive computational resources and laborious manual tuning. Data-driven techniques significantly reduce the computation by utilizing a preprocessed database. One class of methods relies on human poses to synthesize fitted garments, but these methods cannot be applied to general cloth animations. Another class adds details to coarse meshes obtained through simulation and is free of such restrictions. However, existing works usually rely on coordinate-based representations, which cannot cope with large-scale deformation and require dense vertex correspondences between coarse and fine meshes. Moreover, as such methods only add details, they require the coarse meshes to be sufficiently close to the fine meshes, which can be either impossible, or require unrealistic constraints to be applied when generating the fine meshes. To address these challenges, we develop a temporally and spatially as-consistent-as-possible deformation representation (named TS-ACAP) and design a DeformTransformer network to learn the mapping from low-resolution meshes to meshes with fine details. The TS-ACAP representation is designed to ensure both spatial and temporal consistency for sequential large-scale deformations in cloth animations. With this representation, our DeformTransformer network first uses two mesh-based encoders with shared convolutional kernels to extract coarse and fine features, respectively. To transduce the coarse features to fine ones, we leverage a spatial and temporal Transformer network consisting of vertex-level and frame-level attention mechanisms to ensure detail enhancement and temporal coherence of the prediction. Experimental results show that our method produces reliable and realistic animations on various datasets at high frame rates, with superior detail synthesis compared to existing methods.
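The pipeline the abstract describes (a shared mesh encoder, vertex-level and frame-level attention, and a decoder back to the deformation representation) can be outlined in code. The following is a minimal PyTorch sketch, not the authors' implementation: the module names, the 9-D per-vertex TS-ACAP feature size, the point-wise encoder standing in for the paper's mesh-based convolutions, and the use of nn.TransformerEncoder for both attention stages are all assumptions made for illustration.

```python
# Hypothetical sketch of the coarse-to-fine mapping from the abstract.
# Dimensions, layer choices, and names are assumptions, not the paper's code.
import torch
import torch.nn as nn


class DeformTransformerSketch(nn.Module):
    """Maps per-vertex TS-ACAP features of a coarse mesh sequence to
    per-vertex features of a detailed mesh (illustrative layout only)."""

    def __init__(self, feat_dim=9, hidden=128, heads=4, layers=2):
        super().__init__()
        # Point-wise encoder standing in for the paper's mesh-based
        # convolutional encoders (which operate on mesh connectivity).
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Vertex-level (spatial) and frame-level (temporal) attention stages.
        spatial_layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        temporal_layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.spatial = nn.TransformerEncoder(spatial_layer, layers)
        self.temporal = nn.TransformerEncoder(temporal_layer, layers)
        # Decode back to the deformation-representation space.
        self.decoder = nn.Linear(hidden, feat_dim)

    def forward(self, coarse):
        # coarse: (frames, vertices, feat_dim)
        h = self.encoder(coarse)              # (frames, vertices, hidden)
        h = self.spatial(h)                   # attention across vertices per frame
        h = self.temporal(h.transpose(0, 1))  # attention across frames per vertex
        return self.decoder(h.transpose(0, 1))


if __name__ == "__main__":
    seq = torch.randn(8, 500, 9)  # 8 frames, 500 vertices, assumed 9-D features
    fine = DeformTransformerSketch()(seq)
    print(fine.shape)             # torch.Size([8, 500, 9])
```

With batch_first tensors, the two transposes simply swap which axis plays the batch role, so the same encoder machinery attends over vertices in one stage and over frames in the other, mirroring the vertex-level and frame-level attention described above.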

Funders

National Natural Science Foundation of China

Natural Science Foundation of Beijing Municipality

H2020 LEIT Information and Communication Technologies

Publisher

Wiley

Subject

Computer Graphics and Computer-Aided Design
