Learning active quasistatic physics-based models from data

Authors:

Sangeetha Grama Srinivasan¹, Qisi Wang¹, Junior Rojas², Gergely Klár³, Ladislav Kavan², Eftychios Sifakis⁴

Affiliation:

1. University of Wisconsin-Madison

2. University of Utah

3. Weta Digital

4. University of Wisconsin-Madison and Weta Digital

Abstract

Humans and animals can control their bodies to generate a wide range of motions via low-dimensional action signals representing high-level goals. As such, human bodies and faces are prime examples of active objects, which can affect their shape via an internal actuation mechanism. This paper explores the following proposition: given a training set of example poses of an active deformable object, can we learn a low-dimensional control space that could reproduce the training set and generalize to new poses? In contrast to popular machine learning methods for dimensionality reduction such as auto-encoders, we model our active objects in a physics-based way. We utilize a differentiable, quasistatic, physics-based simulation layer and combine it with a decoder-type neural network. Our differentiable physics layer naturally fits into deep learning frameworks and allows the decoder network to learn actuations that reach the desired poses after physics-based simulation. In contrast to modeling approaches where users build anatomical models from first principles, medical literature or medical imaging, we do not presume knowledge of the underlying musculature, but learn the structure and control of the actuation mechanism directly from the input data. We present a training paradigm and several scalability-oriented enhancements that allow us to train effectively while accommodating high-resolution volumetric models, with as many as a quarter million simulation elements. The prime demonstration of the efficacy of our example-driven modeling framework targets facial animation, where we train on a collection of input expressions while generalizing to unseen poses, drive detailed facial animation from sparse motion capture input, and facilitate expression sculpting via direct manipulation.
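The abstract's central idea, a decoder network that maps a low-dimensional action signal to actuations, followed by a differentiable quasistatic physics layer that maps actuations to equilibrium poses, can be illustrated with a toy sketch. The example below is not the paper's method: it uses a hypothetical 1D spring chain whose rest lengths play the role of actuations (so the quasistatic equilibrium has a closed form, a cumulative sum), a linear "decoder" in place of a neural network, and hand-derived gradients through the physics layer. All names and parameters are illustrative.

```python
import numpy as np

# Toy "active object": a chain of n unit-stiffness springs whose rest
# lengths a act as the actuation signal (a stand-in for muscle-like
# actuation). With the first node pinned at the origin, the quasistatic
# equilibrium positions x are just the cumulative sum of rest lengths:
# x_i = sum_{j <= i} a_j. This closed form makes the physics layer
# trivially differentiable.

n = 5
z_dim = 2          # dimension of the low-level control (action) space
rng = np.random.default_rng(0)

def quasistatic_solve(a):
    # Differentiable "physics layer": actuation -> equilibrium pose.
    return np.cumsum(a)

# Linear "decoder" from low-dimensional control z to actuation a = W @ z.
W = rng.normal(size=(n, z_dim)) * 0.1

# Training set of example poses, generated from hidden ground-truth
# controls (in the paper these would be captured expressions/poses).
W_true = rng.normal(size=(n, z_dim))
Z = rng.normal(size=(8, z_dim))
X_target = np.array([quasistatic_solve(W_true @ z) for z in Z])

lr = 0.01
for _ in range(20000):
    grad = np.zeros_like(W)
    for z, x_t in zip(Z, X_target):
        x = quasistatic_solve(W @ z)     # forward through physics layer
        r = x - x_t                      # pose residual
        # Backprop through cumsum: the adjoint is a reverse cumulative sum.
        da = np.cumsum(r[::-1])[::-1]
        grad += np.outer(da, z)          # gradient w.r.t. decoder weights
    W -= lr * grad / len(Z)

loss = np.mean([(quasistatic_solve(W @ z) - x_t) ** 2
                for z, x_t in zip(Z, X_target)])
```

Gradient descent on the decoder weights drives the simulated equilibrium poses toward the training poses; in the paper the same end-to-end structure is realized with a neural decoder and a high-resolution volumetric finite-element simulation.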

Funder

NSF

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design

Cited by 11 articles.

1. An Implicit Physical Face Model Driven by Expression and Style;SIGGRAPH Asia 2023 Conference Papers;2023-12-10

2. MuscleVAE: Model-Based Controllers of Muscle-Actuated Characters;SIGGRAPH Asia 2023 Conference Papers;2023-12-10

3. SoftDECA: Computationally Efficient Physics-Based Facial Animations;ACM SIGGRAPH Conference on Motion, Interaction and Games;2023-11-15

4. A Generalized Constitutive Model for Versatile MPM Simulation and Inverse Learning with Differentiable Physics;Proceedings of the ACM on Computer Graphics and Interactive Techniques;2023-08-16

5. Data-Free Learning of Reduced-Order Kinematics;Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Proceedings;2023-07-23
