Local anatomically-constrained facial performance retargeting

Author:

Prashanth Chandran¹, Loïc Ciccone², Markus Gross¹, Derek Bradley²

Affiliation:

1. ETH Zurich, Switzerland and DisneyResearch|Studios, Switzerland

2. DisneyResearch|Studios, Switzerland

Abstract

Generating realistic facial animation for CG characters and digital doubles is one of the hardest tasks in animation. A typical production workflow involves capturing the performance of a real actor using mo-cap technology and transferring the captured motion to the target digital character. This process, known as retargeting, has been used for over a decade and typically relies on either large blendshape rigs, which are expensive to create, or direct deformation transfer algorithms, which operate on individual geometric elements and are prone to artifacts. We present a new method for high-fidelity offline facial performance retargeting that is neither expensive nor artifact-prone. Our two-step method first transfers local expression details to the target, and then predicts the global face surface under anatomical constraints so that the result stays within the feasible shape space of the target character. Our method also offers artists familiar blendshape controls for fine adjustments to the retargeted animation. As such, it is ideally suited for the complex task of human-to-human 3D facial performance retargeting, where the quality bar is extremely high in order to avoid the uncanny valley, while remaining applicable to more common human-to-creature settings. We demonstrate superior performance over traditional deformation transfer algorithms and achieve quality comparable to current blendshape-based production techniques while requiring significantly fewer input shapes at setup time. A detailed user study corroborates the realistic and artifact-free animations generated by our method in comparison to existing techniques.
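To make the two-step structure described above concrete, the following is a minimal, illustrative Python sketch of such a pipeline. It is not the authors' implementation: the patch-based local transfer, the low-rank basis standing in for the anatomically feasible shape space, and all names and shapes are assumptions made purely for illustration.

```python
# Illustrative sketch only: a toy two-step retargeting pipeline in the spirit of
# the abstract (local detail transfer, then a globally constrained surface solve).
import numpy as np

def transfer_local_details(source_deltas, patches):
    # Step 1 (assumed form): copy per-region expression deltas from the tracked
    # source performance onto the corresponding regions of the target mesh.
    target_deltas = np.zeros_like(source_deltas)
    for idx in patches:          # each patch is an array of vertex indices
        target_deltas[idx] = source_deltas[idx]
    return target_deltas

def constrained_global_solve(neutral, deltas, feasible_basis):
    # Step 2 (assumed form): project the locally retargeted deltas onto a
    # low-rank basis that stands in for the target's anatomically feasible
    # shape space, then add the projected deltas to the neutral target mesh.
    flat = deltas.reshape(-1)                                   # (3V,)
    coeffs, *_ = np.linalg.lstsq(feasible_basis, flat, rcond=None)
    feasible = (feasible_basis @ coeffs).reshape(neutral.shape)
    return neutral + feasible

# Toy usage with random stand-in data: V vertices, K basis shapes.
V, K = 100, 8
rng = np.random.default_rng(0)
neutral = rng.normal(size=(V, 3))                 # target neutral mesh
source_deltas = 0.05 * rng.normal(size=(V, 3))    # per-vertex deltas from one source frame
patches = [np.arange(0, 50), np.arange(50, 100)]  # e.g. upper / lower face regions
feasible_basis = rng.normal(size=(3 * V, K))      # stand-in for the feasible shape space

local = transfer_local_details(source_deltas, patches)
retargeted = constrained_global_solve(neutral, local, feasible_basis)
print(retargeted.shape)  # (100, 3): one retargeted frame
```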

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design


Cited by 5 articles.