Mood swings: expressive speech animation

Authors:

Chuang, Erika 1; Bregler, Christoph 2

Affiliation:

1. Stanford University, Stanford, CA

2. New York University, New York, NY

Abstract

Motion capture-based facial animation has recently gained popularity in many applications, such as movies, video games, and human-computer interface design. With sophisticated facial motion captured from a human performer, animated characters are far more lively and convincing. However, editing motion data is difficult, which limits the potential for reusing the data in different tasks. To address this problem, statistical techniques have been applied to learn models of facial motion and to derive new motions from the existing data. Most existing research focuses on audio-to-visual mapping and the reordering of words, or on photo-realistically matching the synthesized face to the original performer. Little attention has been paid to modifying and controlling facial expression, or to mapping expressive motion onto other 3D characters. This article describes a method for creating expressive facial animation by extracting information from the expression axis of a speech performance. First, a statistical model that factors facial expression from visual speech is learned from video. This model can be used to analyze the facial expression of a new performance or to modify the facial expressions of an existing one. With this expression analysis, the facial motion can be retargeted more effectively to another 3D face model. The blendshape retargeting technique is extended to include subsets of morph targets that belong to different facial expression groups; the proportion of each subset included in the final animation is weighted according to the expression information. The resulting animation conveys much more emotion than if only the motion vectors were used for retargeting. Finally, since head motion is very important in adding liveness to facial animation, we introduce an audio-driven synthesis technique for generating new head motion.
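
A minimal sketch of the expression-weighted blendshape combination described in the abstract is given below in Python. It is an illustrative reconstruction, not the authors' implementation; the function name, the grouping of morph targets by expression, and the per-frame weight dictionaries are assumptions made only to show how each expression subset contributes in proportion to the analyzed expression.

import numpy as np

def blend_expression_weighted(neutral, targets_by_expr, speech_weights, expr_weights):
    # neutral         : (V, 3) array of neutral-face vertex positions
    # targets_by_expr : {expr: {name: (V, 3) array}} morph targets grouped by
    #                   expression set (hypothetical data layout)
    # speech_weights  : {expr: {name: float}} per-frame blendshape weights
    #                   recovered from the speech performance
    # expr_weights    : {expr: float} proportion of each expression subset,
    #                   taken from the expression analysis (assumed to sum to 1)
    result = neutral.copy()
    for expr, targets in targets_by_expr.items():
        w_expr = expr_weights.get(expr, 0.0)
        for name, target in targets.items():
            w_shape = speech_weights.get(expr, {}).get(name, 0.0)
            # Each morph target contributes its displacement scaled by both its
            # own weight and the proportion assigned to its expression group.
            result = result + w_expr * w_shape * (target - neutral)
    return result

Driving expr_weights frame by frame from the factored expression signal lets the same visual-speech weights produce, for example, a happier or a more neutral reading of the same sentence on the target character.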

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design

Cited by 71 articles.

1. A Minimally Designed Audio-Animatronic Robot;IEEE Transactions on Robotics;2024

2. Defending Low-Bandwidth Talking Head Videoconferencing Systems From Real-Time Puppeteering Attacks;2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW);2023-06

3. Automatic 3D Facial Landmark-Based Deformation Transfer on Facial Variants for Blendshape Generation;Arabian Journal for Science and Engineering;2022-12-02

4. S2M-Net: Speech Driven Three-party Conversational Motion Synthesis Networks;Proceedings of the 15th ACM SIGGRAPH Conference on Motion, Interaction and Games;2022-11-03

5. Does Smartphone Use Drive our Emotions or vice versa? A Causal Analysis;Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems;2020-04-21
