Affiliations:
1. Stanford University, Stanford, CA
2. New York University, New York, NY
Abstract
Motion capture-based facial animation has recently gained popularity in many applications, such as movies, video games, and human-computer interface design. Driven by the sophisticated facial motion of a human performer, animated characters become far more lively and convincing. However, motion data are difficult to edit, which limits their reuse for different tasks. To address this problem, statistical techniques have been applied to learn models of facial motion and to derive new motions from existing data. Most existing research focuses on audio-to-visual mapping and the reordering of words, or on photo-realistically matching the synthesized face to the original performer. Little attention has been paid to modifying and controlling facial expression, or to mapping expressive motion onto other 3D characters.

This article describes a method for creating expressive facial animation by extracting information from the expression axis of a speech performance. First, a statistical model that factors expression and visual speech is learned from video. This model can be used to analyze the facial expression of a new performance or to modify the facial expressions of an existing one. With this expression analysis, the facial motion can be retargeted more effectively to another 3D face model. The blendshape retargeting technique is extended to include subsets of morph targets that belong to different facial expression groups; the proportion of each subset in the final animation is weighted according to the expression information. The resulting animation conveys much more emotion than if only the motion vectors were used for retargeting. Finally, since head motion is important in adding liveliness to facial animation, we introduce an audio-driven synthesis technique for generating new head motion.
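To make the retargeting step concrete, below is a minimal sketch of how expression-weighted blendshape combination could look. It is not the authors' implementation: the function name, the array layout, and the convention that the expression weights sum to one are illustrative assumptions.

```python
import numpy as np

def retarget_blendshapes(neutral, expression_groups, speech_weights, expression_weights):
    """Illustrative expression-weighted blendshape retargeting.

    neutral            : (V, 3) array, the neutral mesh of the target face
    expression_groups  : dict mapping an expression name to a (K, V, 3) array
                         of K morph targets belonging to that expression group
    speech_weights     : (K,) per-frame blendshape weights recovered from the
                         tracked speech motion
    expression_weights : dict mapping an expression name to a scalar in [0, 1]
                         (e.g., from the expression axis of the learned model);
                         assumed here to sum to one
    """
    out = neutral.copy()
    for name, targets in expression_groups.items():
        e = expression_weights.get(name, 0.0)
        # Each group contributes its blendshape displacements, scaled by how
        # strongly that expression is present in the performance.
        for target, w in zip(targets, speech_weights):
            out += e * w * (target - neutral)
    return out

# Toy usage: blend a mostly-happy frame from two expression groups.
neutral = np.zeros((100, 3))                      # 100-vertex placeholder mesh
groups = {"neutral": np.random.rand(4, 100, 3),
          "happy":   np.random.rand(4, 100, 3)}
frame = retarget_blendshapes(neutral, groups,
                             speech_weights=np.array([0.5, 0.1, 0.0, 0.2]),
                             expression_weights={"neutral": 0.3, "happy": 0.7})
```

Under these assumptions, setting one expression weight to 1 reduces to ordinary blendshape retargeting within that group, while intermediate weights interpolate the same speech motion across expression groups.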
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
71 articles.