Abstract
We present a technique to automatically animate a still portrait, making it possible for the subject in the photo to come to life and express various emotions. We use a driving video (of a different subject) and develop means to transfer the expressiveness of the subject in the driving video to the target portrait. In contrast to previous work that requires an input video of the target face to reenact a facial performance, our technique uses only a single target image. We animate the target image through 2D warps that imitate the facial transformations in the driving video. As warps alone do not carry the full expressiveness of the face, we add fine-scale dynamic details that are commonly associated with facial expressions, such as creases and wrinkles. Furthermore, we hallucinate regions that are hidden in the input target face, most notably the inner mouth. Our technique gives rise to reactive profiles, where people in still images can automatically interact with their viewers. We demonstrate our technique operating on numerous still portraits from the internet.
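The abstract only outlines the warping step, so the following is a minimal sketch of one plausible reading of it: transferring landmark displacements from a driving frame onto the still target and applying a piecewise-affine 2D warp. It assumes dlib's 68-point landmark predictor (the shape_predictor_68_face_landmarks.dat model file), OpenCV, and scikit-image; it is not the authors' implementation and omits the fine-scale detail transfer and hidden-region (inner mouth) hallucination described above.

```python
import numpy as np
import cv2
import dlib
from skimage.transform import PiecewiseAffineTransform, warp

# External asset (assumption): the standard dlib 68-point landmark model.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)


def landmarks(img_bgr):
    """Return the 68 facial landmarks (x, y) of the first detected face."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        raise RuntimeError("no face detected")
    shape = predictor(gray, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)


def transfer_warp(target_bgr, drive_ref_bgr, drive_cur_bgr):
    """Warp the target portrait so its landmarks move the way the driving
    subject's landmarks moved between the reference and current frames."""
    tgt = landmarks(target_bgr)
    ref = landmarks(drive_ref_bgr)
    cur = landmarks(drive_cur_bgr)

    # Landmark displacement observed in the driving video, scaled by the
    # ratio of inter-ocular distances so it adapts to the target's face size.
    scale = np.linalg.norm(tgt[36] - tgt[45]) / np.linalg.norm(ref[36] - ref[45])
    moved = tgt + (cur - ref) * scale

    # Pin the image border so the warp only deforms the face region.
    h, w = target_bgr.shape[:2]
    border = np.array(
        [[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1],
         [w // 2, 0], [w - 1, h // 2], [w // 2, h - 1], [0, h // 2]],
        dtype=np.float64,
    )
    src = np.vstack([moved, border])  # positions in the output frame
    dst = np.vstack([tgt, border])    # positions in the input (target) image

    # skimage's warp() expects an inverse map: output coords -> input coords.
    tform = PiecewiseAffineTransform()
    tform.estimate(src, dst)
    out = warp(target_bgr, tform, output_shape=(h, w), preserve_range=True)
    return out.astype(np.uint8)
```

Applied frame by frame over the driving video, transfer_warp would produce the basic animated sequence; the paper's additional steps (expression creases, inner-mouth synthesis) would then refine each warped frame.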
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
122 articles.