Talking Face Generation via Facial Anatomy

Author:

Liu Shiguang¹, Wang Huixin¹

Affiliation:

1. Tianjin University, Tianjin, China

Abstract

To generate a talking face from a speech audio clip and a face image, the variations in facial appearance must be matched to the speech through subtle movements of different face regions. However, the facial movements produced by existing methods either lack detail and vividness or are tailored to a specific person. In this article, we propose a novel two-stage network that generates talking faces for any target identity via annotations of facial action units (AUs). In the first stage, an audio-to-AU network learns the relationship between the audio and the AUs, producing an AU group consistent with the input audio. In the second stage, this AU group and a face image are fed into a generation network, which outputs the resulting talking face image. Extensive results confirm that, compared to state-of-the-art methods, our approach produces more realistic and vivid talking faces for arbitrary targets, with richer details of facial movements such as cheek and eyebrow motion.
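To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of how an audio-to-AU network and an AU-conditioned generator might be wired together. All module names, layer sizes, input features, and the AU count are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Dimensions, feature choices (MFCCs), and NUM_AUS are assumptions.
import torch
import torch.nn as nn

NUM_AUS = 17  # assumed number of annotated facial action units

class AudioToAU(nn.Module):
    """Stage 1: map a window of audio features to AU activations."""
    def __init__(self, audio_dim=28, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, NUM_AUS), nn.Sigmoid())

    def forward(self, audio_feats):          # (B, T, audio_dim)
        _, h = self.encoder(audio_feats)     # h: (1, B, hidden)
        return self.head(h[-1])              # (B, NUM_AUS), intensities in [0, 1]

class AUConditionedGenerator(nn.Module):
    """Stage 2: synthesize a talking-face frame from an identity image + AU group."""
    def __init__(self, img_channels=3):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(img_channels, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU())
        self.au_proj = nn.Linear(NUM_AUS, 128)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(256, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1), nn.Tanh())

    def forward(self, face_img, aus):        # (B, 3, H, W), (B, NUM_AUS)
        f = self.img_enc(face_img)           # (B, 128, H/4, W/4)
        # Broadcast the AU code over the spatial feature map, then decode.
        a = self.au_proj(aus)[..., None, None].expand(-1, -1, f.size(2), f.size(3))
        return self.dec(torch.cat([f, a], dim=1))  # (B, 3, H, W)

if __name__ == "__main__":
    audio = torch.randn(2, 100, 28)              # batch of 100-frame MFCC windows
    face = torch.randn(2, 3, 128, 128)           # batch of identity images
    aus = AudioToAU()(audio)                     # stage 1: audio -> AU group
    frame = AUConditionedGenerator()(face, aus)  # stage 2: AUs + image -> frame
    print(aus.shape, frame.shape)                # (2, 17) and (2, 3, 128, 128)
```

Under this reading, the AU group acts as a compact, interpretable bottleneck between the audio and image domains, which is what allows the generator to be applied to arbitrary identities.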

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture

References (50 articles; first 5 shown)

1. Lele Chen, Guofeng Cui, Celong Liu, Zhong Li, Ziyi Kou, Yi Xu, and Chenliang Xu. 2020. Talking-head generation with rhythmic head motion. In Proceedings of the European Conference on Computer Vision. 35–51.

2. Lele Chen, Zhiheng Li, Ross K. Maddox, Zhiyao Duan, and Chenliang Xu. 2018. Lip movements generation at a glance. In Proceedings of the European Conference on Computer Vision. 538–553.

3. Lele Chen, Ross K. Maddox, Zhiyao Duan, and Chenliang Xu. 2019. Hierarchical cross-modal talking face generation with dynamic pixel-wise loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 7824–7833.

4. Lele Chen, Sudhanshu Srivastava, Zhiyao Duan, and Chenliang Xu. 2017. Deep cross-modal audio-visual generation. In Proceedings of the Thematic Workshops of ACM Multimedia. 349–357.

5. Weicong Chen, Xu Tan, Yingce Xia, Tao Qin, Yu Wang, and Tie-Yan Liu. 2020. DualLip: A system for joint lip reading and generation. In Proceedings of the ACM International Conference on Multimedia. 1985–1993.

Cited by 7 articles (first 5 shown)

1. DialogueNeRF: towards realistic avatar face-to-face conversation video generation. Visual Intelligence, 2024-08-07.

2. Multimodal Fusion for Talking Face Generation Utilizing Speech-related Facial Action Units. ACM Transactions on Multimedia Computing, Communications, and Applications, 2024-06-17.

3. Jointly Harnessing Prior Structures and Temporal Consistency for Sign Language Video Generation. ACM Transactions on Multimedia Computing, Communications, and Applications, 2024-03-26.

4. Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation. ACM Transactions on Multimedia Computing, Communications, and Applications, 2024-03-08.

5. Audio2AB: Audio-driven collaborative generation of virtual character animation. Virtual Reality & Intelligent Hardware, 2024-02.
