Affiliations:
1. Tokyo Institute of Technology
2. CUHK - Sensetime Joint Lab, The Chinese University of Hong Kong
3. S-Lab, Nanyang Technological University
Abstract
What can we picture solely from a clip of speech? Previous research has shown that a person's facial appearance can be inferred directly from their voice. However, human speech carries not only a biometric identity signal but also identity-irrelevant information such as the spoken content. Our goal is to extract as much information as possible from a clip of speech. In particular, we aim not only to infer a person's face but also to animate it. Our key insight is to synchronize audio and visual representations from two perspectives within a style-based generative framework. Specifically, contrastive learning is leveraged to map both the identity and the speech-content information within the speech into visual representation spaces. Furthermore, the identity space is strengthened with class centroids. Through curriculum learning, the style-based generator learns to automatically balance the information from the two latent spaces. Extensive experiments show that our approach encourages better speech-identity correlation learning while generating vivid faces whose identities are consistent with the given speech samples. Moreover, with the same model, these inferred faces can be driven to talk by the audio.
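The paper does not publish its loss equations on this page, but the contrastive audio-visual alignment it describes is commonly realized as a symmetric InfoNCE objective: paired audio and visual embeddings of the same person (or the same utterance) are pulled together, while mismatched pairs in the batch are pushed apart. The following is a minimal sketch under that assumption; the function name, temperature value, and batch construction are illustrative, not the authors' implementation.

```python
import numpy as np

def info_nce_loss(audio_emb, visual_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning paired audio/visual embeddings.

    audio_emb, visual_emb: (N, D) arrays where row i of each matrix
    forms a matched (positive) pair; all other rows act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = (a @ v.T) / temperature  # (N, N) similarity matrix

    def cross_entropy_diag(l):
        # Cross-entropy with the diagonal (matched pairs) as targets.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average audio-to-visual and visual-to-audio directions.
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

Under such a loss, perfectly aligned pairs drive the objective toward zero, while randomly paired embeddings yield a higher value; the "class centroid" strengthening mentioned in the abstract would add a further pull toward per-identity mean embeddings.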
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
9 articles.
1. Deep Learning for Visual Speech Analysis: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024-09.
2. AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation. IEEE Access, 2024.
3. Generative Networks. Handbook of Face Recognition, 2023-12-30.
4. Generating Talking Facial Videos Driven by Speech Using 3D Model and Motion Model. 2023 3rd International Symposium on Computer Technology and Information Science (ISCTIS), 2023-07-07.
5. Parametric Implicit Face Representation for Audio-Driven Facial Reenactment. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023-06.