Author:
Park Se Jin, Kim Minsu, Hong Joanna, Choi Jeongsoo, Ro Yong Man
Abstract
The challenge of talking face generation from speech lies in aligning two different modalities, audio and video, so that the mouth region corresponds to the input audio. Previous methods either exploit audio-visual representation learning or leverage intermediate structural information such as landmarks and 3D models. However, they struggle to synthesize fine details of the lips varying at the phoneme level, as they do not provide sufficient visual information of the lips at the video synthesis step. To overcome this limitation, our work proposes Audio-Lip Memory, which brings in visual information of the mouth region corresponding to the input audio and enforces fine-grained audio-visual coherence. It stores lip motion features from sequential ground-truth images in the value memory and aligns them with corresponding audio features so that they can be retrieved using audio input at inference time. Using the retrieved lip motion features as visual hints, the model can therefore easily correlate the audio with visual dynamics in the synthesis step. By analyzing the memory, we demonstrate that unique lip features are stored in each memory slot at the phoneme level, capturing subtle lip motion based on memory addressing. In addition, we introduce a visual-visual synchronization loss that enhances lip-syncing performance when used along with the audio-visual synchronization loss in our model. Extensive experiments verify that our method generates high-quality video with mouth shapes that best align with the input audio, outperforming previous state-of-the-art methods.
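The key-value memory retrieval the abstract describes can be illustrated with a minimal PyTorch-style sketch, shown below. The slot count, feature dimensions, module names, and the cosine-similarity addressing are illustrative assumptions, not the paper's actual configuration: an audio query attends over learned key slots and retrieves the corresponding lip-motion (value) features that serve as visual hints for synthesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioLipMemory(nn.Module):
    """Illustrative sketch of audio-addressed key-value memory.
    Keys are aligned with audio features; values store lip-motion features.
    Dimensions and addressing scheme are assumptions for demonstration only."""

    def __init__(self, num_slots=96, audio_dim=128, lip_dim=128):
        super().__init__()
        self.key_memory = nn.Parameter(torch.randn(num_slots, audio_dim))   # addressed by audio
        self.value_memory = nn.Parameter(torch.randn(num_slots, lip_dim))   # stores lip-motion features
        self.query_proj = nn.Linear(audio_dim, audio_dim)

    def forward(self, audio_feat):
        # audio_feat: (batch, audio_dim) audio feature for one time step
        query = self.query_proj(audio_feat)
        # Soft addressing over memory slots via cosine similarity.
        sim = F.normalize(query, dim=-1) @ F.normalize(self.key_memory, dim=-1).T
        addr = F.softmax(sim, dim=-1)                    # (batch, num_slots)
        # Retrieved lip-motion features act as visual hints for the decoder.
        return addr @ self.value_memory                  # (batch, lip_dim)

# Usage: retrieve visual hints from audio input at inference time.
memory = AudioLipMemory()
hints = memory(torch.randn(4, 128))                     # -> shape (4, 128)
```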
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
30 articles.
1. CMFF-Face: Attention-Based Cross-Modal Feature Fusion for High-Quality Audio-Driven Talking Face Generation;Proceedings of the 2024 International Conference on Multimedia Retrieval;2024-05-30
2. Lip and Speech Synchronization using Supervised Contrastive Learning and Cross-Modal Attention;2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG);2024-05-27
3. Multimodal Synchronization Detection: A Transformer-Based Approach Using Deep Metric Learning;2024 3rd International Conference on Artificial Intelligence For Internet of Things (AIIoT);2024-05-03
4. Text-Driven Talking Face Synthesis by Reprogramming Audio-Driven Models;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
5. Exploring Phonetic Context-Aware Lip-Sync for Talking Face Generation;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14