Authors:
Enrico Varano, Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, Maja Pantic, Tobias Reichenbach
Abstract
Understanding speech becomes a demanding task when the environment is noisy. Comprehension of speech in noise can be substantially improved by looking at the speaker’s face, and this audiovisual benefit is even more pronounced in people with hearing impairment. Recent advances in AI have made it possible to synthesize photorealistic talking faces from a speech recording and a still image of a person’s face in an end-to-end manner. However, it has remained unknown whether such facial animations improve speech-in-noise comprehension. Here we consider facial animations produced by a recently introduced generative adversarial network (GAN) and show that humans cannot distinguish between the synthesized and the natural videos. Importantly, we then show that the end-to-end synthesized videos significantly aid humans in understanding speech in noise, although the natural facial motions yield a still greater audiovisual benefit. We further find that an audiovisual speech recognizer (AVSR) benefits from the synthesized facial animations as well. Our results suggest that synthesizing facial motions from speech can aid speech comprehension in difficult listening environments.
Funders
Engineering and Physical Sciences Research Council
Royal British Legion
Cited by
7 articles.