Abstract
This paper reviews progress in understanding the psychology of lipreading and audio-visual speech perception. It considers four questions. What distinguishes better from poorer lipreaders? What are the effects of introducing a delay between the acoustical and optical speech signals? What have attempts to produce computer animations of talking faces contributed to our understanding of the visual cues that distinguish consonants and vowels? Finally, how should the process of audio-visual integration in speech perception be described; that is, how are the sights and sounds of talking faces represented at their conflux?
Subject
General Agricultural and Biological Sciences, General Biochemistry, Genetics and Molecular Biology
References: 73 articles.
Cited by: 261 articles.