Affiliation:
1. University of California, Riverside
Abstract
Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to propose that speech perception works by extracting amodal information, that is, information that takes the same form across modalities. From this perspective, integration is a property of the input information itself. Amodal speech information could explain the reported automaticity, immediacy, and completeness of audiovisual speech integration. However, recent findings suggest that speech integration can be influenced by higher cognitive properties such as lexical status and semantic context. Proponents of amodal accounts will need to explain these results.
Cited by: 141 articles.