Affiliations:
1. University of Victoria
2. University of British Columbia
3. Haskins Laboratories
Abstract
Language learning is a multimodal endeavor; to improve their pronunciation in a new language, learners access not only auditory information about speech sounds and patterns, but also visual information about articulatory movements and processes. With the development of new technologies in computer-assisted pronunciation training (CAPT) comes new possibilities for delivering feedback in both auditory and visual modalities. The present paper surveys the literature on computer-assisted visual articulation feedback, including direct feedback that provides visual models of articulation and indirect feedback that uses visualized acoustic information as a means to inform articulation instruction. Our focus is explicitly on segmental features rather than suprasegmental ones, with visual feedback conceived of as providing visualizations of articulatory configurations, movements, and processes. In addition to discussing types of visual articulation feedback, we also consider the criteria for effective delivery of feedback, and methods of evaluation.
Publisher
John Benjamins Publishing Company
Cited by: 26 articles.