Abstract
Prediction is often used during language comprehension. However, studies of prediction have tended to focus on L1 listeners in quiet conditions. Thus, it is unclear how listeners predict outside the laboratory and in specific communicative settings. Here, we report two eye-tracking studies which used a visual-world paradigm to investigate whether prediction during a consecutive interpreting task differs from prediction during a listening task in L2 listeners, and whether L2 listeners are able to predict in the noisy conditions that might be associated with this communicative setting. In a first study, thirty-six Dutch-English bilinguals either listened to, or listened to and then consecutively interpreted, predictable sentences presented in speech-shaped noise. In a second study, another thirty-six Dutch-English bilinguals carried out the same tasks in clear speech. Our results suggest that L2 listeners predict the meaning of upcoming words in noisy conditions. However, we did not find that predictive eye movements depended on task, nor that L2 listeners predicted upcoming word form. We also did not find a difference in predictive patterns when we compared the two studies. Thus, L2 listeners predict in noisy circumstances, supporting theories which posit that prediction regularly takes place in comprehension, but we did not find evidence that a subsequent production task or noise affects semantic prediction.
Funder
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung
Publisher
Public Library of Science (PLoS)
Cited by
2 articles.