Affiliation:
1. Concordia University and the Center for Research in Human Development, Montréal, Québec, Canada
2. Université de Montréal and the Centre de Recherche de l’Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
Abstract
Purpose: Using a dual-task paradigm, 2 experiments (Experiments 1 and 2) were conducted to assess differences in the amount of listening effort expended to understand speech in noise in audiovisual (AV) and audio-only (A-only) modalities. Experiment 1 used equivalent noise levels in both modalities, and Experiment 2 equated speech recognition performance levels by increasing the noise in the AV versus the A-only modality.
Method: Sixty adults were randomly assigned to Experiment 1 or Experiment 2. Participants performed speech and tactile recognition tasks separately (single task) and concurrently (dual task). The speech tasks were performed in both modalities. Accuracy and reaction time data were collected, as well as ratings of perceived accuracy and effort.
Results: In Experiment 1, speech recognition in the AV modality was rated as less effortful, and accuracy scores were higher, than in the A-only modality. In Experiment 2, reaction times were slower, tactile task performance was poorer, and listening effort increased in the AV versus the A-only modality.
Conclusions: At equivalent noise levels, speech recognition was more accurate and subjectively less effortful in the AV than in the A-only modality. At equivalent accuracy levels, the dual-task performance decrements (for both tasks) suggest that the noisier AV modality was more effortful than the A-only modality.
Publisher
American Speech Language Hearing Association
Subject
Speech and Hearing, Linguistics and Language, Language and Linguistics
Cited by
136 articles.