Abstract
We examined the cognitive resources
involved in processing speech with gesture compared to the same speech without
gesture across four studies using a dual-task paradigm. Participants viewed videos of a woman describing
spatial arrays either with gesture or without. They then attempted to choose the
target array from among four choices. Cognitive load during this comprehension task was indexed by how well participants could remember the location and identity of digits in a secondary task. We found that addressees experience additional visuospatial load when processing gestures compared to speech alone, and that this load arises primarily when addressees attempt to use their memory of the gestured descriptions to choose the target array. However, this cost occurs only when gestures about horizontal spatial relations (i.e., left and right) are produced from the speaker's egocentric perspective.
Publisher
John Benjamins Publishing Company
Subject
Linguistics and Language, Experimental and Cognitive Psychology, Communication, Cultural Studies