Affiliation:
1. Carleton University, Ottawa, Canada
Abstract
Speech-based interaction is often recognized as appropriate for hands-busy, eyes-busy multitask situations. The objective of this study was to explore prompt-guided speech-based interaction and the impact of prompt modality on overall performance in such situations. A dual-task paradigm was employed, with tracking as the primary task and speech-based data entry as the secondary task. There were three tracking conditions: no tracking, basic tracking, and difficult tracking. Two prompt modalities were used for the speech interaction: a dialogue with spoken prompts and a dialogue with visual prompts. Data entry duration was longer with the spoken prompts than with the visual prompts in the no-tracking and basic tracking conditions. When tracking was difficult, however, data entry duration was similar for the two prompt modalities. Tracking performance was also affected by prompt modality, with poorer performance obtained when the prompts were visual. The findings are discussed in terms of multiple resource theory and their possible implications for speech-based interaction in multitask situations. Actual or potential applications of this research include the design of speech-based dialogues for multitask situations such as driving and other hands-busy, eyes-busy activities.
Subject
Behavioral Neuroscience, Applied Psychology, Human Factors and Ergonomics
Cited by
13 articles.