Affiliation:
1. University of Michigan, Ann Arbor, MI
Abstract
Visual data overload and associated performance breakdowns in safety-critical environments have triggered significant interest in developing multimodal displays, i.e., displays that distribute information across multiple sensory channels (mainly vision, hearing, and touch). Yet, more than 95% of studies on multimodal information processing suffer from a methodological shortcoming: the failure to perform ‘crossmodal matching’, where participants equate the perceived intensities of stimuli across sensory channels in advance of an experiment, with the goal of avoiding a confound between modality and salience. Currently, there is no agreed-upon technique for performing this task, and the few studies that included this step employed different methods. The goal of this study is to compare three crossmodal matching techniques to determine whether they result in useful and congruent outcomes. In particular, the degree of intra-individual variability of crossmodal matches is of interest. Eighteen participants performed a series of 54 crossmodal matches for visual, auditory, and tactile stimuli. They used a mouse and visual sliding scale, keyboard arrows, or a rotary knob to adjust intensity. Intra-individual variability of matches differed significantly as a function of matching technique and the order in which stimuli were presented. These findings confirm the need to develop an agreed-upon, reliable crossmodal matching technique for use in future studies.
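The abstract's key dependent measure, intra-individual variability of repeated matches, is commonly quantified as a coefficient of variation (stdev / mean) per participant and technique. The sketch below illustrates that computation; the data values and technique names are hypothetical, not taken from the study.

```python
# Hedged sketch: quantifying intra-individual variability of repeated
# crossmodal matches as a coefficient of variation (CV = stdev / mean).
# Lower CV = more consistent matching. All values are illustrative.
from statistics import mean, stdev

# Repeated matched intensities (arbitrary units) from one hypothetical
# participant, keyed by adjustment technique.
matches = {
    "mouse_slider": [62.0, 58.5, 64.2, 60.1, 59.8, 63.0],
    "keyboard_arrows": [55.0, 70.3, 48.9, 66.1, 52.4, 61.7],
    "rotary_knob": [61.2, 60.8, 62.5, 59.9, 61.0, 60.4],
}

def coefficient_of_variation(values):
    """Intra-individual variability as stdev / mean (dimensionless)."""
    return stdev(values) / mean(values)

cv = {tech: coefficient_of_variation(vals) for tech, vals in matches.items()}
# Rank techniques from most to least consistent for this participant.
for tech, value in sorted(cv.items(), key=lambda kv: kv[1]):
    print(f"{tech}: CV = {value:.3f}")
```

In a full analysis, per-participant CVs would feed into a repeated-measures test across technique and stimulus-presentation order, matching the comparison the study reports.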
Subject
General Medicine, General Chemistry
Cited by
2 articles.