Affiliation:
1. University of Michigan, Ann Arbor, MI
Abstract
Multimodal displays, i.e., displays that distribute information across multiple sensory channels (mainly vision, hearing, and touch), have received considerable attention in recent years. To be effective, their design must be based on a firm understanding of how information is processed both within and across modalities. To date, however, most studies of crossmodal information processing suffer from a methodological shortcoming: they fail to perform crossmodal matching to ensure that modality is not confounded with other stimulus properties, such as salience. One reason for this shortcoming is that there is no agreed-upon crossmodal matching technique and that existing approaches suffer from limitations. The goal of the present study was to develop and validate a more reliable crossmodal matching method that employs repeated matching. To this end, six participants used this technique to match a series of 54 modality pairings involving vision, audition, and touch. The intra-individual variability of participants' matches was significantly lower than that observed with an earlier technique involving bidirectional matching and visual feedback. These findings confirm the need for improved crossmodal matching procedures and for employing them before conducting experiments on multisensory information processing.
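As a minimal illustration of the variability measure the abstract refers to, one common way to quantify the intra-individual variability of repeated matches is the coefficient of variation (SD/mean) of a participant's repeated matched intensities for a given modality pairing. The sketch below assumes this measure; the paper's actual analysis may differ, and all values and names are invented for the example.

```python
import statistics

def coefficient_of_variation(matches):
    """Coefficient of variation (SD / mean) of one participant's
    repeated matches for a single modality pairing, in percent."""
    mean = statistics.mean(matches)
    sd = statistics.stdev(matches)  # requires >= 2 repeated matches
    return 100.0 * sd / mean

# Hypothetical example: five repeated auditory intensity settings
# (arbitrary dB-like units) matched to a fixed visual standard
repeated_matches = [62.1, 60.8, 61.5, 63.0, 61.9]
print(f"Intra-individual CV: {coefficient_of_variation(repeated_matches):.1f}%")
```

Lower CV values across participants and pairings would indicate a more reliable matching procedure, which is the comparison the study draws against the earlier bidirectional-matching technique.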
Cited by
7 articles.