Abstract
Multisensory integration is a process of redundancy exploitation, in which our brains combine information across the senses to obtain more reliable perceptual estimates. While the high-level computational principles of multisensory integration are well understood, little is known as to how the low-level properties of the signals ultimately determine the integrated percept. This study demonstrates that a bottom-up approach, based on luminance- and sound-level analyses, is sufficient to jointly explain the spatiotemporal determinants of audiovisual integration and crossmodal attention. When implemented using an architecture analogous to the motion detectors found in the insect brain, such low-level analyses can broadly reproduce human behaviour, as tested in a large-scale simulation of 42 classic experiments on the spatial, temporal and attentional aspects of multisensory integration.
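To make the proposed architecture concrete, the sketch below shows a minimal Reichardt-style correlation detector operating on a visual luminance envelope and an auditory sound-level envelope. It is an illustrative assumption of how such a detector could be set up, not the model fitted in the paper: the filter time constants, the combination rule, and all function names (`lowpass`, `correlation_detector`) are hypothetical choices made for this example.

```python
import numpy as np

def lowpass(signal, tau, dt=0.001):
    """First-order low-pass filter (exponential smoothing) with time constant tau (s)."""
    alpha = dt / (tau + dt)
    out = np.zeros_like(signal, dtype=float)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

def correlation_detector(luminance, sound_level, tau_fast=0.05, tau_slow=0.15, dt=0.001):
    """
    Illustrative Reichardt-style audiovisual correlation detector.

    Each modality is low-pass filtered with a fast and a slow time constant;
    two mirror-symmetric subunits multiply the slow output of one modality
    with the fast output of the other. The summed subunit output indexes
    audiovisual correlation; their difference indexes temporal order (lag).
    """
    v_fast, v_slow = lowpass(luminance, tau_fast, dt), lowpass(luminance, tau_slow, dt)
    a_fast, a_slow = lowpass(sound_level, tau_fast, dt), lowpass(sound_level, tau_slow, dt)
    subunit_va = v_slow * a_fast          # vision-leads-audition subunit
    subunit_av = a_slow * v_fast          # audition-leads-vision subunit
    correlation = subunit_va + subunit_av  # evidence that the signals share a common cause
    lag = subunit_va - subunit_av          # sign indicates which modality led
    return correlation.sum(), lag.sum()

# Example: a brief flash followed by a beep 100 ms later
t = np.arange(0.0, 1.0, 0.001)
flash = ((t > 0.40) & (t < 0.45)).astype(float)   # luminance increment
beep = ((t > 0.50) & (t < 0.55)).astype(float)    # sound-level increment
corr, lag = correlation_detector(flash, beep)
print(f"correlation: {corr:.3f}, lag index: {lag:.3f}  (positive lag -> vision led)")
```

Under these assumptions, the detector's correlation output is largest when the luminance and sound-level envelopes covary in time, while the lag output signals audiovisual asynchrony, which is the kind of low-level signal analysis the abstract refers to.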
Publisher
Cold Spring Harbor Laboratory