Abstract
Combining information from multiple senses is essential to object recognition. Yet how the mind combines sensory input into coherent multimodal representations – the multimodal binding problem – remains poorly understood. Here, we applied multi-echo fMRI across a four-day paradigm in which participants learned three-dimensional multimodal object representations created from well-characterized visual shape and sound features. Our novel paradigm decoupled the learned multimodal object representations from their baseline unimodal shape and sound features, thus tracking the emergence of multimodal concepts as they were learned by healthy adults. Critically, the representation for the whole object was different from the combined representation of its individual parts, with evidence of an integrative object code in anterior temporal lobe structures. Intriguingly, the perirhinal cortex – an anterior temporal lobe structure – was by default biased towards visual shape, but this initial shape bias was attenuated with learning. Pattern similarity analyses suggest that, after learning, the perirhinal cortex orthogonalized combinations of visual shape and sound features, transforming overlapping feature input into distinct multimodal object representations. These results provide evidence of integrative coding in the anterior temporal lobes that is distinct from the distributed sensory features, advancing the age-old question of how the mind constructs multimodal objects from their component features.
Publisher
Cold Spring Harbor Laboratory