Abstract
The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word “cat” both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be purely semantic, or they could also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants, 15 females, 10 males). We then compared these representations to models representing semantic, sensory, and orthographic features. Results show that modality-independent representations correlate with both semantic and visual representations. There was no evidence that these results were due to picture-specific visual features or orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.
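The general logic of the approach (train a neural-network classifier in one modality, test it in the other, then compare its latent representations to candidate feature models) can be illustrated with a minimal sketch. This is not the authors' pipeline: the data, network size, and the semantic feature matrix below are synthetic placeholders chosen only to make the example runnable.

```python
# Minimal sketch (assumed, not the authors' code) of cross-condition decoding
# with a neural-network classifier, followed by an RSA-style comparison of the
# learned latent space to a feature model. All data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_concepts, n_trials, n_sensors = 8, 30, 50       # assumed dimensions

# Synthetic MEG-like trials: picture and word trials share a concept signal
concept_sig = rng.normal(size=(n_concepts, n_sensors))

def make_trials(modality_offset):
    X = np.vstack([concept_sig[c] + modality_offset
                   + rng.normal(scale=1.0, size=(n_trials, n_sensors))
                   for c in range(n_concepts)])
    y = np.repeat(np.arange(n_concepts), n_trials)
    return X, y

X_pic, y_pic = make_trials(rng.normal(size=n_sensors))    # picture condition
X_word, y_word = make_trials(rng.normal(size=n_sensors))  # word condition

# Cross-condition decoding: train on picture trials, test on word trials
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_pic, y_pic)
print("picture -> word decoding accuracy:", clf.score(X_word, y_word))

# Latent (hidden-layer) representation of word trials, averaged per concept
hidden = np.maximum(0, X_word @ clf.coefs_[0] + clf.intercepts_[0])  # ReLU layer
latent = np.vstack([hidden[y_word == c].mean(axis=0) for c in range(n_concepts)])

# RSA-style comparison: correlate latent dissimilarities with a feature model
semantic_model = rng.normal(size=(n_concepts, 20))  # placeholder feature vectors
rho, p = spearmanr(pdist(latent, "correlation"), pdist(semantic_model, "correlation"))
print(f"latent vs. semantic model: rho={rho:.2f}, p={p:.3f}")
```

In this toy version, above-chance picture-to-word accuracy indicates that the classifier has picked up modality-independent structure, and the final correlation is the kind of test used to ask whether that structure resembles a semantic, sensory, or orthographic feature model.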
Funder
NYUAD Research Institute
The William Orr Dingwall dissertation fellowship in the foundation of language