Abstract
Human brains accurately perceive and process the real-world size of objects despite vast differences in viewing distance and perspective, a remarkable feat of cognitive processing. While previous studies have examined this phenomenon, our study uses an innovative approach to disentangle neural representations of object real-world size from visual size and perceived real-world depth in a way that was not previously possible. Our multi-modal approach combines computational modeling with the THINGS EEG2 dataset, which offers both high-time-resolution human brain recordings and ecologically valid naturalistic stimuli. Leveraging this state-of-the-art dataset, our EEG representational similarity analysis revealed a pure representation of object real-world size in human brains. We report a representational timeline of visual object processing: pixel-wise differences appeared first, then real-world depth and visual size, and finally real-world size. Furthermore, representational comparisons with different artificial neural networks reveal real-world size as a stable, higher-level dimension in object space that incorporates both visual and semantic information.
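The abstract names representational similarity analysis (RSA) as the core method. Below is a minimal, illustrative sketch of the basic RSA logic, not the authors' pipeline: all array names, shapes, and the toy random data are assumptions introduced here for illustration. The idea is to build a neural representational dissimilarity matrix (RDM) from brain patterns, build a model RDM from a candidate dimension such as real-world size, and rank-correlate the two.

```python
# Minimal RSA sketch (assumed, not the authors' code): correlate a neural
# RDM from EEG-like patterns with a model RDM from real-world size ratings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_features = 20, 64                    # toy dimensions (assumed)
eeg = rng.standard_normal((n_objects, n_features))  # stand-in for trial-averaged
                                                    # EEG patterns at one time point
size_ratings = rng.uniform(0.0, 1.0, n_objects)     # hypothetical size ratings

# Neural RDM: pairwise correlation distance between object patterns
# (condensed upper-triangle vector).
neural_rdm = pdist(eeg, metric="correlation")

# Model RDM: pairwise absolute difference in real-world size ratings.
model_rdm = pdist(size_ratings[:, None], metric="euclidean")

# RSA score: Spearman rank correlation between the two RDM vectors.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA (Spearman rho) = {rho:.3f}, p = {p:.3f}")
```

In the study's setting, disentangling real-world size from visual size and depth would require controlling for the other model RDMs (e.g., via partial correlation); this sketch shows only the basic RDM comparison step.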
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.
1. Design and Optimization of Novel Laser Reduced Graphene Oxide Sensor for Neural Signal Investigation. 2024 6th International Youth Conference on Radio Electronics, Electrical and Power Engineering (REEPE), 2024-02-29.