Abstract
Understanding 3D representations of spatial information, particularly in naturalistic scenes, remains a significant challenge in vision science. This is largely because of conceptual difficulties in disentangling higher-level 3D information from co-occurring features and cues (e.g., the 3D shape of a scene image is necessarily defined by spatial frequency and orientation information). Recent work has employed newer models and analysis techniques that attempt to mitigate these in-principle difficulties. For example, one such study reported that 3D-surface features were uniquely present in areas OPA, PPA, and MPA/RSC (areas typically referred to as ‘scene-selective’), above and beyond a Gabor-wavelet baseline (“2D”) model. Here, we tested whether these findings generalized to a new stimulus set that, on average, dissociated static Gabor-wavelet baseline (“2D”) features from 3D scene-surface features. Surprisingly, we found evidence that a Gabor-wavelet baseline model better fit voxel responses in areas OPA, PPA, and MPA/RSC compared to a model with 3D-surface information. This raises the question of whether previous findings of greater 3D information could have been due to a baseline condition that didn’t model some potentially critical low-level features (e.g., motion). Our findings also emphasize that much of the information in “scene-selective” regions, potentially even information about 3D surfaces, may be in the form of spatial frequency and orientation information often considered 2D or low-level, and they highlight continued fundamental conceptual challenges in disentangling the contributions of low-level vs. high-level features in visual cortex.
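To make the model comparison described above concrete, the following is a minimal sketch (not the authors' pipeline) of how two feature spaces, a Gabor-wavelet "2D" baseline and a 3D-surface feature space, might be compared in how well they predict voxel responses, using cross-validated ridge regression. The feature matrices, voxel responses, and dimensionalities here are synthetic stand-ins, assumed only for illustration.

```python
# Sketch: cross-validated comparison of two encoding-model feature spaces.
# All data below are synthetic placeholders, not the study's stimuli or fMRI data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_stimuli, n_voxels = 200, 50
gabor_features = rng.standard_normal((n_stimuli, 128))   # stand-in "2D" Gabor-wavelet features
surface_features = rng.standard_normal((n_stimuli, 64))  # stand-in "3D" scene-surface features
voxel_responses = rng.standard_normal((n_stimuli, n_voxels))

def cv_r2(features, responses, n_splits=5):
    """Mean cross-validated R^2 per voxel for one feature space."""
    scores = np.zeros(responses.shape[1])
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(features):
        model = RidgeCV(alphas=np.logspace(-2, 4, 13))
        model.fit(features[train], responses[train])
        pred = model.predict(features[test])
        ss_res = ((responses[test] - pred) ** 2).sum(axis=0)
        ss_tot = ((responses[test] - responses[test].mean(axis=0)) ** 2).sum(axis=0)
        scores += 1 - ss_res / ss_tot
    return scores / n_splits

r2_2d = cv_r2(gabor_features, voxel_responses)
r2_3d = cv_r2(surface_features, voxel_responses)
# A voxel favors the 3D-surface model only if it predicts held-out responses
# better than the 2D baseline (here, simply r2_3d > r2_2d per voxel).
print(f"voxels better fit by 3D features: {(r2_3d > r2_2d).sum()} / {n_voxels}")
```

In practice, the critical test described in the abstract is whether 3D-surface features explain variance above and beyond the Gabor-wavelet baseline, which would typically involve nested or variance-partitioning comparisons rather than the simple per-voxel contrast shown here.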
Publisher: Cold Spring Harbor Laboratory