Authors:
Wundari Bayu Gautama, Fujita Ichiro, Ban Hiroshi
Abstract
Seeing three-dimensional objects requires multiple stages of representational transformation, beginning in the primary visual cortex (V1). Here, neurons compute binocular disparity from the left and right retinal inputs through a mechanism similar to local cross-correlation. However, correlation-based representation is ambiguous because it is sensitive to disparities in both similar and dissimilar features between the eyes. Along the visual pathways, the representation transforms to a cross-matching basis, eliminating responses to falsely matched disparities. We investigated this transformation across the human visual areas using functional magnetic resonance imaging (fMRI) and computational modeling. By fitting a linear weighted sum of cross-correlation and cross-matching model representations to the brain’s representational structure of disparity, we found that areas V1-V3 exhibited stronger cross-correlation components, V3A/B, V7, and hV4 were slightly inclined towards cross-matching, and hMT+ was strongly engaged in cross-matching. To explore the underlying mechanism, we identified a deep neural network optimized for estimating disparity in natural scenes that matched human depth judgment in the random-dot stereograms used in the fMRI experiments. Despite not being constrained to match fMRI data, the network units’ responses progressed from cross-correlation to cross-matching across layers. Activation maximization analysis on the network suggests that the transformation incorporates three phases, each emphasizing different aspects of binocular similarity and dissimilarity for depth extraction. Our findings suggest a systematic distribution of both components throughout the visual cortex, with cross-matching playing a greater role in areas anterior to V3, and that the transformation exploits responses to false matches rather than discarding them.
Significance Statement
Humans perceive the visual world in 3D by exploiting binocular disparity.
To achieve this, the brain transforms neural representation from the cross-correlation of signals from both eyes into a cross-matching representation, filtering out responses to disparities from falsely matched features. The location and mechanism of this transformation in the human brain are unclear. Using fMRI, we demonstrated that both representations were systematically distributed across the visual cortex, with cross-matching exerting a stronger effect in cortical areas anterior to V3. A neural network optimized for disparity estimation in natural scenes replicated human depth judgment in various stereograms and exhibited a similar transformation. The transformation from correlation to matching representation may be driven by performance optimization for depth extraction in natural environments.
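The core analysis described in the abstract — fitting a linear weighted sum of cross-correlation and cross-matching model representations to a brain area's representational structure — can be sketched as an ordinary least-squares fit over representational dissimilarity matrix (RDM) entries. The sketch below uses synthetic data; the variable names, the number of conditions, and the mixing proportions are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical model RDMs over disparity conditions, flattened to vectors of
# pairwise dissimilarities. In the actual study these would be derived from
# cross-correlation and cross-matching model responses to the stereograms.
rng = np.random.default_rng(0)
n_pairs = 45  # e.g., 10 disparity conditions -> 10*9/2 condition pairs

rdm_correlation = rng.random(n_pairs)  # cross-correlation model RDM (assumed)
rdm_matching = rng.random(n_pairs)     # cross-matching model RDM (assumed)

# A toy "brain" RDM: a known mixture plus noise, for illustration only.
rdm_brain = (0.3 * rdm_correlation + 0.7 * rdm_matching
             + 0.05 * rng.standard_normal(n_pairs))

# Fit the linear weighted sum:
#   rdm_brain ~ w_corr * rdm_correlation + w_match * rdm_matching
X = np.column_stack([rdm_correlation, rdm_matching])
weights, *_ = np.linalg.lstsq(X, rdm_brain, rcond=None)
w_corr, w_match = weights

# The relative weight magnitudes indicate which representation dominates
# in a given visual area (here, cross-matching by construction).
print(f"cross-correlation weight: {w_corr:.2f}, "
      f"cross-matching weight: {w_match:.2f}")
```

In the study, comparing the fitted weights across areas is what distinguishes correlation-dominated regions (V1-V3) from matching-dominated ones (hMT+).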
Publisher
Cold Spring Harbor Laboratory