Abstract
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core representational principles of computational models in neuroscience. Here we examined the geometry of DNN models of visual cortex by quantifying the latent dimensionality of their natural image representations. A popular view holds that optimal DNNs compress their representations onto low-dimensional subspaces to achieve invariance and robustness, which suggests that better models of visual cortex should have lower dimensional geometries. Surprisingly, we found a strong trend in the opposite direction—neural networks with high-dimensional image subspaces tended to have better generalization performance when predicting cortical responses to held-out stimuli in both monkey electrophysiology and human fMRI data. Moreover, we found that high dimensionality was associated with better performance when learning new categories of stimuli, suggesting that higher dimensional representations are better suited to generalize beyond their training domains. These findings suggest a general principle whereby high-dimensional geometry confers computational benefits to DNN models of visual cortex.
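The central quantity in this analysis is the latent dimensionality of a network's image representations. As a minimal sketch of how such a measure can be computed, the snippet below estimates effective dimensionality as the participation ratio of the PCA eigenspectrum of unit activations, a common estimator for this purpose; the estimator choice and the `effective_dimensionality` helper are illustrative assumptions, not the paper's verbatim pipeline.

```python
# Minimal sketch: estimating the latent dimensionality of DNN image
# representations via the participation ratio of the PCA eigenspectrum.
# The estimator is assumed for illustration, not taken from the paper.
import numpy as np

def effective_dimensionality(activations: np.ndarray) -> float:
    """Participation ratio (sum(eig))^2 / sum(eig^2) of the covariance
    eigenspectrum.

    activations: (n_stimuli, n_units) responses of one DNN layer to a
    set of natural images. Returns a value between 1 (all variance on
    a single axis) and n_units (variance spread evenly across axes).
    """
    cov = np.cov(activations, rowvar=False)                 # (n_units, n_units)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)   # guard numerical noise
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Toy check: a rank-5 representation scores near its rank, while an
# isotropic full-rank representation scores near its unit count.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 100))
full_rank = rng.standard_normal((500, 100))
print(effective_dimensionality(low_rank))   # a few at most (bounded by rank 5)
print(effective_dimensionality(full_rank))  # ~80-100 (sampling noise lowers it)
```

Unlike a hard rank threshold, the participation ratio weights each principal axis by its variance, so it degrades gracefully in the presence of many near-zero components. Under a measure of this kind, the abstract's finding corresponds to a positive relationship between a model's effective dimensionality and its cross-validated accuracy at predicting cortical responses to held-out stimuli.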
Publisher
Public Library of Science (PLoS)