Abstract
After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are poor models of the development of the visual system because they posit millions of sequential, precisely coordinated synaptic updates, each based on a labeled image. While ongoing research is pursuing the use of unsupervised proxies for labels, we here explore a complementary strategy of reducing the required number of supervised synaptic updates to produce an adult-like ventral visual stream (as judged by the match to V1, V2, V4, IT, and behavior). Such models might require less precise machinery and energy expenditure to coordinate these updates and would thus move us closer to viable neuroscientific hypotheses about how the visual system wires itself up. Relative to the current leading model of the adult ventral stream, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies: First, we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to the adult ventral stream. Second, by improving the random distribution of synaptic connectivity, we find that 54% of the brain match can already be achieved “at birth” (i.e., with no training at all). Third, we find that, by training only ~5% of model synapses, we can still achieve nearly 80% of the match to the ventral stream. When these three strategies are applied in combination, the new models achieve ~80% of a fully trained model’s match to the brain while using two orders of magnitude fewer supervised synaptic updates. These results reflect first steps in modeling not just primate adult visual processing during inference, but also how the ventral visual stream might be “wired up” by evolution (a model’s “birth” state) and by developmental learning (a model’s updates based on visual experience).
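To make the three strategies concrete, the sketch below shows one way they could look in PyTorch. It is a minimal illustration under assumptions of my own, not the authors’ method: a ResNet-18 stands in for the paper’s architecture, a scaled Gaussian stands in for the paper’s improved “at birth” weight distribution, freezing all but the final classifier approximates “training only a few percent of synapses,” and the ImageNet path, subset size, and epoch count are placeholders for the abstract’s ~2% figure.

```python
# A minimal sketch of the three strategies, under assumptions noted above;
# the paper's actual architecture, initialization search, and trained-synapse
# selection are not reproduced here.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

model = models.resnet18(weights=None)  # untrained "birth" state (stand-in model)

# Strategy 2 (illustrative): choose a better "at birth" weight distribution.
# Conv weights are simply re-drawn from a scaled Gaussian here; the paper
# instead improves the untrained brain match via its own initialization scheme.
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)

# Strategy 3 (illustrative): update only a small fraction of synapses.
# Freezing everything except the final classifier leaves roughly 4% of
# ResNet-18's parameters trainable; the paper selects its subset differently.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.1%}")

# Strategy 1 (illustrative): train on ~2% of the images for a few epochs
# rather than the full dataset for many epochs. The path is a placeholder.
preprocess = transforms.Compose(
    [transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()]
)
full = datasets.ImageFolder("/path/to/imagenet/train", transform=preprocess)
indices = torch.randperm(len(full))[: len(full) // 50].tolist()  # ~2% of images
loader = DataLoader(Subset(full, indices), batch_size=256, shuffle=True)

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01, momentum=0.9
)
criterion = nn.CrossEntropyLoss()
for epoch in range(2):  # drastically fewer epochs than standard training
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```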
Publisher
Cold Spring Harbor Laboratory
Cited by
7 articles.