Abstract
Bowers et al. argue that deep neural networks (DNNs) are poor models of biological vision because they often rival human accuracy by relying on strategies that differ markedly from those of humans. We show that this problem is worsening as DNNs grow larger and more accurate, and we prescribe methods for building DNNs that can reliably model biological vision.
Funder
National Science Foundation
Publisher
Cambridge University Press (CUP)
Subject
Behavioral Neuroscience, Physiology, Neuropsychology and Physiological Psychology