Abstract
Animal sensory systems are more sensitive to common features in the environment than uncommon features. For example, small deviations from the more frequently encountered horizontal orientations can be more easily detected than small deviations from the less frequent diagonal ones. Here we find that artificial neural networks trained to recognize objects also have patterns of sensitivity that match the statistics of features in images. To interpret these findings, we show mathematically that learning with gradient descent in deep neural networks preferentially creates representations that are more sensitive to common features, a hallmark of efficient coding. This result suggests that efficient coding naturally emerges from gradient-like learning on natural stimuli.
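To make the central claim concrete, below is a minimal sketch (not the authors' code; the data distribution, network size, learning rate, and step count are illustrative assumptions). It trains a small two-layer linear network with plain gradient descent on inputs whose first coordinate (the "common" feature) has much larger variance than the second (the "rare" feature), then compares the network's sensitivity to small perturbations along each direction.

```
# Minimal sketch: gradient descent learns the high-variance ("common") feature
# before the low-variance ("rare") one, so a partially trained network is
# more sensitive to the common feature. All hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 2
# Feature 0 is "common" (high variance), feature 1 is "rare" (low variance).
X = rng.normal(size=(n, d)) * np.array([3.0, 0.3])
w_true = np.array([1.0, 1.0])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Two-layer linear network f(x) = W2 @ (W1 @ x), trained by gradient descent.
h = 16
W1 = 0.1 * rng.normal(size=(h, d))
W2 = 0.1 * rng.normal(size=(1, h))
lr = 1e-3
for _ in range(1500):
    pred = (X @ W1.T) @ W2.T              # shape (n, 1)
    err = pred[:, 0] - y
    grad_out = err[:, None] / n           # dL/dpred for mean squared error
    gW2 = grad_out.T @ (X @ W1.T)         # (1, h)
    gW1 = (grad_out @ W2).T @ X           # (h, d)
    W2 -= lr * gW2
    W1 -= lr * gW1

# Sensitivity = |df/dx| along each input direction; for this linear network
# it is simply the end-to-end weight vector W2 @ W1.
jacobian = (W2 @ W1).ravel()
print("sensitivity to common feature:", abs(jacobian[0]))
print("sensitivity to rare feature:  ", abs(jacobian[1]))
```

In this toy setting the high-variance direction is fit much earlier in training, so the partially trained network's input-output Jacobian is dominated by the common feature; this is the kind of sensitivity pattern, matched to input statistics, that the abstract describes.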
Publisher
Cold Spring Harbor Laboratory