Abstract
Distinct scientific theories can make similar predictions. To adjudicate between theories, we must design experiments for which the theories make distinct predictions. Here we consider the problem of comparing deep neural networks as models of human visual recognition. To efficiently compare models’ ability to predict human responses, we synthesize controversial stimuli: images for which different models produce distinct responses. We applied this approach to two visual recognition tasks, handwritten digits (MNIST) and objects in small natural images (CIFAR-10). For each task, we synthesized controversial stimuli to maximize the disagreement among models which employed different architectures and recognition algorithms. Human subjects viewed hundreds of these stimuli, as well as natural examples, and judged the probability of presence of each digit/object category in each image. We quantified how accurately each model predicted the human judgments. The best-performing models were a generative analysis-by-synthesis model (based on variational autoencoders) for MNIST and a hybrid discriminative–generative joint energy model for CIFAR-10. These deep neural networks (DNNs), which model the distribution of images, performed better than purely discriminative DNNs, which learn only to map images to labels. None of the candidate models fully explained the human responses. Controversial stimuli generalize the concept of adversarial examples, obviating the need to assume a ground-truth model. Unlike natural images, controversial stimuli are not constrained to the stimulus distribution models are trained on, thus providing severe out-of-distribution tests that reveal the models’ inductive biases. Controversial stimuli therefore provide powerful probes of discrepancies between models and human perception.
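The core idea of the abstract — optimizing an image so that two candidate models give maximally different responses — can be illustrated with a minimal sketch. The linear "models," the target classes, and the random-perturbation hill climbing below are all illustrative stand-ins (the paper's actual experiments used deep networks and gradient-based optimization), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Two stand-in "models": linear classifiers over a 64-dim "image",
# with random weights (placeholders for the DNNs compared in the paper).
W_a = rng.normal(size=(10, 64))
W_b = rng.normal(size=(10, 64))

def disagreement(x, target_a=3, target_b=7):
    """High when model A assigns class 3 AND model B assigns class 7
    to the same stimulus x -- i.e., when x is controversial."""
    p_a = softmax(W_a @ x)
    p_b = softmax(W_b @ x)
    return min(p_a[target_a], p_b[target_b])

# Crude random-perturbation hill climbing (a stand-in for the
# gradient-based stimulus synthesis used in practice).
x = rng.normal(size=64) * 0.01
score = disagreement(x)
for _ in range(2000):
    candidate = x + rng.normal(size=64) * 0.05
    s = disagreement(candidate)
    if s > score:
        x, score = candidate, s

print(f"disagreement score: {score:.3f}")  # grows well above the ~0.1 chance level
```

The synthesized `x` is an input that model A confidently calls one class while model B calls another; showing such stimuli to humans reveals which model's response better matches perception.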
Funder
Nvidia
National Science Foundation
Publisher
Proceedings of the National Academy of Sciences
Cited by 52 articles.