Abstract
Revealing how the mind represents information is a longstanding goal of cognitive science. However, there is currently no framework for reconstructing the broad range of mental representations that humans possess. Here, we ask participants to indicate what they perceive in images made of random visual features in a deep neural network. We then infer associations between the semantic features of their responses and the visual features of the images. This allows us to reconstruct the mental representations of multiple visual concepts, both those supplied by participants and other concepts extrapolated from the same semantic space. We validate these reconstructions in separate participants and further generalize our approach to predict behavior for new stimuli and in a new task. Finally, we reconstruct the mental representations of individual observers and of a neural network. This framework enables a large-scale investigation of conceptual representations.
Funder
Fonds de Recherche du Québec - Nature et Technologies
National Science Foundation
Publisher
Springer Science and Business Media LLC