Abstract
Human vision is attuned to the subtle differences between individual faces. Yet we lack a quantitative way of predicting how similar two face images look, or whether they appear to show the same person. Principal-components-based 3D morphable models are widely used to generate stimuli in face perception research. These models capture the distribution of real human faces in terms of dimensions of physical shape and texture. How well does a “face space” defined to model the distribution of faces as an isotropic Gaussian explain human face perception? We designed a behavioural task to collect dissimilarity and same/different identity judgements for 232 pairs of realistic faces. The stimuli densely sampled geometric relationships in a face space derived from principal components of 3D shape and texture (Basel Face Model, BFM). We then compared a wide range of models in their ability to predict the data, including the BFM from which faces were generated, a 2D morphable model derived from face photographs, and image-computable models of visual perception. Euclidean distance in the BFM explained both similarity and identity judgements surprisingly well. In a comparison against 14 alternative models, we found that BFM distance was competitive with representational distances in state-of-the-art image-computable deep neural networks (DNNs), including a novel DNN trained on BFM identities. Models describing the distribution of facial features across individuals are not only useful tools for stimulus generation. They also capture important information about how faces are perceived, suggesting that human face representations are tuned to the statistical distribution of faces.
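The abstract's central predictor is Euclidean distance between faces in a PCA-based face space. As a minimal illustrative sketch (not the authors' code): if each face is represented by standardised shape and texture principal-component coefficients drawn from an isotropic Gaussian, the predicted dissimilarity of a face pair is simply the norm of the coefficient difference. The dimensionalities and the sampling below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Assumed BFM-like dimensionality: 199 shape + 199 texture PCA coefficients.
N_SHAPE, N_TEX = 199, 199

rng = np.random.default_rng(0)

def sample_face(rng):
    """Sample a synthetic face as standardised shape+texture coefficients.

    Under the isotropic-Gaussian face-space assumption, each coefficient is
    an independent standard-normal draw.
    """
    return rng.standard_normal(N_SHAPE + N_TEX)

def bfm_distance(face_a, face_b):
    """Euclidean distance between two faces in face-space coordinates,
    used here as the model's prediction of perceived dissimilarity."""
    return np.linalg.norm(face_a - face_b)

face_a, face_b = sample_face(rng), sample_face(rng)
print(f"predicted dissimilarity ~ face-space distance: {bfm_distance(face_a, face_b):.3f}")
```

Under this reading, a same/different identity judgement can be modelled by thresholding the same distance: pairs closer than some criterion are predicted to be judged as the same person.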
Publisher: Cold Spring Harbor Laboratory