Abstract
Face perception is the basis of many forms of social information exchange, but its underlying mechanisms remain debated. Researchers have theorized two processing pathways underlying face perception: configural processing and featural processing. Featural processing focuses on the individual features of a face, whereas configural processing focuses on the spatial relations among those features. To resolve the debate over the relative contributions of the two pathways to face perception, researchers have proposed a dual processing model in which the two pathways support two different perceptual functions: detecting face-like patterns and identifying individual faces. The dual processing model is based on face perception experiments that primarily use static faces. Because we mostly interact with dynamic faces in real life, generalizing the model to dynamic faces will advance our understanding of how faces are perceived in everyday settings. This paper proposes a refined dual processing model of dynamic face perception, in which expertise in dynamic face perception supports the identification of individual faces and is a learned ability that develops with age. Specifically, facial motion accounts for the advantages of dynamic faces over static faces. This paper highlights two intrinsic characteristics of facial motion that enable these advantages in face perception. Firstly, facial motion provides facial information from various viewpoints, and thus supports the generalization of face perception to unlearned views of a face. Secondly, distinctive motion patterns serve as a cue to a face's identity.