Abstract
Face identification, matching, and verification are fundamental tasks in daily life and are important in social, forensic, and clinical settings. Accurate performance of these tasks requires robust representations of identity that are invariant to environmental changes in lighting, viewpoint, size, and so on, much as in the general case of object detection and recognition. Feature detection and comparison are important to accomplishing these tasks and are well studied both behaviorally and neuroscientifically. However, faces themselves also change in ways that are material to identity and appearance, e.g., by wrinkling, tanning, microblading, shaving, scarring, growing out a beard, losing weight, or getting a face lift. Accurate performance of face-based tasks therefore depends not only on the available face features and context, but also on a robust model of face-specific causal forces. Here, we cast the tasks of face identification, matching, and verification as problems of causal inference. We model identity as a probability distribution over a metric face space, and identity-based tasks as Bayesian model comparison across intuitive causal psychological models of how faces change and how they are sampled in identity-based tasks. The problem of face matching, for example, can be expressed as a comparison between two models of how the face images were generated: (1) they were sampled from two distinct identity distributions, or (2) one was sampled from an identity distribution and the other is a transformation of that sample via one or more causal mechanisms known to the observer. Through computational modeling and demonstrations, we show that observers have a rich understanding of the causal mechanisms that affect identity and appearance and can use that knowledge to make accurate inferences unattainable by approaches that rely only on feature detection and comparison.
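To make the face-matching comparison concrete, one minimal formalization (our notation, introduced here only for illustration and not taken from the paper itself) writes the observer's decision as a posterior odds ratio between the two generative models: under $\mathcal{M}_{\text{diff}}$ the two face images $x_1, x_2$ are sampled from distinct identity distributions, while under $\mathcal{M}_{\text{same}}$ the second image is a causal transformation $T$ of a sample from the same identity distribution as the first:

\[
\frac{P(\mathcal{M}_{\text{same}} \mid x_1, x_2)}{P(\mathcal{M}_{\text{diff}} \mid x_1, x_2)}
= \frac{P(\mathcal{M}_{\text{same}})}{P(\mathcal{M}_{\text{diff}})}
\cdot
\frac{\displaystyle \iint p(x_1 \mid f)\, p\big(x_2 \mid T(f)\big)\, p(T)\, p(f)\, \mathrm{d}T\, \mathrm{d}f}
{\displaystyle \int p(x_1 \mid f)\, p(f)\, \mathrm{d}f \;\int p(x_2 \mid f')\, p(f')\, \mathrm{d}f'}
\]

where $f$ and $f'$ are points in the metric face space, $p(f)$ is an identity distribution over that space, and $p(T)$ is the observer's prior over causal transformations of a face (aging, shaving, weight change, and so on). Under this reading, approaches restricted to feature detection and comparison correspond to evaluating only the likelihood terms $p(x_i \mid f)$, without the marginalization over causal transformations $T$.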