Abstract
When we look at a face, we cannot help but ‘read’ it: beyond simply processing its identity, we also form robust impressions of both transient emotional states (e.g. surprise) and stable personality traits (e.g. trustworthiness). But perhaps the most fundamental and salient traits we extract from faces reflect their social demographics, such as race, age, and gender. Our interpretations of these properties have deep consequences for how we interact with other people. But just how are such features extracted by perceptual (and cognitive) processing? Curiously, despite a vast amount of work on higher-level social properties (such as competence and dominance), there has been very little work on the visual perception of basic demographic properties. Across several experiments, we tested how quickly demographic properties are extracted when viewing faces. Observers viewed unfamiliar full-color photographs of faces for variable durations, after which the faces were masked. We then correlated percepts of race, age, or gender from those faces with the same percepts from independent unspeeded (and unmasked) judgments. The results clearly demonstrated that demographic features are extracted highly efficiently: observers showed near-perfect agreement with their own unspeeded judgments (and with the ground truth) with only 50 ms of exposure, and even (in the cases of race and gender) by 34 ms. This was true even when the property to be reported wasn’t revealed until the face had disappeared. We also replicated these results in an independent group of observers who viewed faces that were tightly cropped and matched for mean luminance, thus controlling for several lower-level visual properties. Critically, we also observed much slower and less accurate performance for inverted faces, signaling a role for holistic processing. Collectively, these results demonstrate that the visual system is especially fast and efficient at extracting demographic features from faces at a glance.
Acknowledgement: SU was supported by an NSF Graduate Research Fellowship. BJS was supported by ONR MURI #N00014-16-1-2007.