Abstract
Our ability to recognize faces under different viewing conditions is both superb and nearly instantaneous. However, disguised faces seem to be the exception to this rule. Specifically, recognition accuracy for previously viewed individuals can decrease dramatically when their facial features (hairstyle, eyeglasses, and facial hair) are altered or disguised (added, subtracted, or changed; Cheng & Tarr, VSS 2003). Here we investigate whether it is possible to train observers to better recognize such disguised faces. We evaluated four training protocols designed to improve explicit recognition of faces in which one or more features have been altered between study and test. Performance was measured using an old/new recognition memory paradigm. Protocol 1 tested whether foreknowledge of disguises would affect performance: observers were told explicitly that the faces from the learning phase would be disguised when seen in the test phase. Protocol 2 tested whether a more invariant representation of an individual would be learned if the same face was studied with several different disguises. Protocol 3 tested whether disguise-invariant representations are learned when the same, unaltered face is studied in multiple viewpoints. Protocol 4 tested whether caricaturing the disguised test faces would improve performance, the logic being that caricatured faces are typically recognized faster and more accurately than veridical versions of the same faces (e.g., Rhodes et al., 1987; Benson & Perrett, 1991). These four training protocols were also compared to a control condition: passive viewing of faces followed by a surprise recognition memory test. Observers' performance in these four training conditions suggests that some types of training can improve recognition of faces across disguises. Such results have implications both for our understanding of the process of face recognition and for applying this understanding to problems in the real world.
Funded by the Perceptual Expertise Network (#15573–S6), a collaborative award from the James S. McDonnell Foundation, and NSF award #BCS–0094491.