Abstract
A key theoretical challenge in human face recognition is to determine what information is critical for judgements of identity. For example, as we move about, or as gaze or expression changes, the size and shape of a face image on the retina also change. The visual system must ignore these ambient sources of image variation to facilitate recognition. In this study, we used principal components analysis to reveal the image dimensions underlying a large set of naturally varying face images. In Experiment 1 (n=78), we asked how the recognition of familiar faces was affected when image dimensions were systematically removed from the faces. We found that recognition increased when the early image dimensions were removed. These image dimensions appear to reflect ambient variation in images that is not important for recognition. However, recognition then decreased when intermediate dimensions were removed, suggesting that these image dimensions contain the critical information for recognizing familiar faces. In Experiment 2 (n=102), we asked the complementary question of which image dimensions are important when learning new faces. Again, we found that removing the early image dimensions from the training images had a minimal effect on learning new faces (when tested with unmanipulated images). In contrast, removing an intermediate band of image dimensions significantly reduced subsequent recognition of the learnt faces. Finally, in Experiment 3 (n=78), we asked whether these critical intermediate image dimensions are organized according to a norm-based or an exemplar-based model. A norm-based model predicts that recognition should increase when the intermediate image dimensions are caricatured. However, we found that recognition rates decreased when the critical intermediate dimensions were caricatured. These findings support an exemplar-based model in which a narrow band of image dimensions is critical for the learning and subsequent recognition of face identity.
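To make the manipulations described above concrete, the sketch below illustrates one way to remove or caricature a band of principal components from face images. This is a minimal illustration, not the authors' pipeline: it assumes an array X of aligned, flattened face images, and the band indices, component count, and caricature factor are placeholder values rather than parameters from the study.

```python
# Minimal sketch (assumed setup, not the authors' code): remove or caricature a
# band of principal components from face images. X is assumed to be an
# (n_images, n_pixels) array of aligned, flattened face images.
import numpy as np
from sklearn.decomposition import PCA

def manipulate_band(X, band, mode="remove", factor=1.5, n_components=100):
    """Project faces into PCA space, then zero out ('remove') or exaggerate
    ('caricature') the coefficients within the given band of components."""
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(X)          # each face as component coefficients
    lo, hi = band
    if mode == "remove":
        coeffs[:, lo:hi] = 0.0             # delete these image dimensions
    elif mode == "caricature":
        coeffs[:, lo:hi] *= factor         # exaggerate deviation from the mean face
    return pca.inverse_transform(coeffs)   # reconstruct the manipulated images

# Illustrative band boundaries (hypothetical, not taken from the paper):
# X_early_removed = manipulate_band(X, band=(0, 10), mode="remove")
# X_mid_removed   = manipulate_band(X, band=(10, 30), mode="remove")
# X_mid_caric     = manipulate_band(X, band=(10, 30), mode="caricature")
```

Because the coefficients measure each face's deviation from the mean of the image set, zeroing a band discards those image dimensions, while multiplying the band exaggerates the face's distance from the average, which is the standard sense of caricaturing in PCA-based face spaces.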