Abstract
Current models of face recognition are primarily concerned with the role of perceptual experience and the nature of the perceptual representation that enables face identification. These models overlook the main goal of the face recognition system, which is to recognize socially relevant faces. We therefore propose a new account of face recognition according to which faces are learned from concepts to percepts. This account highlights the critical contribution of the conceptual and social information associated with faces to face recognition. Our recent studies show that conceptual/social information contributes to face recognition in two ways. First, faces that are learned in a social context are better recognized than faces that are learned based on their perceptual appearance. These findings indicate the importance of converting faces from a perceptual to a social representation for face recognition. Second, we found that conceptual information significantly accounts for the visual representation of faces in memory, but not in perception. This was the case both for human perceptual and conceptual similarity ratings and for the representations generated by unimodal deep neural networks, which represent faces based on visual information alone, and multimodal networks, which represent both visual and conceptual information about faces. Taken together, we propose that the representation generated for faces by the perceptual and memory systems is determined by social/conceptual factors, rather than by our passive perceptual experience with faces per se.