Abstract
Successful face recognition involves the ability to generalize recognition across within-person variability in appearance (e.g., lighting, expressions). Research shows that we are less tolerant of within-person variability for unfamiliar faces1, and particularly for other-race faces2,3. Less is known about how face similarity influences adults’ ability to build stable face representations that are tolerant to variability in appearance, and whether this process differs for own- versus other-race faces. We applied a sorting task1 to assess people’s ability to recognize faces across variability. White and East Asian adults (currently n = 35) were handed a stack of 40 photographs of two identities and were asked to sort them into piles by identity, such that each pile contained only photos of the same person. Adults were not told the correct number of identities and were randomly assigned to sort either own- or other-race faces, with identity pairs that looked either similar or dissimilar. We analyzed the number of perceived identities (i.e., the number of piles created) and intrusion errors (i.e., photos of different people placed in the same pile). Preliminary analysis of data from the White participants (n = 27) indicated no significant main effects of, or interaction between, face race and similarity on the number of perceived identities (ps > .46; Figure 1). However, we found a significant interaction between face race and similarity on intrusion errors (F(1) = 4.78, p = .039, η² = .17; Figure 2). Consistent with past research4, this finding suggests that the ability to discriminate between identities is impaired for other-race compared to own-race faces, particularly when the faces look extremely similar (i.e., are densely clustered in face space5). People’s ability to form stable facial representations might be influenced by the way in which own- and other-race faces are mentally represented in face space, making this research theoretically important.