Abstract
Unfamiliar face identity processing is highly variable across individuals. For many years, studies have sought to determine which factors underlie successful performance of this task, typically using highly controlled laboratory settings. While such tests help isolate the different variables influencing face processing, they also confront observers with unrealistic situations and stimuli. Therefore, the degree to which observed in-lab performance is informative about real-life proficiency remains unclear. Here, we present normative data from a large group of individuals for two ecologically valid, but underused, tests of unfamiliar face matching. The Facial Identity Card Sorting Test (FICST; n=218) (Jenkins et al., 2011) assesses the ability to process facial identity despite superficial image variations, while the Yearbook Test (YBT; n=252) (Bruck et al., 1991) investigates the impact of age-related changes in facial appearance. A subsample of these observers (n=181) also took part in three more commonly used tests: one assessing face recognition (Cambridge Face Memory Test long form, CFMT+) and two assessing face perception (Expertise in Facial Comparison Test, EFCT; Person Identification Challenge Test, PICT). Focusing on the top performers for each test, we found that the YBT and FICST predicted top performance on the CFMT+ better than the EFCT and PICT did, and vice versa. Our observations indicate that individuals’ unfamiliar face identity processing abilities should be assessed with multiple tests addressing different aspects of this ability. Moreover, if in-lab performance is to be used to predict individuals’ real-life face processing proficiency, standard controlled tests should be paired with more ecologically valid assessments that resemble real-life challenges.