Abstract
Face recognition abilities vary greatly among neurotypical people, but the causes of these variations are not well understood. Here, we show that the use of specific visual information from faces predicts face identification ability. Abilities were measured in 96 adult participants before we evaluated the visual information they used. These participants included two developmental prosopagnosics (DP) and six super-recognizers (SR): groups of people with extraordinarily low (DP) or high (SR) face recognition abilities, representing the extremes of the ability spectrum. Utilization of visual information was assessed with the Bubbles method. On each of 1,000 trials, participants were asked to identify a known celebrity's face. The stimuli were spatially sampled by applying a mask of randomly positioned Gaussian windows (bubbles) revealing visual information from the faces at five spatial scales. A regression was then applied between the sampled information and accuracy on each trial, determining which information at each scale was systematically sampled when participants correctly identified faces. The result is a z-scored classification image showing the visual information used by each participant. The pixels in these classification images were then divided into a regular grid of 35 rectangles. A second-order regression was applied, for each scale, between ability scores and the average utilization z-scores of the pixels in each rectangle. In the two finest scales, the extent to which specific spatial visual information is used predicts abilities. In the finest scale (28-56 cycles per face [cpf]), the best predictors are the eye areas, the nose, and the left side of the face (adjusted R-squared = 0.34; p = .002). In the second-finest scale (14-28 cpf), the best predictor is the use of the left eye (adjusted R-squared = 0.44; p < .001). We conclude that individual differences in face recognition abilities are partly explained by the extent to which specific face information is used.
Meeting abstract presented at VSS 2017
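
The analysis pipeline described in the abstract can be illustrated with a brief sketch. The Python snippet below is a minimal, hypothetical illustration of a Bubbles-style analysis for a single participant and a single spatial scale; the image size, bubble count and width, grid layout, and the simple least-squares weighting are assumptions made for illustration and are not the authors' actual implementation.

```python
# Minimal, illustrative sketch of a Bubbles-style analysis (one participant,
# one spatial scale). Parameters and the regression step are assumptions for
# illustration only, not the study's actual code.
import numpy as np

rng = np.random.default_rng(0)
n_trials, size, n_bubbles, sigma = 200, 140, 30, 5.0  # fewer trials than the study's 1,000, for a quick run

def bubble_mask(size, n_bubbles, sigma, rng):
    """Sum of randomly positioned Gaussian windows ('bubbles'), clipped to [0, 1]."""
    yy, xx = np.mgrid[0:size, 0:size]
    mask = np.zeros((size, size))
    for cy, cx in rng.integers(0, size, size=(n_bubbles, 2)):
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

# Simulated trials: one random mask per trial and a placeholder accuracy vector.
masks = np.stack([bubble_mask(size, n_bubbles, sigma, rng) for _ in range(n_trials)])
accuracy = rng.integers(0, 2, n_trials).astype(float)

# Regress accuracy on the per-pixel mask values and z-score the weights:
# the result is a classification image highlighting regions whose visibility
# systematically co-occurred with correct identification.
X = masks.reshape(n_trials, -1)
weights = (accuracy - accuracy.mean()) @ (X - X.mean(axis=0))
classification_image = ((weights - weights.mean()) / weights.std()).reshape(size, size)

# Average the classification image within a regular grid of 35 rectangles
# (an assumed 5 x 7 layout). Across participants, these regional means would
# then enter the second-order regression against ability scores.
region_means = classification_image.reshape(5, size // 5, 7, size // 7).mean(axis=(1, 3))
print(region_means.shape)  # (5, 7) -> 35 regional utilization scores
```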