Abstract
How information sampling contributes to accuracy in face recognition remains unknown. Here we address this question by assessing the computational value of face information actively sampled by human observers. We assessed computational value by measuring the identity matching accuracy of Deep Neural Networks (DNNs) operating on face information sampled by human observers in a prior study. Using eye-tracking data from this prior study, we reconstructed the visual information sampled during the learning phase of a face recognition task, in which super-recognizers and typical viewers had viewed faces through gaze-contingent viewing apertures. To generate static images representing the sampled information, we convolved face information with retinal filters centered on gaze fixations. After controlling for the amount of face information available at the trial level, we found improved identity matching accuracy in 9 DNNs when using human-guided visual sampling compared to fixations distributed randomly across the face stimuli. Importantly, DNNs were also sensitive to individual differences in viewers' face identity processing ability, showing higher accuracy with visual information sampled by super-recognizers than by typical viewers. These findings confirm that humans preferentially sample face parts that contain more computationally valuable face identity information. Moreover, super-recognizers sample face information of higher computational value than typical viewers do. This implies that super-recognizers' superior ability is partly attributable to the visual information they sample during face learning.
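For concreteness, the sketch below illustrates one way fixation-centered retinal filtering could be approximated: a sharp face image is blended with a blurred copy according to Gaussian apertures centered on gaze fixations. The function name, parameter values, and the Gaussian-aperture acuity model are illustrative assumptions, not the reconstruction pipeline used in the study.

```python
# Illustrative sketch (not the authors' pipeline): approximate a "retinal filter"
# by blending a sharp face image with a blurred copy, weighted by Gaussian
# apertures centered on gaze fixations. All parameter values are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, fixations, aperture_sigma=40.0, blur_sigma=8.0):
    """Blend sharp and blurred versions of a grayscale image (H x W)
    according to Gaussian apertures centered on (x, y) fixations."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Acuity map: near 1 at fixations, falling off with distance (hypothetical model).
    acuity = np.zeros((h, w))
    for x, y in fixations:
        acuity = np.maximum(
            acuity,
            np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * aperture_sigma ** 2)),
        )
    blurred = gaussian_filter(image, sigma=blur_sigma)  # coarse peripheral information
    return acuity * image + (1.0 - acuity) * blurred    # sharp at fixation, blurred elsewhere

# Example: a single fixation on a 256 x 256 stand-in face image.
face = np.random.rand(256, 256)
sampled = foveate(face, fixations=[(96, 110)])
```

The resulting image could then be passed to a DNN for identity matching; the blending scheme above is only one of several ways to model foveated sampling.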