Abstract
How is task-dependent feature diagnosticity reflected in the behavioral and neural patterns underlying face perception? We investigated the neural code subserving face processing using a computational framework developed by Ullman, Vidal-Naquet, and Sali (2002). Their method, originally designed for automatic face detection but inspired by human face processing, was extended here to the task of face individuation. We computed the diagnosticity of facial features (image fragments) for individuation as the mutual information between face identity and fragment presence across a set of faces varying in pose and expression. We found that individuation diagnosticity varies systematically with feature size and location across the face. Behavioral results from an individuation task with fragments of equal size indicate that human observers are sensitive to the informativeness of facial features as measured by the algorithm: participants individuated faces faster and more accurately as feature diagnosticity increased. Functional MRI was then used to explore whether this sensitivity is mirrored at the level of neural processing in face-selective areas such as the right fusiform gyrus. These results were further extended using a face detection task, in which diagnosticity for face detection likewise modulated behavioral indices of performance along with neural responses. Finally, we examined the differences between the results obtained with the two tasks. Our results reinforce the idea that feature codes for object recognition are computed in a task-specific manner and suggest that image fragments provide a functionally meaningful descriptor of the representations used by the human visual system. More generally, we conclude that this computational framework provides an effective tool for modeling visual object recognition in humans, as well as a bridge to automatic recognition systems.
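To make the diagnosticity measure concrete, the sketch below shows one way the mutual information between face identity and fragment presence could be estimated from binary detections. This is a minimal illustrative implementation, not the authors' code: it assumes fragment presence has already been binarized (e.g., by thresholding a similarity measure such as normalized cross-correlation against each face image), and the function name and array layout are our own.

```python
import numpy as np

def fragment_diagnosticity(presence, identities):
    """Plug-in estimate of I(identity; fragment presence) in bits.

    presence   : (n_images,) bool array, True where the fragment was
                 detected in the image (assumed pre-thresholded)
    identities : (n_images,) array of face-identity labels
    """
    presence = np.asarray(presence, dtype=bool)
    identities = np.asarray(identities)
    mi = 0.0
    for f in (True, False):                       # fragment present / absent
        p_f = np.mean(presence == f)
        if p_f == 0:
            continue
        for c in np.unique(identities):           # each face identity
            p_c = np.mean(identities == c)
            p_cf = np.mean((identities == c) & (presence == f))
            if p_cf > 0:                          # 0 * log(0) contributes 0
                mi += p_cf * np.log2(p_cf / (p_c * p_f))
    return mi

# Toy usage: a fragment detected only in images of identity "A" is
# maximally diagnostic for telling "A" apart from "B".
detections = np.array([True, True, False, False])
labels = np.array(["A", "A", "B", "B"])
print(fragment_diagnosticity(detections, labels))  # 1.0 bit
```

Under this scheme, fragments of different sizes and locations can be ranked by the score above, which is the quantity the behavioral and fMRI analyses relate to performance and neural response.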