Abstract
It is now well established that face information is represented at multiple spatial scales. However, research on face recognition has so far lacked a technique that identifies the specific information humans locally represent at different scales, for different face categorization tasks. To address this issue, we used the Bubbles technique of Gosselin and Schyns (in press) in three categorization tasks (identity, gender, and expressive or not) on the same 20 face stimuli. To compute the experimental stimuli, we decomposed the original faces into 6 bands of spatial frequencies of one octave each, at 2.81, 5.62, 11.25, 22.5, 45, and 90 cycles per face, from coarse to fine, respectively. Information at each spatial frequency bandwidth was partially revealed by a number of randomly located Gaussian apertures ("bubbles") forming a mask (the standard deviations of the bubbles were 2.15, 1.08, .54, .27, and .13 deg of visual angle, from coarse to fine scales, so that each bubble revealed 3 cycles of information regardless of scale). To generate an experimental stimulus, we simply summed the information revealed at each scale; the number of bubbles per image was adjusted automatically to reveal just enough face information to maintain a 75% correct categorization criterion. Three independent groups of 20 subjects, and three ideal observers, each resolved a different categorization of the same faces (identity, gender, and expressive or not). For each spatial scale, Bubbles isolated the different information that human and ideal observers used to resolve these categorizations. From this information, we synthesized the effective stimuli of each task, for both humans and ideal observers, for comparison purposes.
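The stimulus-generation procedure described above (band decomposition, random Gaussian bubbles per scale, summation of the revealed information) can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it uses a difference-of-Gaussians cascade as a stand-in for the octave-wide spatial frequency bands, and the bubble counts, image size, and per-scale sigmas are placeholder values, not the parameters reported in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

def decompose(img, n_bands=6):
    """Split an image into n_bands bands (fine -> coarse) with a
    difference-of-Gaussians cascade; sigma doubles per step (~one octave).
    The bands sum back to the original image exactly."""
    bands = []
    residual = img.astype(float)
    for i in range(n_bands - 1):
        low = gaussian_filter(residual, sigma=2.0 ** i)
        bands.append(residual - low)   # detail removed at this step
        residual = low
    bands.append(residual)             # coarsest (lowpass) residual
    return bands

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Mask of n_bubbles randomly located Gaussian apertures, scaled to [0, 1]."""
    impulses = np.zeros(shape)
    ys = rng.integers(0, shape[0], size=n_bubbles)
    xs = rng.integers(0, shape[1], size=n_bubbles)
    impulses[ys, xs] = 1.0
    mask = gaussian_filter(impulses, sigma)
    return mask / mask.max()

# Toy 64x64 "face"; bubble std dev grows with coarseness, mirroring the
# abstract's coarse-to-fine normalization (values here are placeholders).
face = rng.standard_normal((64, 64))
bands = decompose(face, n_bands=6)
sigmas = [1, 2, 4, 8, 16, 32]          # per-band bubble sigma, fine -> coarse

# Experimental stimulus: sum of the information revealed at each scale.
stimulus = sum(band * bubble_mask(band.shape, n_bubbles=10, sigma=s, rng=rng)
               for band, s in zip(bands, sigmas))
```

In the actual experiment, the number of bubbles would be adjusted online to hold each observer at the 75% correct criterion; here it is fixed for simplicity.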