Visual information from different areas of the face does not appear to contribute equally to human observers' ability to process faces (Buchan, Pare, & Munhall,
2007). In categorization tasks such as gender recognition and expression detection, subjects were found to use different visual information from the same visual input depending upon task (Gosselin & Schyns,
2001). The
bubbles method has been used to reveal the diagnostic information employed in these categorization tasks. Essentially, observers are shown stimuli whose contrast is modulated by Gaussian windows of various sizes, distributed at random across the image. A record is kept of the locations and extents of the windows that led to accurate performance, thereby identifying the image regions on which discrimination performance depends. The method has since been used to identify critical regions in a wide variety of categorization tasks, such as infant perceptual categorization (Humphreys, Gosselin, Schyns, & Johnson,
2006), perception of ambiguous figures (Bonnar, Gosselin, & Schyns,
2002), categorization of natural scenes (McCotter, Gosselin, Sowden, & Schyns,
2005), spatiotemporal dynamics of face recognition (Vinette, Gosselin, & Schyns,
2004), and even pigeons' visual discrimination behavior (Gibson, Wasserman, Gosselin, & Schyns,
2005). In the majority of these studies, the stimuli were static images of faces, the subjects were human or animal observers, and the tasks were binary categorizations, e.g., “Is the face male or female? Expressive or non-expressive?”
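To make the bubbles procedure concrete, the following Python sketch generates randomly placed Gaussian windows, reveals a stimulus only through them, and accumulates the masks from correct trials into a rough diagnosticity map. It is an illustration under assumed parameters (image size, number of bubbles, window width are arbitrary placeholders), not the implementation used in the studies cited above; the observer's response is simulated here, whereas in an experiment it would come from the participant.

```python
import numpy as np

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Build a mask of randomly placed Gaussian 'bubbles'.

    Each bubble is a 2-D Gaussian window centred at a random pixel;
    the summed mask is clipped to [0, 1] so overlapping bubbles saturate.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=float)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def apply_bubbles(image, mask, background=0.5):
    """Modulate contrast: show the image through the bubbles and a
    uniform mid-grey background everywhere else."""
    return mask * image + (1.0 - mask) * background

rng = np.random.default_rng(0)
image = rng.random((128, 128))          # stand-in for a face stimulus
correct_sum = np.zeros(image.shape)     # masks from correct trials
total_sum = np.zeros(image.shape)       # masks from all trials

for trial in range(200):
    mask = bubble_mask(image.shape, n_bubbles=15, sigma=8.0, rng=rng)
    stimulus = apply_bubbles(image, mask)
    correct = rng.random() < 0.5        # placeholder for the observer's response
    total_sum += mask
    if correct:
        correct_sum += mask

# Regions whose exposure tends to accompany accurate responses
# stand out in this ratio (a crude classification image).
diagnostic_map = correct_sum / np.maximum(total_sum, 1e-9)
```

In practice the map would be built from many more trials and thresholded statistically; the point of the sketch is only the core logic of random Gaussian apertures plus accuracy-weighted accumulation.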