Visual event-related potential (ERP) studies, which measure transient changes in the brain's electrical activity that are time-locked to the presentation of an image, have also been used to examine category-specific responses. These studies have consistently found that faces elicit a negative potential (the N170) around 150–200 ms post-stimulus onset that is maximal over occipitotemporal scalp regions (Bentin, Allison, Puce, Perez, & McCarthy,
1996; Rossion & Jacques,
2008). The N170 is characterized by a larger amplitude and a shorter latency in response to faces than to nonface objects (Bentin et al.,
1996; Botzel, Schulze, & Stodieck,
1995; Carmel & Bentin,
2002; Eimer,
2000b; George, Evans, Fiori, Davidoff, & Renault,
1996; Itier & Taylor,
2004; Jeffreys,
1989; Rousselet, Husk, Bennett, & Sekuler,
2008). In addition, a face-inversion effect, consisting of a delayed and more negative N170 response to inverted faces but not to inverted objects, has been considered a marker of face-specific processing (Bentin et al.,
1996; Eimer,
2000a; Jacques, d'Arripe, & Rossion,
2007; Rossion et al.,
2000). In these studies, the defining signature of category-specificity has typically been a difference in the amplitude or timing of an ERP component between two stimulus categories, presumably reflecting differences in the perceptual processing of structural information. However, it has been argued that the N170 may not reflect category-specific neural processing per se, because differences in basic low-level image properties within and between categories often contribute to the response (Johnson & Olshausen,
2003; Pernet, Schyns, & Demonet,
2007; Rousselet, Gaspar, Wieczorek, & Pernet,
2011; Rousselet, Husk, Bennett, & Sekuler,
2007; Rousselet et al.,
2008; Rousselet, Pernet, Caldara, & Schyns,
2011; VanRullen & Thorpe,
2001). Nevertheless, multiple lines of evidence collectively support the conclusion that, despite similar computational processing of individual local visual features, there exist qualitatively and quantitatively selective neural mechanisms for extracting higher-level structural information across image categories.
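As a rough illustration of the time-locked averaging and peak measures that underlie such ERP comparisons, the following Python sketch uses only NumPy and simulated single-trial data for a hypothetical occipitotemporal channel; it is not a reconstruction of any cited study's analysis pipeline. It averages epochs for each stimulus category and takes the most negative deflection between 130 and 200 ms as the N170 peak amplitude and latency.

import numpy as np

def erp_average(epochs):
    # Average single-trial epochs (n_trials x n_samples) into one ERP waveform.
    return epochs.mean(axis=0)

def n170_peak(erp, times, window=(0.130, 0.200)):
    # Most negative deflection within the search window, relative to stimulus onset.
    mask = (times >= window[0]) & (times <= window[1])
    segment = erp[mask]
    idx = np.argmin(segment)
    return segment[idx], times[mask][idx]

# Simulated data (illustration only): 100 trials per category, one
# occipitotemporal channel, sampled at 500 Hz from -100 to +500 ms.
rng = np.random.default_rng(0)
times = np.arange(-0.1, 0.5, 1 / 500.0)
n170_shape = -np.exp(-((times - 0.170) ** 2) / (2 * 0.015 ** 2))
faces = rng.normal(0.0, 2.0, (100, times.size)) + 6.0 * n170_shape
objects = rng.normal(0.0, 2.0, (100, times.size)) + 3.0 * n170_shape

for label, epochs in (("faces", faces), ("objects", objects)):
    amp, lat = n170_peak(erp_average(epochs), times)
    print(f"{label}: N170 peak {amp:.1f} µV at {lat * 1000:.0f} ms")

In practice, such measures would be computed per participant after filtering, baseline correction, and artifact rejection, and amplitude and latency differences between conditions would then be tested statistically.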