Research Article  |   March 2007
Added noise affects the neural correlates of upright and inverted faces differently
Author Affiliations
  • Bethany L. Schneider
    Departments of Psychological and Brain Sciences and Cognitive Science, Indiana University, Bloomington, IN, USA. bschneid@indiana.edu
  • Jordan E. DeLong
    Departments of Psychological and Brain Sciences and Cognitive Science, Indiana University, Bloomington, IN, USA. jodelong@indiana.edu
  • Thomas A. Busey
    Departments of Psychological and Brain Sciences and Cognitive Science, Indiana University, Bloomington, IN, USA. http://www.indiana.edu/~busey. busey@indiana.edu
Journal of Vision March 2007, Vol.7, 4. doi:https://doi.org/10.1167/7.4.4
Abstract

In five experiments, we examine the neural correlates of the interaction between upright faces, inverted faces, and visual noise. In Experiment 1, we examine a component termed the N170 for upright and inverted faces presented with and without noise. Results show a smaller amplitude for inverted faces than upright faces when presented in noise, whereas the reverse is true without noise. In Experiment 2, we show that the amplitude reversal is robust for full faces but not eyes alone across all noise levels. In Experiment 3, we vary contrast to see if this reversal is a result of degrading a face. We observe no reversal effects. Thus, across conditions, adding noise to full faces is a sufficient condition for the N170 reversal. In Experiment 4, we delay the onsets of the faces presented in noise. We replicate the smaller N170 for inverted faces at no delay but observe partial recovery of the N170 for inverted faces at longer delays in static noise. Experiment 5 demonstrates the interaction in low contrast at a behavioral level. We propose a model in which noise interacts with the processing properties of inverted faces more so than upright faces.

Introduction
In this article, we examine how upright and inverted faces are processed differently by exploring the degree to which these two types of visual stimuli interact with a third stimulus set—visual noise. If noise interacts with the neural substrates of upright and inverted faces differently, the nature of the interaction, along with the spatial and temporal locus of the effects, places constraints on models of expertise and the development of configural processing. Within the face-processing literature, several different techniques have been used to suggest how upright and inverted faces might be treated differently by the visual system. Yin (1969) first established the concept of processing differences between upright and inverted faces by showing distinct behavioral advantages for upright faces as compared with inverted faces. He labeled this phenomenon the face inversion effect. Since this seminal work, the underlying theme throughout this literature seems to be that upright faces may be processed to some degree holistically, such that the perception of an individual feature is affected by the context in which it is presented (see Maurer, Grand, & Mondloch, 2002; Rossion & Gauthier, 2002, for reviews). This literature has emphasized the importance of relational information for holistic processing (Tanaka & Farah, 1993; Thompson, 1980). These effects are seen for upright faces but decrease with face inversion, leading to the suggestion that upright faces may be processed differently than inverted faces (e.g., Farah, Wilson, Drain, & Tanaka, 1998; Itier & Taylor, 2004; McKone, Martini, & Nakayama, 2001, 2003; Rossion & Gauthier, 2002). This finding has been further explored and replicated in alternate behavioral paradigms such as old–new recognition tasks (Carey, Diamond, & Woods, 1980; Phillips & Rawles, 1979; Scapinello & Yarmey, 1970) as well as two-alternative forced-choice tasks (Carey & Diamond, 1977; Leder & Bruce, 2000; Scapinello & Yarmey, 1970; Tanaka & Farah, 1993). However, it is worth noting that this view has been challenged, and recent research has provided compelling arguments against processing differences between upright and inverted faces (see, e.g., Riesenhuber, Jarudi, Gilad, & Sinha, 2004; Sekuler, Gaspar, Gold, & Bennett, 2004; Yovel & Kanwisher, 2004). 
The face inversion effect has an electrophysiological correlate. A component known as the N170 is thought to represent “the late structural encoding stages of complex visual information processing” (Eimer, 2000). An upright face elicits a very strong N170 response that is thought to originate in parietal/temporal brain regions. This response is seen in both hemispheres but primarily in the right (Bentin, Allison, Puce, Perez, & McCarthy, 1996). The N170 for inverted faces, however, is reliably delayed and produces a larger amplitude than its upright face counterpart, an effect not seen in other types of stimuli (Eimer, 2000; Linkenkaer-Hansen et al., 1998). These latency and amplitude differences constitute the EEG correlates of the face inversion effect. Although many of the behavioral and electrophysiological effects for the face inversion effect have been reliably established, it remains unclear what produces these differences and what this tells us about how upright and inverted face processing differs. 
One way to address this question is to measure differences in brain responses by adding a third stimulus class such as visual noise. Such a procedure was adopted by Linkenkaer-Hansen et al. (1998), who presented both upright and inverted faces in either a no-noise condition or a high spatial frequency pixelated noise condition. In the no-noise condition, they replicate the classic face inversion effect: a larger and delayed N170 for inverted faces compared with upright faces. Intriguingly, they find that when noise patches are added, this relation reverses such that the upright face has a larger N170. However, the reliability and interpretation of this effect were not extensively pursued within their paper, which was more concerned with identifying the emergence of face selectivity in early perceptual processing. 
If this finding is reliable, it would demonstrate that noise affects the neural correlates of upright and inverted faces differently. Previous research has used parametric designs featuring visual noise on upright faces and shown decreased N170 amplitudes and increased latencies as a function of the noise (Jemel et al., 2003) as well as similar M170 patterns (Tanskanen, Näsänen, Montez, Päällysaho, & Hari, 2005; Tarkiainen, Cornelissen, & Salmelin, 2002). These findings have further been correlated neuroanatomically with the bilateral fusiform gyrus and superior temporal gyrus (Horovitz, Rossion, Skudlarski, & Gore, 2004). However, by looking at both upright and inverted faces presented in noise, as done by Linkenkaer-Hansen et al. (1998), we address how noise affects upright and inverted faces differently. This provides a useful launching point for addressing how neurons interact when processing upright and inverted faces within visual noise. One possible pattern is that upright faces are more resistant to interference from other visual stimuli than their inverted counterparts. These differential effects could be an outgrowth of the development of configural processing. 
The goal of this article is to establish the reliability and domain of the Linkenkaer-Hansen et al. (1998) reversal phenomenon and then use different manipulations to draw theoretical conclusions about the properties of the neurons that respond to upright and inverted faces. To address these questions, we developed a paradigm in which we present upright and inverted faces in amplitude-matched noise. By using this controlled noise along with signal-to-noise ratios (SNRs) that keep the overall energy of the display constant while varying the amount of face information, we can isolate the interactions between the neural responses that process upright and inverted faces and those that process the noise from early differences seen in the visual system (e.g., V1 and V2). We rely on electrophysiological recording techniques for their excellent temporal acuity. 
To anticipate our initial results, we replicate, in Experiment 1, the Linkenkaer-Hansen et al. (1998) reversal effect. Experiments 2 and 3 are designed to probe the generality of this effect, whereas Experiment 4 tests the temporal dynamics of the reversal phenomenon. In Experiment 2, we address whether the reversal of the N170 pattern is specific to the SNR used in Experiment 1, as well as whether the reversal is specific to only full faces. In Experiment 3, we ask whether degrading a face by reducing its contrast produces the same effect. In Experiment 4, we use stimulus onset asynchronies (SOAs) to test the temporal dynamic of the reversal phenomenon. Experiment 5 examines the behavioral correlate of this interaction. 
Experiment 1
The goal of Experiment 1 is to replicate and extend the Linkenkaer-Hansen et al. (1998) finding and to investigate the effects of inversion on the N170 resulting from degrading a face by adding noise. 
Methods
Participants
Ten right-handed Indiana University undergraduates (of whom six were male) participated in the study. All had normal or corrected-to-normal vision, and their participation constituted part of their laboratory work or coursework. All were informed of the purpose and details of the experiment. The data of an additional participant were excluded due to a lack of N170 response to the face stimuli. 
Apparatus
The EEG was sampled from 32 channels at 1000 Hz and downsampled to 250 Hz. It was amplified by a factor of 20,000 (Sensorium amplifiers) and low-pass filtered below 50 Hz. Signal recording sites included PO7 and PO8, with a nose reference and forehead ground (Figure 1). All channels had impedances below 5 kΩ, and recording was done inside a Faraday cage. Data were analyzed using the EEGLab toolbox (Delorme & Makeig, 2004), which uses independent component analysis to find components that are readily identifiable as artifacts, such as eyeblinks, eye movements, and muscle activity. The first two types of artifacts were identified with the help of blink and eye-movement calibration trials at the beginning of the experiment as well as their topographical representation on the scalp. Components relating to muscle artifacts were identified by their high-frequency amplitude spectrum and topographical representation. We typically removed between three and eight components per participant, which were subtracted from the raw EEG to eliminate the artifacts. 
Figure 1
 
Scalp channel locations. Channels PO7 and PO8 are in red.
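For readers who want a concrete picture of the artifact-removal step, the sketch below reproduces the same kind of pipeline (downsampling, low-pass filtering, ICA-based removal of blink, eye-movement, and muscle components) in MNE-Python rather than the MATLAB EEGLab toolbox used here; the file name, component count, and excluded component indices are illustrative assumptions, not values from the study.

```python
# Hedged sketch only: the study used EEGLab (MATLAB); this shows an equivalent
# ICA cleaning pipeline in MNE-Python. File name and component indices are
# placeholders, not values from the experiment.
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical recording
raw.resample(250)                      # 1000 Hz acquisition downsampled to 250 Hz
raw.filter(l_freq=None, h_freq=50.0)   # low-pass below 50 Hz, as described above

# Decompose the continuous EEG into independent components.
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)

# In the study, blink, eye-movement, and muscle components were identified from
# calibration trials, scalp topographies, and amplitude spectra; here we simply
# assume components 0-2 were judged artifactual.
ica.exclude = [0, 1, 2]
raw_clean = ica.apply(raw.copy())      # subtract the artifact components from the EEG
```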
Images were shown on a 21-in. (53.34-cm) Mitsubishi color monitor model THZ8155KL running at 120 Hz. Images were approximately 44 in. (112 cm) from the participant. 
Stimuli
A sample stimulus set appears in Figure 2: upright and inverted faces presented without noise or in noise at low and high contrast levels. Stimuli consisted of grayscale frontal views of four faces with neutral expressions generated using a database of facial features. The two male faces were identical to each other with the exception of the eyes (one male face had what we designated as "female" eyes). The same procedure was used for the two female faces (one female face had "male" eyes). The faces subtended a visual angle of 3.6° in width and 4.8° in height. Both the upright and inverted face images were shown randomly across two contrast levels and at either a no-noise level or a moderate-noise level. We used two levels of contrast to ensure that the interaction between inversion and added noise is not due to a scaling effect. Stimulus contrast was defined as the luminance minus the background gray level, divided by the background gray level (which, in our experiments, was 47.8 cd/m²). We produced noise by scrambling the phase of the stimuli; therefore, even with the addition of noise, the total energy in the display was preserved. The noise was generated from the stimulus itself and randomly resampled on each trial. SNRs determined the amount of noise relative to the stimulus on each trial; these ratios range from SNR = 1 (no noise) to SNR = 0 (all noise). For the moderate-noise condition, we used an SNR of .43 at a contrast level of .75 for the bright condition and at a contrast level of .25 for the dim condition. In the no-noise condition (SNR = 1), the bright condition had a contrast level of .75, whereas the dim condition had a contrast level of .25. Because the noise was generated by shifting the phase of the spatial frequencies, the overall energy of the displays was constant across the noise and no-noise conditions. 
Figure 2
 
Sample stimuli for Experiment 1. Stimuli are presented either upright or inverted in both low and high contrast levels at two noise levels: no noise added and noise added.
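As a concrete illustration of the noise construction described above, the sketch below builds a phase-scrambled stimulus in Python/NumPy: the face's Fourier amplitude spectrum is kept fixed (so display energy is constant across noise levels) while its phase is blended toward a random phase according to the SNR. The exact blending rule is not spelled out in the text, so the linear phase interpolation here is an assumption.

```python
# Minimal sketch of SNR-controlled phase scrambling (our reading of the Methods).
import numpy as np

def face_in_noise(face, snr, rng):
    """Blend the face's Fourier phase toward random phase while keeping the
    amplitude spectrum fixed; snr = 1 leaves the face intact, snr = 0 yields
    pure phase-scrambled noise. Because the amplitude spectrum is unchanged,
    total display energy is the same at every SNR (Parseval's theorem)."""
    spectrum = np.fft.fft2(face)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, face.shape)
    mixed_phase = snr * phase + (1.0 - snr) * random_phase   # assumed blending rule
    # Taking the real part is a simplification; a full implementation would make
    # the mixed phase conjugate-symmetric so the inverse FFT is exactly real.
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)))

rng = np.random.default_rng(0)
face = rng.standard_normal((256, 256))             # placeholder for a zero-mean face image
stimulus = face_in_noise(face, snr=0.43, rng=rng)  # moderate-noise level of Experiment 1
```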
Procedure
The participants were given a cover task to maintain vigilance. They were allowed to view the various experimental stimuli to familiarize themselves with the two different sets of eyes (male and female). Once the experiment started, they were instructed to identify the eye-gender categories regardless of the surrounding facial features. The participants were told that on every trial a stimulus would appear at one of two contrast levels and either with or without noise. They were also told that they would hear auditory feedback indicating whether they had responded correctly. The participants responded to the eye gender by keypress on a numeric keypad. Participants were able to view the images freely; although there was no specified fixation point, all images appeared in the same central location on each trial. Participants were also instructed to limit both their body and eye movements while a stimulus was on the screen. 
Each stimulus was presented for 1,000 ms. EEG was recorded from 100 ms prior to stimulus onset to 1,100 ms after stimulus onset. We collapsed conditions across gender and eye match, yielding a total of eight conditions. We presented an equal number of trials ( n = 112) per condition combination, for a total of 896 trials per experiment. 
Results and discussion
The main goal of Experiment 1 was to replicate the Linkenkaer-Hansen et al. (1998) crossover interaction between inversion and added noise in the N170. We analyzed channels PO8 (right parietal occipital) and PO7 (left parietal occipital; see Figure 1) due to the presence of a relatively strong N170 in those channels. 
ERP data
Results for Experiment 1 are shown in Figure 3. The presence of noise reduces the overall wave amplitude compared with the no-noise condition. We define the size of the P1 by extracting the highest amplitude point that occurs around 100 ms; larger P1s therefore have more positive values because the P1 is a positive-going wave. To find each participant's P1 amplitude, we extracted the local amplitude maximum between 110 and 145 ms. We defined the size of the N170 by similar methods: by extracting the lowest amplitude point that occurs around 170 ms. Larger N170s therefore have more negative values because the N170 is a negative-going wave. To find each participant's N170 amplitude, we extracted the local amplitude minimum between 150 and 225 ms; we also took the N170 latency from that local minimum. Amplitude values are displayed in Table 1. The EEG signals were low-pass filtered below 20 Hz in every experiment to make the latencies and amplitudes more stable. This could cause a discrepancy between the values reported in the tables and those displayed in the graphs; however, the general ordering of the amplitudes is preserved across the two. 
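A minimal sketch of the peak-extraction rules just described: the P1 is taken as the local maximum of the averaged waveform between 110 and 145 ms, the N170 as the local minimum between 150 and 225 ms (its latency is the time of that minimum), and the peak-to-peak difference gives the P1-adjusted N170 used later in the analysis. The arrays and sampling step below are placeholders, not the recorded data.

```python
import numpy as np

def component_measures(erp, times):
    """erp: averaged waveform for one channel/condition; times: ms from onset."""
    p1_window = (times >= 110) & (times <= 145)
    n170_window = (times >= 150) & (times <= 225)

    p1_amp = erp[p1_window].max()                     # positive-going P1 peak
    n170_idx = np.argmin(erp[n170_window])
    n170_amp = erp[n170_window][n170_idx]             # negative-going N170 trough
    n170_lat = times[n170_window][n170_idx]           # N170 latency (ms)
    p1_adjusted_n170 = p1_amp - n170_amp              # peak-to-peak measure used later
    return p1_amp, n170_amp, n170_lat, p1_adjusted_n170

times = np.arange(-100, 1104, 4.0)                    # 250 Hz sampling, -100 to 1100 ms
erp = np.zeros_like(times)                            # placeholder averaged waveform
print(component_measures(erp, times))
```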
Table 1
 
Amplitude values for components P1 and N170 across conditions.
P1 amplitude No noise Noise
Upright Inverted Upright Inverted
High contrast
PO7 7.117 (1.249) 9.185 (1.436) 9.479 (1.539) 11.047 (1.577)
PO8 8.503 (2.042) 9.691 (2.037) 10.939 (2.332) 12.170 (2.343)

Low contrast
PO7 7.056 (1.299) 8.668 (1.325) 8.736 (1.159) 10.117 (1.274)
PO8 8.061 (1.866) 9.168 (1.866) 10.190 (1.833) 10.896 (1.888)

N170 amplitude No noise Noise
Upright Inverted Upright Inverted
High contrast
PO7 −1.059 (1.591) −2.618 (1.372) 2.319 (1.566) 3.467 (1.354)
PO8 −1.472 (1.476) −4.197 (1.554) 2.317 (1.788) 3.022 (1.951)

Low contrast
PO7 −1.209 (1.353) −2.334 (1.513) 1.542 (1.466) 3.166 (1.355)
PO8 −1.687 (1.052) −3.982 (1.792) 1.642 (1.770) 2.270 (1.788)
Figure 3a, Figure 3b, Figure 3c, Figure 3d
 
Data from Experiment 1. Top panels refer to upright and inverted faces presented at the high contrast level in channels (a) PO7 and (b) PO8. Bottom panels refer to upright and inverted faces presented at the low contrast level in channels (c) PO7 and (d) PO8.
A repeated measures analysis of amplitude showed a significant main effect of noise in the PO7 and PO8, F(1, 9) = 32.47, p < .001 and F(1, 9) = 28.632, p < .001, respectively. We also replicated the Linkenkaer-Hansen et al. (1998) finding of a reversal of the standard face inversion effect: In noise, the N170 has a smaller amplitude for inverted faces than for their upright counterparts. This reversal is reflected in a significant interaction between noise and rotation in the PO7 and PO8, F(1, 9) = 31.574, p < .001 and F(1, 9) = 18.071, p = .002, respectively. There was no significant main effect of contrast for either the PO7 or the PO8, F(1, 9) = 0.437, p = .525 and F(1, 9) = 0.655, p = .439, respectively; the two contrast levels therefore serve as a within-experiment replication. Throughout our statistical analyses, we used an α level of .05 as the criterion for all tests and discuss significance only in terms of whether this criterion was met. 
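For readers who want to reproduce this kind of analysis, a hedged sketch of a 2 (noise) × 2 (rotation) × 2 (contrast) repeated measures ANOVA on N170 amplitude is shown below using statsmodels; the long-format data frame is a stand-in with random placeholder amplitudes, and the column names are our own, not the authors'.

```python
# Sketch of a within-subject repeated measures ANOVA like the one reported above.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(1, 11):                          # 10 participants, as in Experiment 1
    for noise in ("no_noise", "noise"):
        for rotation in ("upright", "inverted"):
            for contrast in ("high", "low"):
                rows.append({"subject": subject, "noise": noise,
                             "rotation": rotation, "contrast": contrast,
                             "n170_amp": rng.normal()})  # placeholder amplitude (one channel)
data = pd.DataFrame(rows)

result = AnovaRM(data, depvar="n170_amp", subject="subject",
                 within=["noise", "rotation", "contrast"]).fit()
print(result)   # F and p values for each main effect and interaction
```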
Latency results show delayed onsets when a face is degraded but no interaction between added noise and inversion. Whereas the main effect of noise on latency was significant in both the PO7 and PO8, F(1, 9) = 13.195, p = .005 and F(1, 9) = 15.331, p = .004, respectively, the interaction between noise and rotation was not significant, PO7: F(1, 9) = 1.139, p = .314 and PO8: F(1, 9) = 0.007, p = .936. Although noise affects latency as well as amplitude, the effects on amplitude are much more robust and pronounced. Because we are primarily interested in the amplitude reversal observed by Linkenkaer-Hansen et al. (1998), we focus on amplitude analyses in each subsequent experiment. 
Although we are primarily interested in the N170 wave patterns, we also analyzed the P1 via a repeated measures ANOVA to further investigate the effects of our experimental manipulations. The addition of noise significantly increases the P1 as shown in the main effect of noise in both the PO7 and PO8, F(1, 9) = 16.489, p = .003 and F(1, 9) = 16.389, p = .003, respectively. Interestingly, unlike the addition of noise, contrast did not have a significant impact on the overall size of the P1 in either the PO7 or the PO8. This is supported by the lack of significance of the main effect of contrast in both the PO7 and PO8, F(1, 9) = 2.103, p = .181 and F(1, 9) = 1.729, p = .221, respectively. The interaction between noise and rotation also lacked significance for the P1 in both the PO7 and PO8, F(1, 9) < 1 and F(1, 9) < 1, respectively. 
Because we get such robust effects of noise on the P1, to ensure our N170 results were illustrating the noise manipulation and not P1 differences, we equated the N170 for each condition in terms of its P1 value by subtracting the mean N170 value from the mean P1 value for each condition. After such adjustment, our analysis of the variable manipulations still proved similar to our original analysis of the N170. Noise still seems to have a significant debilitating effect on the overall size of the wave in both the PO7 and PO8, F(1, 9) = 7.369, p = .024 and F(1, 9) = 13.57, p = .005. The main effect of brightness also continued to lack significance in both the PO7 and PO8, F(1, 9) = 2.067, p = .184 and F(1, 9) = 2.115, p = .180, respectively. The interaction between noise and rotation also remained significant in PO7 and PO8 after the adjustment, F(1, 9) = 15.842, p = .003 and F(1, 9) = 14.407, p = .004, respectively, which shows that our N170 effects are indeed due to the noise and contrast manipulations and not merely to differences in the P1s. 
Like the N170, when analyzing the P1 in terms of latency, we see an increased latency when noise is present but no interaction between noise and rotation. This is further illustrated in our ANOVA. There is a significant main effect of noise in both the PO7 and PO8, F(1, 9) = 16.871, p = .003 and F(1, 9) = 55.550, p = .000, respectively. The interaction between noise and rotation, however, is only significant on the P1 in the PO7, F(1, 9) = 8.551, p = .017, but not in the PO8, F(1, 9) = 3.945, p = .078. Although noise does affect both the latency and amplitude of the P1, we are primarily focused on the effects of amplitude under our various manipulations. Therefore, we will continue to focus solely on the amplitude values from this point forth in this article. However, this does not negate the importance of latency in investigating properties of object or face perception. 
An increase in temporal jitter of the N170 across trials could potentially decrease the averaged amplitude for inverted faces presented in noise. In other words, greater variability in single-trial N170 latencies for inverted faces presented in noise than for inverted faces presented without noise could cause the averaged N170 to widen and decrease. To test for this, we derived the maximum N170 amplitude value within a window of 120–200 ms and calculated the variance across all trials per condition, per participant. A within-subject repeated measures ANOVA on these values shows an overall main effect of noise, F(1, 13) = 14.391, p = .002, and a main effect of contrast, F(1, 13) = 21.263, p < .001. However, we see no significant difference between the variances for inverted faces presented in noise and inverted faces presented without noise at either contrast level, as shown by the three-way interaction between rotation, contrast, and noise, F(1, 13) < 1. Therefore, temporal jitter cannot explain why we see a decreased N170 for inverted faces presented in noise versus no noise. 
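This jitter check can be made concrete with the sketch below: for each single trial, the peak of the N170 deflection is located in the 120–200 ms window, and the variance of that per-trial measure is computed across trials for each condition. Whether the authors took the variance of the per-trial amplitude or of the per-trial latency is not stated unambiguously, so the function returns both; the trial matrix is a random placeholder.

```python
import numpy as np

def n170_trial_variability(trials, times):
    """trials: (n_trials x n_samples) single-trial EEG for one condition/channel;
    times: ms from stimulus onset. Returns the across-trial variance of the
    per-trial N170 latency and amplitude within 120-200 ms."""
    window = (times >= 120) & (times <= 200)
    segment = trials[:, window]
    peak_idx = np.argmin(segment, axis=1)             # per-trial trough (most negative point)
    peak_latency = times[window][peak_idx]            # per-trial latency (ms)
    peak_amplitude = segment.min(axis=1)              # per-trial amplitude
    return peak_latency.var(ddof=1), peak_amplitude.var(ddof=1)

times = np.arange(-100, 1104, 4.0)
trials = np.random.default_rng(0).standard_normal((112, times.size))  # placeholder trials
print(n170_trial_variability(trials, times))
```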
These results replicate two findings in the literature: (1) the electrophysiological face inversion effect finding in which the inverted face yields a larger amplitude than the upright face when no noise is present and (2) the Linkenkaer-Hansen et al. (1998) finding in which there is a reversal in the N170 wave pattern with the addition of noise in that the inverted face now yields a smaller amplitude than the upright face. 
Behavioral data
Although participants performed a behavioral task during the electrophysiological recordings, it was intended to keep attention on the stimuli and was not designed to investigate the role of noise or stimulus degradation in the processing of upright and inverted faces. It was designed to be neutral to inversion while still allowing participants to respond at different levels of contrast and SNR. For this reason, in Experiment 5, we performed a behavioral experiment more aptly suited to investigating the effects of noise on stimulus type and report those results in a later section. However, for completeness, we report all the behavioral data for each experiment. 
The presence of noise caused an overall decrease in accuracy ( M = .750, SD = .034) compared with the no-noise condition ( M = .891, SD = .030), as reflected in a significant main effect of noise in a repeated measures ANOVA, F(1, 9) = 55.345. However, performance did not differ significantly between the low-contrast condition ( M = .819, SD = .035) and the high-contrast condition ( M = .821, SD = .027), and there was no significant main effect of brightness, F(1, 9) < 1. This parallels the lack of significance for the main effect of brightness in the EEG analysis. Accuracy differences between upright faces ( M = .814, SD = .028) and inverted faces ( M = .827, SD = .035) were not significant, F(1, 9) = 0.377, p = .377. However, we do see a trend toward significance for the interaction between rotation and noise, F(1, 9) = 4.902, p = .054 ( Table 2). 
Table 2
 
Accuracy across conditions for Experiment 1.
Accuracy across conditions
No noise Noise
Upright .893 (.025) .734 (.034)
Inverted .888 (.036) .765 (.036)
Experiment 2 tests the breadth of the reversal effects and determines whether these effects are specific to full faces and to the noise level used in Experiment 1.
Experiment 2
The goal of Experiment 2 is to determine whether the reversal effects seen in Experiment 1 are robust across multiple SNRs and whether these reversal effects are seen only in full faces. To test this latter hypothesis, we will introduce an eyes-alone condition. Inspiration for this condition is found in the literature that suggests that upright full faces are processed to some degree holistically (see Maurer et al., 2002; Rossion & Gauthier, 2002, for reviews). If this is true, then we may disrupt this holistic processing and alter the relationship between inversion and noise by presenting an eyes-alone condition. 
Methods
Participants
Sixteen right-handed observers (of whom four were male) participated in the experiment. One participant was excluded from the combined analysis due to an unpronounced N170 component with the face stimuli. 
Stimuli
A sample stimulus set appears in Figure 4. The same faces as in Experiment 1 were used in this experiment. A separate condition consisting of an eyes-alone stimulus was also presented randomly at upright and inverted orientations. Four specific SNRs were generated using the same methods as described in Experiment 1. SNRs include, from least to most noise, SNR = .6, SNR = .48, SNR = .38, and SNR = .3. All stimuli were presented at a contrast level of .31. By collapsing across gender and eye-match congruency, we used a total of 16 conditions with equal trials per condition (56 trials), for a total of 896 trials per experiment. 
Figure 4a, Figure 4b
 
Sample stimuli for Experiment 2 from lowest noise (SNR = .6) to highest noise (SNR = .3).
Procedure
Procedures were identical to those of Experiment 1.
Results and discussion
ERP data
Results for Experiment 2 are seen in Figure 5 and Table 3. An ANOVA on amplitudes yields a significant main effect of eyes alone versus full face in the N170 for both the PO7 and PO8, F(1, 15) = 16.511, p = .001 and F(1, 15) = 11.374, p = .004, respectively. As in Experiment 1, we obtained our amplitude values for the P1 by extracting the maximum amplitude within the window of 80–150 ms and the N170 by extracting the minimum amplitude within the window of 150–225 ms. Analysis of the P1 also shows a significant main effect of eyes alone versus full face in both PO7 and PO8, F(1, 15) = 35.675, p = .000 and F(1, 15) = 13.558, p = .002, respectively. However, when the N170 is adjusted for P1 differences, this effect is no longer significant in either the PO7 or PO8, F(1, 15) < 1 and F(1, 15) = 1.405, p = .254, respectively. When looking at the amplitude values represented in Figure 5 and Table 3, however, we see that the relative distance between the P1 peak and the N170 trough is similar for both the eyes-alone and full-face conditions; the difference lies in the size of the P1 for each (the eyes-alone condition has a much smaller P1 than the full-face condition). 
Table 3
 
Amplitude values across conditions for components P1 and N170.
P1 N170
Upright Inverted Upright Inverted
Full face (PO7)
SNR = .30 11.981 (1.378) 11.813 (1.158) 6.079 (1.680) 6.460 (1.383)
SNR = .38 12.200 (1.200) 11.546 (1.493) 3.827 (1.503) 5.618 (1.546)
SNR = .48 10.808 (1.400) 11.959 (1.343) 1.060 (1.616) 2.997 (1.524)
SNR = .60 11.162 (1.493) 11.908 (1.426) −0.00474 (1.875) 1.437 (1.776)
Full face (PO8)
SNR = .30 13.010 (1.788) 12.327 (1.771) 6.438 (1.884) 6.684 (1.872)
SNR = .38 12.484 (1.707) 12.593 (1.870) 2.434 (1.602) 5.022 (1.702)
SNR = .48 11.725 (1.982) 13.405 (1.745) −2.317 (1.608) 1.008 (1.598)
SNR = .60 12.715 (1.877) 12.948 (1.947) −3.195 (1.919) −1.372 (1.703)

Eyes alone (PO7)
SNR = .30 10.088 (1.163) 9.617 (1.156) 4.326 (1.430) 3.871 (1.495)
SNR = .38 10.414 (1.169) 8.739 (1.202) 2.102 (1.656) 1.262 (1.675)
SNR = .48 9.802 (1.026) 8.247 (1.152) 1.201 (1.548) −1.083 (1.515)
SNR = .60 9.116 (1.295) 7.457 (1.218) −1.938 (1.389) −4.124 (2.024)

Eyes alone (PO8)
SNR = .30 11.558 (1.503) 10.061 (1.337) 2.854 (1.512) 1.975 (1.294)
SNR = .38 11.306 (1.270) 9.600 (1.345) −.221 (1.726) −1.033 (1.539)
SNR = .48 9.898 (1.082) 9.114 (1.067) −1.834 (1.744) −4.159 (1.721)
SNR = .60 9.755 (1.252) 8.039 (1.331) −5.345 (1.781) −6.641 (1.892)
Figure 5a, Figure 5b, Figure 5c, Figure 5d
 
Data from Experiment 2. Top panels refer to upright and inverted full faces presented at four different noise levels (SNRs) in channels (a) PO7 and (b) PO8. Bottom panels refer to upright and inverted eyes alone presented at four different noise levels (SNRs) in channels (c) PO7 and (d) PO8.
ERP: Full faces
Analysis of the full-face and eyes-alone conditions separately yields a significant main effect for noise for the PO7 and PO8 for full faces in the N170, F(3, 45) = 17.825, p < .001 and F(3, 45) = 32.283, p < .001, respectively, but not the P1 in either the PO7 or the PO8, both F(3, 45) < 1. Again, this reflects the fact that the N170 (not the P1) is affected and reduced when a face is presented in various levels of noise. The fact that we control the overall energy across the noise levels explains the lack of significance for the P1. The main effect of rotation on the N170 is also significant in channels PO7, F(1, 15) = 8.583, p = .010, and PO8, F(1, 15) = 21.612, p < .001, for full faces but not in the P1 in either channel PO7 or channel PO8, F(1, 15) < 1 and F(1, 15) = 1.435, p = .250, respectively. Because we lack a no-noise condition in this experiment, we may not expect an interaction between noise and rotation and, in fact, the interaction between noise and rotation proves to be nonsignificant in either the N170 or the P1 for channels PO7 and PO8 for the full-face condition, N170: F(3, 45) = 0.824, p = .488, F(3, 45) = 2.585, p = .065, and P1: F(3, 45) = 1.380, p = .261 and F(3, 45) = 1.304, p = .285, respectively. 
Due to the fact that we see no significant differences in the P1 for the given variables, it is logical that we retain our significant main effects in the N170 even after adjusting for the P1 values via subtracting the N170 means from the P1 means in each condition. The main effect of noise remains significant in both the PO7 and PO8, F(3, 45) = 13.236, p = .000 and F(3, 45) = 29.531, p = .000, respectively. Also, the main effect of rotation remains significant in channels PO7, F(1, 15) = 4.594, p =.049, and PO8, F(1, 15) = 10.508, p = .005. We also would not expect the interaction between noise and rotation to be significant after adjustment for the same reason as stated above: lack of no-noise condition. Indeed, this is still the case in channels PO7 and PO8, F(3, 45) = 1.305, p = .285 and F(3, 45) < 1, respectively. 
ERP: Eyes alone
Unlike the full-face condition, which shows a significant main effect of noise solely in the N170, analysis of the eyes-alone condition shows a significant main effect of noise in channels PO7 and PO8 for both the P1, F(3, 45) = 5.334, p = .003 and F(3, 45) = 12.277, p = .000, respectively, and the N170, F(3, 45) = 34.664, p < .001 and F(3, 45) = 45.695, p < .001, respectively. Also, unlike the full-face condition, we see a significant main effect of rotation in both the P1 and N170 for channels PO7, F(1, 15) = 17.946, p = .001 and F(1, 15) = 13.651, p = .002, respectively, and PO8, F(1, 15) = 6.123, p = .026 and F(1, 15) = 10.019, p = .006, respectively. Also, like the full-face condition, the eyes-alone condition shows no significant interaction effects between rotation and noise in either channel PO7 or channel PO8 for the P1, F(3, 45) < 1 and F(3, 45) < 1, respectively, or the N170, F(3, 45) = 1.333, p = .276 and F(3, 45) = 1.064, p = .374, respectively. 
When we adjust the N170 for the P1 in the eyes-alone condition, we still see a significant main effect of noise in both the PO7, F(3, 45) = 21.052, p = .000, and PO8, F(3, 45) = 20.824, p = .000. However, our significant main effect of rotation is not preserved in the PO7, F(1, 15) < 1, or PO8, F(1, 15) < 1. This is comparable with our subsequent eyes-alone/noise manipulation. The interaction between noise and rotation also remains insignificant after the N170 adjustment in both the PO7, F(3, 45) = 1.038, p = .385, and PO8, F(3, 45) = 1.470, p = .236. 
To ensure that the eyes were masked as effectively in the eyes-alone condition as in the full-face condition, we ran an additional control experiment in which we presented only eyes-alone stimuli, at both rotations, at a contrast of .245, and at two moderate noise levels (SNR = .38 and SNR = .48) generated by randomly sampling a full-face condition. This noise was then imposed on the eyes-alone stimuli. With this modification, the differences in the P1 disappear, as shown by the lack of significance of the main effect of noise for the P1 in channels PO7 and PO8, F(1, 6) = 1.034, p = .349 and F(1, 6) = 0.157, p = .705, respectively. We do still see a significant main effect of noise in the N170 for channels PO7 and PO8, F(1, 6) = 6.258, p = .046 and F(1, 6) = 14.969, p = .008, respectively. The main effect of rotation also disappears in both the P1 and N170 in channels PO7, F(1, 6) = 0.001, p = .972 and F(1, 6) = 0.166, p = .698, respectively, and PO8, F(1, 6) = 0.023, p = .884 and F(1, 6) = 0.385, p = .558, respectively. Similar to the previous eyes-alone manipulation, we do not see a significant interaction between noise and rotation in either the P1 or the N170 in channel PO7, F(1, 6) = 0.527, p = .495 and F(1, 6) = 0.011, p = .921, respectively, or PO8, F(1, 6) = 0.334, p = .584 and F(1, 6) = 0.542, p = .489, respectively. The absence of significant rotation effects in the eyes-alone condition supports the amplitude reversal we see for full faces: It suggests that noise selectively affects full faces rather than eyes alone. In other words, noise does not elicit amplitude differences between upright and inverted eyes. This is examined further in the next experiment, in which we manipulate contrast levels. 
Behavioral data
Behavioral data follow the EEG pattern, with accuracy declining as a function of SNR, rotation, and whether participants were identifying full faces or eyes alone. Performance decreased as the level of noise increased ( M = .751, SD = .025; M = .724, SD = .021; M = .692, SD = .018; M = .637, SD = .015), as illustrated by the significant main effect of SNR on accuracy, F(3, 45) = 21.697, p < .001. The main effect of rotation was also significant, F(1, 15) = 6.344, p = .024, showing that inversion hindered participants' performance (upright faces: M = .713, SD = .019; inverted faces: M = .689, SD = .017). Participants were also hindered by the additional features in the full-face condition ( M = .629, SD = .019) relative to the eyes-alone condition ( M = .773, SD = .019), as illustrated by the significant main effect of full face versus eyes alone, F(1, 15) = 115.386, p < .001. In addition to these main effects, we see a significant interaction between rotation and stimulus type (full face vs. eyes alone), F(1, 15) = 6.519, p = .022 ( Table 4). Consistent with the EEG results, however, we do not see a significant interaction between SNR and rotation, F(3, 45) = 2.268, p = .093, or a three-way interaction between SNR, rotation, and stimulus type, F(3, 45) = 0.772, p = .516. 
Table 4
 
Behavioral data for Experiment 2.
Interaction between rotation and stimulus type
Eyes alone Full face
Upright .798 (.020) .628 (.021)
Inverted .749 (.020) .630 (.019)
These results demonstrate that the neuronal properties responsible for the amplitude reversal are robust across the noise levels tested and are specific to full faces. 
Experiment 3
The goal of Experiment 3 is to determine whether merely degrading the face causes the amplitude reversal or if this effect is specific to adding noise. Faces were degraded using different contrast levels as well as through the removal of the external features as in Experiment 2. If the effects are specific to noise, then we should expect to see the standard ordering of the N170 for upright and inverted faces. However, if degrading the face through contrast reduction produces the reversal effects, then the property of degradation is responsible for the reversal of the standard N170 ordering for upright and inverted faces. The effects of partial versus full faces are also examined to explore the interaction between contrast-based degradation and holistic processing. 
Methods
Participants
Twenty-three right-handed observers (of whom nine were male) participated in the experiment. 
Stimuli
A sample stimulus set appears in Figure 6. The same faces and eyes-alone stimuli as in Experiment 2 were used in this experiment. However, rather than varying SNRs, each face was presented at one of four different contrast levels generated using the same technique as in Experiment 1. From lowest to highest contrast, these levels were .05, .31, .57, and .83. To remain consistent with previous experiments, we collapsed across gender and eye-match congruency to give a total of 16 conditions. Each condition was presented equally (56 trials), for a total of 896 trials per experiment. 
Figure 6a, Figure 6b
 
Sample stimuli from Experiment 3 from highest contrast (.83) to lowest contrast (.05).
Procedure
The procedures were identical to those of Experiment 2, with the exception that contrast conditions replaced the SNR manipulation. 
Results and discussion
ERP data
The results for Experiment 3 are seen in Figure 7 and Table 5. We see an overall reduction in amplitude for eyes alone: a repeated measures ANOVA on the P1 shows a significant main effect of eyes alone versus full face in channels PO7, F(1, 22) = 42.022, p = .000, and PO8, F(1, 22) = 19.502, p = .000. This pattern is replicated in the N170 in channels PO7, F(1, 22) = 15.449, p = .001, and PO8, F(1, 22) = 19.502, p < .001. However, when we adjust the N170 in terms of the P1, this main effect of full face versus eyes alone is no longer significant in either the PO7, F(1, 22) = 2.648, p = .118, or PO8, F(1, 22) < 1. Maximum amplitudes for the P1 were extracted for each participant within a window of 80–150 ms; minimum amplitudes for the N170 were extracted within a window of 150–225 ms. These values were used in the ANOVA. 
Table 5
 
Amplitude values across conditions for components P1 and N170.
P1 N170
Upright Inverted Upright Inverted
Full face (PO7)
Contrast = .05 6.275 (0.745) 5.537 (0.690) −1.332 (1.225) −2.650 (1.301)
Contrast = .12 8.009 (0.881) 8.091 (0.709) −2.106 (1.162) −3.956 (1.420)
Contrast = .31 9.302 (0.947) 10.109 (0.911) −1.012 (1.119) −3.365 (1.498)
Contrast = .83 9.684 (1.045) 11.695 (1.238) −1.373 (1.166) −2.467 (1.350)

Full face (PO8)
Contrast = .05 6.630 (0.883) 5.975 (0.881) −2.078 (1.153) −3.564 (1.255)
Contrast = .12 7.391 (0.878) 8.532 (0.643) −3.113 (1.162) −4.745 (1.407)
Contrast = .31 8.501 (0.878) 9.889 (0.926) −2.408 (0.986) −5.024 (1.357)
Contrast = .83 8.700 (0.925) 10.801 (1.195) −2.714 (1.077) −4.155 (1.366)

Eyes alone (PO7)
Contrast = .05 3.761 (0.530) 3.029 (0.515) −2.103 (1.131) −2.876 (1.092)
Contrast = .12 5.481 (0.745) 5.282 (0.688) −3.771 (1.582) −4.429 (1.507)
Contrast = .31 7.763 (0.722) 6.306 (0.758) −4.075 (1.502) −4.731 (1.393)
Contrast = .83 8.587 (0.989) 7.696 (0.838) −4.996 (1.540) −4.772 (1.299)

Eyes alone (PO8)
Contrast = .05 3.292 (0.525) 2.603 (0.432) −2.790 (0.947) −4.117 (0.988)
Contrast = .12 5.135 (0.841) 4.969 (0.662) −5.590 (1.356) −6.274 (1.271)
Contrast = .31 7.161 (0.734) 5.714 (0.619) −6.676 (1.231) −6.525 (1.250)
Contrast = .83 8.559 (0.966) 7.046 (0.797) −7.418 (1.430) −6.741 (1.235)
Figure 7a, Figure 7b, Figure 7c, Figure 7d
 
Data from Experiment 3. Top panels refer to upright and inverted full faces presented at four different contrast levels in channels (a) PO7 and (b) PO8. Bottom panels refer to upright and inverted eyes alone presented at four different contrast levels in channels (c) PO7 and (d) PO8.
ERP: Full faces
When separating the two conditions, we see only the standard ordering of the N170 for upright and inverted faces in the full-face condition across all contrast levels. The main effect of rotation for full faces yields significant amplitude differences in the P1 in both the PO7 and PO8, F(1, 22) = 5.404, p = .030 and F(1, 22) = 17.173, p < .001, respectively. We also see a significant main effect of rotation in the N170 in channels PO7 and PO8, F(1, 22) = 8.102, p = .009 and F(1, 22) = 5.176, p = .033, respectively, shown as an increase in the N170 amplitude with face inversion. A repeated measures analysis shows a significant main effect of contrast in the P1 for both the PO7 and PO8, F(3, 66) = 37.791, p = .000 and F(3, 66) = 14.261, p = .000, respectively. This possibly reflects the low-level changes in overall stimulus energy with the decreasing contrast levels. This interpretation is supported by previous research by Avidan et al. (2002), who varied the contrast of face (and other) stimuli and found strong contrast dependence in early visual-processing stages, with contrast invariance emerging and being maintained at later processing stages (starting at LOC). It seems reasonable to assume that the P1 differences reflect this early contrast dependence. However, we will not pursue this in this article. 
When analyzing the N170 amplitudes, we see a significant main effect for contrast in the PO7, F(3, 66) = 2.812, p = .046, but not in the PO8, F(3, 66) = 1.443, p = .238, reflecting a relative decrease in the N170 amplitude in the PO7 with the degradation of the face via contrast manipulations. The interaction between these two factors, contrast and rotation for full faces, for both the PO7 and PO8 is also significant in the P1, F(3, 66) = 4.319, p = .008 and F(3, 66) = 4.574, p = .006, respectively, but insignificant in the N170, F(3, 66) = 1.545, p = .211 and F(3, 66) = 1.143, p = .338, respectively. The degradation of the face due to contrast has no significant impact upon the rotation ordering of the amplitude in the N170 brainwave. 
When we adjust the N170 in terms of the P1, we retain our significant main effect of rotation in channels PO7, F(1, 22) = 13.132, p = .002, and PO8, F(1, 22) = 12.618, p = .002. However, after the adjustment, we see a significant main effect of contrast for both the PO7 and PO8, F(3, 66) = 21.541, p = .000 and F(3, 66) = 14.668, p = .000, respectively. The interaction between contrast and rotation is also now significant in channels PO7, F(3, 66) = 5.352, p = .002, and PO8, F(3, 66) = 7.281, p = .002. As shown in Figure 7, decreasing contrast levels yield smaller differences between the P1 for upright and inverted full faces. This could lead to an overall interaction between rotation and contrast, but we still do not see a reversal of the amplitude ordering at any contrast level, as we do in the noise manipulations. 
ERP: Eyes alone
When analyzing the eyes-alone condition, we see main effects of contrast for both the PO7 and PO8 in the P1, F(3, 66) = 36.688, p = .000 and F(3, 66) = 42.593, p = .000, respectively, and in the N170, F(3, 66) = 4.754, p = .005 and F(3, 66) = 11.685, p < .001, respectively. Similar to the full-face condition, the eyes-alone condition shows a significant main effect of rotation in the P1 in both the PO7, F(1, 22) = 4.782, p = .040, and PO8, F(1, 22) = 7.691, p = .011. However, unlike the full-face condition, there is no significant main effect of rotation in the N170 for either channel PO7, F(1, 22) = 2.02, p = .169, or channel PO8, F(1, 22) = 0.664, p = .424, further illustrating the lack of an amplitude difference between upright and inverted eyes. There is no significant interaction between contrast and rotation in the eyes-alone condition in the P1 in either channel PO7, F(3, 66) < 1, or channel PO8, F(3, 66) = 1.322, p = .275. When analyzing the N170, the interaction between contrast and rotation for the eyes-alone condition is not significant in either the PO7 or the PO8, F(3, 66) = 0.883, p > .05 and F(3, 66) = 2.650, p > .05, respectively. 
When we adjust the N170 with regard to the P1, we continue to see a significant main effect of contrast in channels PO7, F(3, 66) = 35.138, p = .000, and PO8, F(3, 66) = 52.417, p = .000. We also continue to see no main effect of rotation in either PO7 or PO8, F(1, 22) = 1.901, p = .182 and F(1, 22) = 3.393, p = .079, respectively. This is consistent with recent work by Itier, Latinus, and Taylor (2006), who found no differences in N170 amplitudes to eyes alone presented at upright or inverted rotations. The eyes-alone data of Experiment 3 are also comparable to the findings in Experiment 2, in which we manipulated noise levels. It would thus appear that (1) upright and inverted eyes yield similar amplitude patterns and (2) noise does not affect these patterns. The interaction between contrast and rotation remains nonsignificant in channel PO7, F(3, 66) = 2.010, p = .121, but is now significant in channel PO8, F(3, 66) = 10.461, p = .000. 
Behavioral data
We omitted one participant's data due to a misunderstanding of the instructions and markedly low accuracy values. 
Similar to Experiment 2, rotation, full face versus eyes alone, and degradation of the face (in this instance through contrast) all negatively affected accuracy at identifying the eye gender. Performance decreased as the images appeared at lower contrasts ( M = .834, SD = .028; M = .818, SD = .028; M = .793, SD = .028; M = .670, SD = .026), as illustrated by the significant main effect of contrast on accuracy, F(3, 63) = 43.396, p < .001. Accuracy was also lower for both eyes-alone and full-face conditions when the stimuli were presented inverted ( M = .765, SD = .025) than when they were presented upright ( M = .792, SD = .026), and the main effect of rotation was significant, F(1, 21) = 9.816, p = .005. Also similar to Experiment 2, participants performed at higher accuracy levels when identifying eye gender in the eyes-alone condition ( M = .803, SD = .026) than in the full-face condition ( M = .754, SD = .028), with a significant main effect of stimulus type, F(1, 21) = 7.439, p = .013. 
The interaction between contrast and full face versus eyes alone ( Table 6) was also significant, F(3, 63) = 7.144, p < .001, illustrating that decreasing contrast had a more significant effect on full faces as opposed to the eyes-alone condition. However, neither the interaction between contrast and rotation nor the three-way interaction between contrast, rotation, and condition type (eyes alone versus full face) proved significant, F(3, 63) = 2.441, p = .072 and F(3, 63) = .254, p = .858, respectively. These behavioral results also follow the EEG correlates in terms of significance of main effects and interactions. 
Table 6
 
Behavioral data for Experiment 3.
Interaction between contrast levels and stimulus type
Contrast = .05 Contrast = .31 Contrast = .57 Contrast = .83
Eyes alone .672 (.028) .807 (.030) .858 (.028) .875 (.029)
Full face .668 (.027) .779 (.030) .777 (.030) .793 (.033)
Summary of Experiments 1, 2, and 3
Across the three experiments, our results show that the N170 amplitude reversal for upright and inverted faces originally seen by Linkenkaer-Hansen et al. (1998) is reliable and robust across multiple noise levels (Experiments 1 and 2). We have also shown that the reversal is not caused by merely degrading a face (Experiment 3) and does not occur for eyes alone (Experiments 2 and 3); it requires the addition of noise to full faces. Therefore, there appears to be a specific interaction between noise and the processing of full, inverted faces. 
We can further support our claim that the reversal is specific to the addition of noise, and not to degradation, by showing that the differences between the SNR and contrast manipulations are not simply due to differences in salience. To equate the two variables, we follow the proposal of Wichmann, Braun, and Gegenfurtner (2006): by generating many different images embedded in noise and averaging them, we can reveal the degree to which the noise produces an effective contrast reduction of the image. This occurs because the noise tends to cancel in the average but, in the process, also reduces the contrast of the image. By computing the effective contrast of the averaged image, we can place the contrast manipulation and the SNR manipulation on the same scale and show that the effects of noise go beyond what one would expect if contrast reduction were the only consequence of adding noise. As shown in Table 7, we can convert each SNR value used in Experiment 2 to an equivalent contrast level that is directly comparable with the values used in Experiment 3.
Table 7
 
Psychometric equivalence between SNR and contrast.
Experiment 2 SNR values Corresponding contrast levels
.6 0.169797
.48 0.112316
.38 0.066534
.3 0.037559
By comparing these derived contrast values with those from Experiment 3, we see that there are several SNR values that produce similar contrast values. Thus, the effect of noise is producing a reduction in visibility that is approximately equal to that produced by changing contrast, and yet, we see qualitatively different results when we add noise. Thus, we conclude that the effect of added noise is above and beyond that which we would predict if it just reduced the visibility of the face. 
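A hedged sketch of this equivalence computation is given below: many independent noise-embedded versions of a face are averaged, and the contrast of that average is compared with the contrast of the original face, which yields the effective contrast corresponding to a given SNR. The embedding routine (the phase-interpolation rule sketched in Experiment 1's Methods) and the contrast measure are our assumptions for illustration; Table 7 reports the authors' own derived values.

```python
# Sketch of a Wichmann, Braun, & Gegenfurtner (2006)-style equivalence measure.
import numpy as np

def face_in_noise(face, snr, rng):
    """SNR-weighted blend of the face's Fourier phase with random phase;
    the amplitude spectrum (and hence display energy) is unchanged."""
    spectrum = np.fft.fft2(face)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, face.shape)
    mixed_phase = snr * phase + (1.0 - snr) * random_phase
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)))

def effective_contrast_ratio(face, snr, n_samples, rng):
    """Average many independent noise-embedded faces; the random phase
    components tend to cancel, leaving a reduced-contrast version of the face.
    Returns the contrast of that average relative to the original face."""
    face = face - face.mean()                          # modulation about the background
    average = np.mean([face_in_noise(face, snr, rng) for _ in range(n_samples)], axis=0)
    return average.std() / face.std()

rng = np.random.default_rng(0)
face = rng.standard_normal((128, 128))                 # placeholder face modulation
ratio = effective_contrast_ratio(face, snr=0.43, n_samples=200, rng=rng)
print(ratio)   # multiply by the nominal stimulus contrast for an equivalent contrast level
```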
Having established that the presence of noise in full faces is necessary and sufficient to elicit the N170 reversal, we can examine the dynamics of this interaction by splitting the temporal onsets of the two stimuli. This manipulation can establish possible temporal constraints on the interaction. In addition, separating the onsets of the two stimuli (faces and noise) may provide a theoretical foundation for possible neuronal interactions that might account for the N170 reversal (in which inverted faces have a smaller amplitude than their upright counterparts). We outline this procedure in Experiment 4.
Experiment 4
The goal of Experiment 4 is to explore the dynamics of the interaction between noise and inversion by separating the onsets of the noise and the faces. By using four different SOAs, we can examine the dynamics of processing differences between upright and inverted faces when presented with a third stimulus set. When the face and noise are presented simultaneously (SOA = 0 ms), we expect to replicate our reversal findings: The inverted face will yield a smaller amplitude than the upright face. If temporal constraints (simultaneous presentation) are necessary for the reversal, then we should see a selective recovery of the N170 associated with inverted faces as the interaction between the two stimuli wanes in strength. However, if we do not see this selective recovery until later SOAs, then we can begin to speculate on the relative strength of the interaction caused by the presence of noise. 
Due to the unknown timing of the dynamics of this interaction, we used three SOAs (300, 450, and 600 ms) in addition to the simultaneous onset condition of SOA = 0 ms. The noise will appear on the screen for the duration of the corresponding SOA and then the face will appear within the noise field. The remainder of the trial will have the face in the noise. 
Methods
Participants
Twelve right-handed observers (of whom seven were male) participated in the experiment. We excluded one participant's data due to a lack of N170 response to the face stimuli. 
Stimuli
Each face appeared within a specified moderate amount of noise (SNR = .43) after a given SOA. Four SOAs were used: 0, 300, 450, and 600 ms. The faces all appeared at a medium contrast level of .25 and equally often at upright and inverted orientations. There were a total of eight conditions with an equal number of trials per condition (108 trials), for a total of 864 trials per experiment, per participant. Sample stimuli and a sample trial are shown in Figure 8.
Figure 8
 
Sample stimuli from Experiment 4.
Procedure
The procedure that was followed was identical to that of Experiment 1 with the exception of the SOA condition. The noise was the same before and after the face appeared. 
Results and discussion
ERP data
The results for Experiment 4 are seen in Table 8 and Figure 9. We assume that an N170 response is present after a face appears, regardless of whether the face is preceded by noise. We can therefore estimate when the N170 response to the face should occur at each SOA by adding approximately 170 ms to the SOA. However, based on pilot data, the N170 seemed to occur closer to 195 ms after the face stimulus appeared. We therefore estimate each participant's actual N170 latency from their 0-ms SOA and locate the N170 at subsequent SOAs by adding that individual value to each SOA. The N170 for SOA = 300 ms thus occurs around 495 ms, the N170 for SOA = 450 ms around 645 ms, and the N170 for SOA = 600 ms around 795 ms. To find the amplitudes for the ANOVA, we extracted the minimum amplitude within a 60-ms window centered on the approximated N170. For SOA = 0 ms, the N170 window ranges from 150 to 175 ms (the P1 window is 80–140 ms); for SOA = 300 ms, from 465 to 525 ms (P1 window 380–440 ms); for SOA = 450 ms, from 615 to 675 ms (P1 window 530–590 ms); and for SOA = 600 ms, from 765 to 825 ms (P1 window 690–750 ms). 
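The window bookkeeping above can be summarized in a few lines; the helper below reproduces the reported N170 search windows (a 60-ms window centered on SOA + ~195 ms, with the narrower 150–175 ms window at simultaneous onset). In the actual analysis, the fixed 195-ms offset would be replaced by each participant's own 0-ms-SOA N170 latency.

```python
def n170_window(soa_ms, n170_latency=195):
    """Return the (start, end) of the N170 search window, in ms, for a given SOA."""
    if soa_ms == 0:
        return 150, 175                       # window used for simultaneous onset
    centre = soa_ms + n170_latency            # N170 expected ~195 ms after face onset
    return centre - 30, centre + 30           # 60-ms window centered on that estimate

for soa in (0, 300, 450, 600):
    print(soa, n170_window(soa))              # reproduces the windows listed above
```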
Table 8. Amplitude values across conditions for components P1 and N170.

SOA        P1, upright      P1, inverted     N170, upright     N170, inverted
PO7
0 ms       5.901 (0.824)    5.873 (0.720)    4.078 (1.172)     5.061 (1.198)
300 ms     8.872 (1.072)    9.158 (1.156)    2.700 (1.140)     2.263 (1.386)
450 ms     6.988 (0.894)    6.929 (0.989)    0.220 (1.156)     −0.349 (1.448)
600 ms     5.108 (0.780)    5.044 (0.784)    −0.973 (1.283)    −1.643 (1.659)
PO8
0 ms       5.819 (0.798)    6.022 (0.752)    2.167 (1.389)     3.060 (1.503)
300 ms     9.817 (1.255)    9.973 (1.311)    0.855 (1.653)     −0.524 (1.948)
450 ms     7.366 (1.241)    7.412 (1.176)    −1.715 (1.693)    −2.899 (2.084)
600 ms     4.972 (1.167)    5.514 (1.124)    −3.075 (1.752)    −3.958 (2.078)
Figure 9a, Figure 9b. Results from Experiment 4 for upright and inverted faces at four SOAs for channels PO7 (left) and PO8 (right).
When both the noise and face are presented simultaneously (SOA = 0 ms), we see a replication of the reversal effects seen in Experiments 1 and 2 (the inverted face has a smaller amplitude when compared with the upright face). The two subsequent SOAs (300 and 450 ms) show clear changes back to the original face inversion effect in which the inverted face has a larger amplitude than the upright face. These increased SOAs yield clear, selective recovery of the N170 for inverted faces compared with upright faces. This is supported by our ANOVA analysis that shows a significant main effect for SOA in channels PO7 and PO8, F(3, 42) = 15.37, p < .001 and F(3, 42) = 15.8, p < .001, respectively, and a significant interaction between inversion and SOA for channels PO7 and PO8, F(3, 42) = 3.384, p = .027 and F(3, 42) = 6.7, p < .001, respectively. The main effect of inversion was not significant in either channel PO7 or channel PO8, F(1, 14) < 1, p = .706 and F(1, 14) < 1, p = .419, respectively. 
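For readers who want a concrete picture of this kind of analysis, the sketch below runs a 4 (SOA) × 2 (orientation) repeated measures ANOVA on per-participant N170 amplitudes using the AnovaRM class from the statsmodels Python package. This is an illustration only: the file and column names are hypothetical, and the original analyses were not necessarily performed with this software.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table with one N170 amplitude per participant x SOA x orientation
# (file and column names are hypothetical).
df = pd.read_csv("n170_po7_amplitudes.csv")   # columns: participant, soa, orientation, amplitude

res = AnovaRM(df, depvar="amplitude", subject="participant",
              within=["soa", "orientation"]).fit()
print(res)   # F tests for SOA, orientation, and the SOA x orientation interaction
```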
When analyzing the P1 component, we see a significant main effect of SOA in channels PO7, F(3, 42) = 15.645, p < .001, and PO8, F(3, 42) = 15.235, p < .001, with a relative decrease in P1 amplitude with each successive SOA. As in Experiment 2, the main effect of rotation is not significant in channels PO7, F(1, 14) < 1, or PO8, F(1, 14) < 1. The interaction between SOA and rotation is also not significant in channels PO7 and PO8, both F(3, 42) < 1. 
When we adjust the N170 with respect to the P1, we retain the same pattern of results reported above. The main effect of SOA remains significant in channels PO7, F(3, 42) = 16.949, p < .001, and PO8, F(3, 42) = 20.812, p < .001, whereas the main effect of rotation remains nonsignificant in channels PO7 and PO8, F(1, 14) < 1 and F(1, 14) = 1.815, p = .199, respectively. The interaction between SOA and rotation also remains significant in channels PO7 and PO8, F(3, 42) = 3.984, p = .014 and F(3, 42) = 8.937, p < .001, respectively. 
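The text does not spell out the adjustment formula, but a common choice is a peak-to-peak measure in which the P1 maximum is subtracted from the N170 minimum; the sketch below illustrates that interpretation under that assumption. The window values shown are the SOA = 300 ms windows reported above, and the waveform is a placeholder.

```python
import numpy as np

def p1_adjusted_n170(erp, times_ms, p1_window, n170_window):
    """Peak-to-peak measure: N170 minimum minus the preceding P1 maximum.

    p1_window, n170_window: (start_ms, end_ms) tuples, e.g. (380, 440) and
    (465, 525) for the SOA = 300 ms condition described above.
    """
    p1_mask = (times_ms >= p1_window[0]) & (times_ms <= p1_window[1])
    n170_mask = (times_ms >= n170_window[0]) & (times_ms <= n170_window[1])
    p1_peak = erp[p1_mask].max()
    n170_peak = erp[n170_mask].min()
    return n170_peak - p1_peak        # more negative = larger adjusted N170

times = np.arange(-100, 900)
erp = np.random.randn(times.size)     # placeholder waveform
adjusted = p1_adjusted_n170(erp, times, (380, 440), (465, 525))
```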
At SOA = 600 ms, the waveforms for upright and inverted faces begin to collapse together again. We can explain this pattern (post hoc) via eye movements: by 600 ms, eye movements may refresh the effects of the static noise, which then resumes its interaction with inverted-face processing, causing the upright and inverted amplitudes to converge. 
Behavioral data
When analyzing the behavioral correlates of this experiment, we find that SOA did not significantly affect accuracy (M = .719, SD = .024; M = .744, SD = .024; M = .742, SD = .023; M = .737, SD = .025 for SOAs of 0, 300, 450, and 600 ms, respectively). Unlike the N170, participants' ability to identify eye gender was not affected by the different onsets of the face stimuli; the main effect of SOA was not significant, F(3, 42) = 2.079, p = .118. Rotation also had little impact on participants' ability to determine eye gender: accuracy for upright faces (M = .735, SD = .023) was very similar to that for inverted faces (M = .736, SD = .025), and the main effect of rotation was not significant, F(1, 14) = 0.000, p = .984. The interaction between SOA and rotation was also not significant, F(3, 42) = 0.471, p = .704. 
Experiment 5
As stated earlier, our previous behavioral tasks were designed to be neutral to inversion and compatible with varying degrees of noise and contrast. In doing so, however, we placed strong emphasis on the eyes of the stimulus, which may have made the remaining facial features detrimental to task efficiency and thus discouraged the use of configural information in the behavioral task. Our previous behavioral task, then, may not have measured the same processes that the ERP measures. 
To extend our ERP effects to a behavioral level, we developed a task that compares reaction time (RT) performance for faces and a control stimulus set, fingerprints, across both noise and contrast manipulations. Fingerprints were used because they share several properties with faces: according to Busey and Vankerkolk (2005), fingerprints have features and a definite orientation and are arguably processed via configural mechanisms in latent print examiners, although novices do not show these configural patterns. 
Methods
Participants
Twenty-one right-handed Indiana University undergraduates (of whom seven were male) participated in the study. All had normal or corrected-to-normal vision, and their participation constituted part of their laboratory work or coursework. All were aware of the purpose and details of the experiment. 
Apparatus
Responses were recorded using a button box with millisecond resolution. Participants used their left thumb to indicate that a fingerprint was present and their right thumb to indicate that a face was present (regardless of orientation). 
Stimuli
The face stimuli used in this behavioral task were the same as those used in the previous ERP task. The control stimuli consisted of eight unique fingerprints obtained from the NIST Special Database 27. As in the previous ERP studies, the stimuli subtended a visual angle of 3.6° (width) and 4.8° (height). 
All stimuli were presented at both upright and inverted orientations and were shown randomly across two contrast levels and either a no-noise or a moderate-noise level. As in the previous experiments, stimulus contrast was defined as the luminance minus the background gray level, divided by the background gray level (which, in our experiments, was 47.8 cd/m²). We produced noise by scrambling the phase of the stimuli, so even with the addition of the noise, the total energy in the display was preserved. Our faces and fingerprints have slightly different amplitude spectra; because we did not want the noise itself to signal which stimulus was presented, we generated the noise from a common average amplitude spectrum, although the amplitude spectrum could vary from stimulus to stimulus. SNRs determined the amount of noise relative to the stimulus on each trial, ranging from SNR = 1 (no noise) to SNR = 0 (all noise). For the moderate-noise condition, we used an SNR of .4271 with a contrast level of 1.0 for the bright condition and .05 for the dim condition. In the no-noise condition (SNR = 1), the bright condition had a contrast level of 1.0, whereas the dim condition had a contrast level of .05. Because the noise was generated by shifting the phase of the spatial frequencies, the overall energy of the displays was constant across both the noise and no-noise conditions for each stimulus type; any energy differences between stimulus types are due to the energies of the stimuli themselves, not the noise. Sample stimuli can be seen in Figure 10.
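The noise-generation code used in these experiments was Matlab code contributed by Bosco Tjan (see Acknowledgments) and is not reproduced here. The following Python/NumPy sketch only illustrates the general recipe described above: build noise by randomizing the phases of a common average amplitude spectrum, scale it to match the stimulus energy, mix stimulus and noise with an SNR weighting that preserves expected total energy, and apply the Weber-style contrast definition (luminance minus background, divided by background). The square-root SNR weighting and the specific scaling steps are assumptions, not the authors' exact implementation.

```python
import numpy as np

def phase_scrambled_noise(amplitude_spectrum, rng):
    """Noise with a given amplitude spectrum and random phases.

    Taking the real part of the inverse FFT is a simplification; it discards any
    residual imaginary component introduced by the random (non-symmetric) phases.
    """
    phases = rng.uniform(-np.pi, np.pi, amplitude_spectrum.shape)
    return np.fft.ifft2(amplitude_spectrum * np.exp(1j * phases)).real

def mix_with_noise(image, avg_amplitude_spectrum, snr, rng):
    """Energy-preserving mixture of a zero-mean image and phase noise.

    snr = 1 -> stimulus only; snr = 0 -> noise only. The square-root weights keep
    the expected total energy constant when signal and noise energies are matched.
    """
    noise = phase_scrambled_noise(avg_amplitude_spectrum, rng)
    noise *= np.sqrt(np.sum(image ** 2) / np.sum(noise ** 2))   # match energy to the stimulus
    return np.sqrt(snr) * image + np.sqrt(1.0 - snr) * noise

def apply_weber_contrast(image, contrast, background=47.8):
    """Scale a zero-mean image so that (L - L_bg) / L_bg peaks at `contrast`."""
    return background * (1.0 + contrast * image / np.abs(image).max())

rng = np.random.default_rng(0)
face = rng.standard_normal((128, 128))            # placeholder zero-mean stimulus
avg_spec = np.abs(np.fft.fft2(face))              # stand-in for the common average spectrum
stim = apply_weber_contrast(mix_with_noise(face, avg_spec, snr=0.4271, rng=rng),
                            contrast=1.0)         # moderate-noise, bright condition
```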
Figure 10. Sample stimuli from Experiment 5.
Procedure
Participants were told to respond as quickly and as accurately as possible to the presentation of either a face or a fingerprint via keypress. They were told that a stimulus would appear on every trial at one of two contrast levels and either with or without noise, and that they would hear auditory feedback indicating whether they had responded correctly. Participants responded to stimulus type on the button box: top-left button for a face and top-right button for a fingerprint. Although there was no specified fixation point, all images appeared in the same central location on each trial. Participants were also instructed to limit both their body and eye movements while a stimulus was on the screen. 
Each stimulus was presented for 500 ms. We collapsed across gender conditions to yield a total of 16 conditions and presented an equal number of trials (n = 50) per condition, for a total of 800 trials per run. Each participant completed the experiment twice. 
Results and discussion
Results for Experiment 5 are shown in Figure 11. The presence of noise, inversion, and lowered contrast all slowed RTs for both stimulus types (fingerprints and faces). A repeated measures analysis also shows that participants were faster overall at responding to faces than to fingerprints, F(1, 41) = 12.253, p = .001. 
Figure 11a, Figure 11b. Data from Experiment 5. Faces are presented in both low contrast (left) and high contrast (right) across two noise levels. We see a significant interaction between noise and inversion in the low-contrast condition.
Our primary interest was the effect of noise on face inversion. For faces, RTs increased significantly with added noise and lowered contrast, F(1, 41) = 83.264, p < .001 and F(1, 41) = 136.570, p < .001, respectively. We also replicate the behavioral face inversion effect, with an increase in RTs when faces are inverted, F(1, 41) = 24.339, p < .001. In the electrophysiological data, we saw a pronounced interaction between noise and rotation; behaviorally, however, this interaction is not significant, F(1, 41) = 1.316, p = .258. Interestingly, the three-way interaction between noise, rotation, and contrast was significant, F(1, 41) = 7.566, p = .009. 
Upon further analysis (Figure 11 and Table 9), when we split the faces by contrast, we see a significant interaction between noise and rotation in the low-contrast (dim) condition, F(1, 41) = 7.608, p = .009, but not in the high-contrast (bright) condition, F(1, 41) = 2.166, p = .149. It is unknown whether the interaction fails to reach significance in the high-contrast condition because of a Type II error or because of compensatory mechanisms operating later in visual processing for high-contrast images; further work will be needed to map the nature of this interaction at the behavioral level. The low-contrast data, however, are consistent with the electrophysiological data. 
Table 9. RT data (in ms) across conditions for faces.
RT data for faces
Upright Inverted
Low contrast
Noise 423.381 (4.596) 434.949 (4.752)
No noise 418.973 (5.572) 421.577 (4.827)

High contrast
Noise 415.656 (5.322) 419.456 (5.411)
No noise 401.080 (5.217) 409.318 (5.173)
Our RT results are not a function of a speed–accuracy trade-off (Table 10). When analyzing accuracy across both contrast conditions, two factors significantly decreased performance: noise, F(1, 41) = 13.739, p = .001, and lowered contrast, F(1, 41) = 11.227, p = .002. Neither rotation, F(1, 41) = 0.061, p = .807, nor the interaction between noise and rotation, F(1, 41) = 1.816, p = .185, nor the three-way interaction between noise, contrast, and rotation, F(1, 41) = 1.714E−04, p = .665, was significant. 
Table 10. Accuracy results across conditions for faces.
Accuracy data for faces
Upright Inverted
Low contrast
Noise .932 (.008) .936 (.010)
No noise .950 (.007) .947 (.007)

High contrast
Noise .945 (.007) .952 (.009)
No noise .961 (.006) .956 (.006)
We used fingerprints as our control stimuli and, therefore, did not expect to see rotation effects (Table 11). This prediction was supported by our analysis, which showed no significant main effect of rotation for fingerprints, F(1, 41) = 0.359, p = .552. Both the presence of noise and lowered contrast, however, significantly slowed RTs, F(1, 41) = 192.048, p < .001 and F(1, 41) = 152.331, p < .001, respectively, as did the interaction between the two factors, F(1, 41) = 16.221, p < .001. Neither the interaction between noise and rotation nor the three-way interaction between noise, rotation, and contrast was significant, F(1, 41) = 1.437, p = .237 and F(1, 41) = 0.419, p = .521, respectively. 
Table 11. RT data (in ms) across conditions for fingerprints.
RT data for fingerprints
Upright Inverted
Low contrast
Noise 446.288 (5.816) 448.446 (6.101)
No noise 424.446 (6.101) 423.016 (5.523)

High contrast
Noise 423.887 (5.426) 422.883 (5.246)
No noise 411.915 (4.864) 409.544 (5.598)
When analyzing accuracy, only the presence of noise caused a significant decline in performance, F(1, 41) = 7.596, p = .009. No other main effects or interactions were significant. Data for the fingerprint conditions are found in Table 12.
Table 12. Accuracy across conditions for fingerprints.
Accuracy for fingerprints
Upright Inverted
Low contrast
Noise .936 (.010) .941 (.010)
No noise .954 (.008) .956 (.008)

High contrast
Noise .946 (.008) .944 (.009)
No noise .955 (.007) .955 (.007)
In this behavioral experiment, we find converging evidence with our electrophysiological data and demonstrate an interaction between noise and rotation for faces, but only in the dim condition. Performance for fingerprints (our control) did not depend on rotation or on an interaction between noise and rotation. It seems likely that a corresponding electrophysiological design would yield little to no difference between upright and inverted fingerprints presented in either a noise or a no-noise condition (for non-fingerprint experts). We are currently planning to pursue such a design. 
General discussion
Across five experiments, we draw three major conclusions. First, we have repeatedly replicated the Linkenkaer-Hansen et al. (1998) finding of a reversal of the N170 amplitudes for inverted and upright faces when a face is degraded by noise. Second, we have mapped the boundaries of the reversal phenomenon: the reversal is robust across all noise levels, is present only for full faces, and is not caused by merely degrading a face. Noise added to full faces is the essential combination that produces the reversal of the N170 amplitudes for upright and inverted faces. Finally, we demonstrate that separating the onsets of static noise and the face restores the typical ordering of the N170 amplitudes. The interaction between noise and inversion was also demonstrated at a behavioral level, but only in low contrast. 
The question therefore becomes: Why does noise seem to affect inverted faces more than upright faces? To begin, note that the locus of the interaction between noise and inversion is not in the physical stimulus but must reside in the brain. Sensitivity to rotation is a function of experience (and possibly genetics) and does not depend on low-level features such as spatial frequency composition. While this might be a complication for experiments that compare across stimulus classes such as faces versus cars, rotation is a relatively neutral manipulation. There remains the possibility that inverted stimuli produce different patterns of eye movements, but we aligned the eyes of the stimuli such that they lie in the same spatial location in both upright and inverted faces. Because we gave participants a task that depended solely on the eyes, eye movements are unlikely to be a problem in our experiment. In addition, the brain components we are relying on are too early for a saccade to have an effect. Rotation as a manipulation cannot alter the salience of individual features or affect the relation between inversion and noise unless the features are interpreted differently by the visual system. 
Further support for the locus of the interaction not residing in the physical stimulus comes from the SOA manipulation of Experiment 4. Delaying the onset of the face reverses the relation between the N170 amplitude for upright and inverted faces. Again, because the static noise was shown continuously, the fact that its influence waned over time implies that neural fatigue or a related mechanism is involved. 
To return to our question of why noise affects inverted faces more than upright faces, we propose a theory in which upright and inverted faces are processed in part by different processing modes. These two modes could be a result of different populations of neurons or the same neurons acting in different ways. The central feature of this theory is that the processing mode that contributes primarily to upright faces remains relatively immune to interactions with the processing associated with the noise. As a result, it is affected less by the addition of noise. Inverted faces, however, are processed by a mechanism or mode that also responds to the noise. These two processing modes can allow for different responses even if both upright and inverted faces are processed by the same population of neurons (see Mazard, Schiltz, & Rossion, 2006; Perrett, Mistlin, et al., 1988; Perrett, Oram, & Ashbridge, 1998; Watanabe, Kakigi, & Puce, 2003, for examples). For convenience, we might label the first processing mode configural processing and the second mode featural processing
According to this account, noise interacts with these two processes in different ways. If the feature processing tends to respond to both the noise and the features of the inverted face, this leaves open the possibility for interactions that lead to masking of the features by the noise. This masking could take place because the individual features are better obscured by the noise or through more specific neural interactions such as competition or neuronal inhibition. At a more general level of our theory, we leave the exact mechanism unspecified, noting only that the noise affects the processing of inverted-face features more than upright-face features. 
According to our account, upright faces are processed in part through configural mechanisms, and our theory proposes that this configural processing is relatively immune to the interactions with the noise, either because the features that contribute to configural processing are larger in scale and are, thus, masked less by the noise or because configural processing remains isolated from the neural activity responsible for coding the noise. Regardless of the precise mechanism (which we discuss below), the more general version of the theory proposes that configural processing is somewhat isolated from the more general-purpose featural processing. This isolation could occur in time (configurality may allow processing to occur more rapidly for holistic information than for individual features), in space (the spatial scale might differ for the two types of processing), or even anatomically (if the two types of processing are subserved by different populations of neurons). 
A neuronal model can be derived from the more general model described above. The general model describes masking by noise; we interpret the disruption of inverted-face processing more specifically as masking via inhibition. It has been shown that amplitude-matched noise elicits a BOLD response and is treated as a visual stimulus by later visual areas such as V4 and the posterior fusiform sulcus (Tjan, Lestou, & Kourtzi, 2006). If both noise and inverted faces are processed by similar mechanisms, then presenting them simultaneously will result in competition for limited resources. One possible form of this interaction is neuronal inhibition, in which neurons compete via inhibition to represent both the inverted face and the noise. This explains why we see a decreased N170 for inverted faces presented in noise. However, if upright faces are processed by an additional configural mechanism, then those configurally tuned neurons will be less affected by the presence of noise and, thus, relatively immune to this inhibition. 
Further support for this neuronal model is found in Experiment 4 data. Delaying the onsets of the faces in static noise allows the resources to become available to allocate to the inverted face, which produces a selective recovery for the inverted-face N170. More specifically, separating the onset of the face from the noise could allow those neurons that process the noise to adapt and fatigue to the presence of the noise. Therefore, when the face appears, those neurons processing the face are relatively less affected by the presence of the noise and able to respond robustly to either the upright or the inverted face with minimal interaction from the noise. This explains why we see the selective recovery of the inverted face at the longer SOAs. 
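To illustrate the qualitative logic of this account (and only that: the parameters below are arbitrary and the model is not fitted to data), consider a toy computation in which a feature-driven population is inhibited by the noise, and the strength of that inhibition decays as the noise-responsive neurons adapt during the noise-only period. With simultaneous onsets, the inverted-face response is strongly suppressed; at longer SOAs it recovers toward the (assumed noise-immune) upright-face response. The renewed convergence we attribute to eye movements at 600 ms is not modeled.

```python
import numpy as np

def feature_response(noise_drive, face_drive, soa_ms, tau_ms=400.0, inhibition=0.6):
    """Toy sketch: the feature-driven population adapts to the static noise during
    the noise-only period, so the inhibition the noise exerts on the face response
    decays with SOA. All parameters are arbitrary illustrations, not fitted values.
    """
    adapted_noise = noise_drive * np.exp(-soa_ms / tau_ms)   # neural fatigue to static noise
    return face_drive - inhibition * adapted_noise           # competition via inhibition

for soa in [0, 300, 450, 600]:
    inverted = feature_response(noise_drive=1.0, face_drive=1.0, soa_ms=soa)
    upright = 1.0   # configural response assumed immune to the noise in this toy model
    print(f"SOA {soa:3d} ms: upright {upright:.2f}, inverted {inverted:.2f}")
```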
Within the face-processing community, there is a growing theory that the N170 is largely driven by the eyes of the face stimulus (see Bentin et al., 1996; Itier et al., 2006; Taylor, Edmonds, McCarthy, & Allison, 2001, for example). A recent model proposed by Itier et al. suggests that the N170 is produced as a function of neural contributions from separate populations of eye-selective and face-selective neurons. They suggest that these “eye neurons” are suppressed by the addition of other facial features and, therefore, respond maximally to eyes-alone stimuli and inverted-face stimuli (in which the configurality of the face is distorted and the eye region is isolated). The face-selective neurons show opposite effects to inversion and respond maximally to upright faces. 
Our results can also be incorporated into this model. If the eyes were masked more than the other features, then the eye-selective neurons would be less able to respond, and the N170 would reflect only the contribution of the face-selective neurons. When the face is inverted, the response of the face-selective neurons would also be attenuated, causing an overall greater decrease of the N170. This could explain why we get the N170 reversal in our noise conditions ( Experiment 2) but not in the contrast reduction conditions ( Experiment 3). Although it seems unlikely that the noise would selectively mask the eyes more than the other features, it is possible that these “eye neurons” are more sensitive to the presence of noise than the more face-selective neurons. 
In addition, if the noise is adapted to before the onset of the face, then facial features, including the eyes, would remain robust against the masking of the noise. This would allow the “eye neurons” to respond maximally in conditions in which the holistic configuration of the face is distorted and there is no inhibition from the “face neurons,” that is, an inverted face. This corresponds with the selective recovery of the N170 for inverted faces shown at SOA > 0 ms in static noise in Experiment 4. This account is similar to the more general model proposed earlier. By applying the terms “face-sensitive population” and “eye-sensitive population” to configural and featural processing mechanisms (respectively), we can extend the Itier et al. (2006) model to further account for our data. These two model versions differ primarily on what they define as featural and configural processing. 
Is this interaction specific to our particular noise? Although this is a possible concern, the fact that Linkenkaer-Hansen et al. (1998) used a different form of high-spatial-frequency pixelated noise and still produced a reversal effect suggests that the effect is caused by the properties of noise in general, not by a specific type of noise. One could therefore extend our experiments to include other forms of noise. 
On a more practical level, researchers have used both static and dynamic noise but may not have considered the possibility that the choice of noise may affect the interaction with their stimuli in regard to factors such as neuronal fatigue. Our Experiment 4 results suggest that dynamic noise may allow the neurons that respond to the noise to refresh, whereas static noise may cause those neurons to fatigue. This has implications for how the neurons processing the different stimuli interact. To verify this, we are currently pursuing an experiment that investigates possible differences between the two noise types. 
Finally, the results of these experiments also have possible implications outside the world of face processing, for further understanding the development of visual expertise. It seems plausible that certain neurons respond most robustly to learned visual stimuli (e.g., faces, greebles, and fingerprints). Gauthier, Tarr, Anderson, Skudlarski, and Gore (1999) provide fMRI evidence that the FFA is recruited as a function of expertise: after extensive training with a novel stimulus set (greebles), participants exhibited increased activation in right-hemisphere face areas for upright greebles as compared with inverted greebles. This finding suggests that the FFA may be recruited to support any visual stimulus that receives extensive training. 
A recent study by Tanaka and Curran (2001) suggests that bird experts visually process bird images similarly to faces and differently from less familiar objects. Busey and Vankerkolk (2005) used electrophysiological measures to show evidence for similar processing mechanisms for fingerprints and faces in latent print examiners: both inverted fingerprints and inverted faces yielded larger N170 amplitudes than their upright counterparts in experts, whereas novices showed no distinction between upright and inverted fingerprints. This suggests that fingerprints and faces are processed by similar mechanisms in fingerprint experts but not in novices. 
The separation of these two mechanisms implies different properties that may enable them to respond robustly despite the presence of noise or other nonexpertise stimuli. We are currently developing a longitudinal study of latent print examiners that investigates and measures the degree to which these properties emerge with the development of visual expertise. This paradigm will systematically track both the development of configural processing and the electrophysiological interaction between stimuli and noise as expertise develops. 
Acknowledgments
This research was supported by NIJ 2005-MU-BX-K076 and R01 AG022334 grants. We would like to thank Bosco Tjan for contributing his noise generation Matlab code and for his comments and advice. We also want to thank Elizabeth Cook, Dean Wyatte, and Jeff Gilleland for their research assistance. 
Commercial relationships: none. 
Corresponding author: Bethany L. Schneider. 
Email: bschneid@indiana.edu. 
Address: 1101 E 10th St., Bloomington, IN 47405. 
References
Avidan, G., Harel, M., Hendler, T., Ben-Bashat, D., Zohary, E., & Malach, R. (2002). Contrast sensitivity in human visual areas and its relationship to object recognition. Journal of Neurophysiology, 87, 3102–3116.
Bentin, S., Allison, T., Puce, A., Perez, A., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565.
Busey, T. A., & Vankerkolk, J. R. (2005). Behavioral and electrophysiological evidence for configural processing in fingerprint experts. Vision Research, 45, 431–448.
Carey, S., & Diamond, R. (1977). From piecemeal to configural representation of faces. Science, 195, 312–314.
Carey, S., Diamond, R., & Woods, B. (1980). The development of face recognition: A maturational component. Developmental Psychology, 16, 257–269.
Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134, 9–21.
Eimer, M. (2000). The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport, 11, 2319–2324.
Farah, M. J., Wilson, K. D., Drain, M., & Tanaka, J. N. (1998). What is “special” about face perception? Psychological Review, 105, 482–498.
Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999). Activation of the middle fusiform ‘face area’ increases with expertise in recognizing novel objects. Nature Neuroscience, 2, 568–573.
Horovitz, S. G., Rossion, B., Skudlarski, P., & Gore, J. C. (2004). Parametric design and correlational analyses help integrating fMRI and electrophysiological data during face processing. Neuroimage, 22, 1587–1595.
Itier, R. J., & Taylor, M. J. (2004). Source analysis of the N170 to faces and objects. Neuroreport, 15, 1261–1265.
Itier, R. J., Latinus, M., & Taylor, M. J. (2006). Face, eye and object early processing: What is the face specificity? Neuroimage, 29, 667–676.
Jemel, B., Schuller, A. M., Cheref-Khan, Y., Goffaux, V., Crommelinck, M., & Bruyer, R. (2003). Stepwise emergence of the face-sensitive N170 event-related potential component. Neuroreport, 14, 2035–2039.
Leder, H., & Bruce, V. (2000). When inverted faces are recognized: The role of configural information in face recognition. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 53, 513–536.
Linkenkaer-Hansen, K., Palva, J. M., Sams, M., Hietanen, J. K., Aronen, H. J., & Ilmoniemi, R. J. (1998). Face-selective processing in human extrastriate cortex around 120 ms after stimulus onset revealed by magneto- and electroencephalography. Neuroscience Letters, 253, 147–150.
Maurer, D., Grand, R. L., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.
Mazard, A., Schiltz, C., & Rossion, B. (2006). Recovery from adaptation to facial identity is larger for upright than inverted faces in the human occipito-temporal cortex. Neuropsychologia, 44, 912–922.
McKone, E., Martini, P., & Nakayama, K. (2001). Categorical perception of face identity in noise isolates configural processing. Journal of Experimental Psychology: Human Perception and Performance, 27, 573–599.
McKone, E., Martini, P., & Nakayama, K. (2003). Isolating holistic processing in faces (and perhaps objects). In M. A. Peterson & G. Rhodes (Eds.), Perception of faces, objects and scenes: Analytic and holistic processes (pp. 92–117). London: Oxford University Press.
Perrett, D. I., Mistlin, A. J., Chitty, A. J., Smith, P. A., Potter, D. D., & Broennimann, R. (1988). Specialized face processing and hemispheric asymmetry in man and monkey: Evidence from single unit and reaction times studies. Behavioural Brain Research, 29, 245–258.
Perrett, D. I., Oram, M. W., & Ashbridge, E. (1998). Evidence accumulation in cell populations responsive to faces: An account of generalisation of recognition without mental transformations. Cognition, 67, 111–145.
Philips, R. J., & Rawles, R. E. (1979). Recognition of upright and inverted faces: A correlational study. Perception, 43, 39–56.
Riesenhuber, M., Jarudi, I., Gilad, S., & Sinha, P. (2004). Face processing in humans is compatible with a simple shape-based model of vision. Proceedings of the Royal Society of London B: Biological Sciences, 271, 448–450.
Rossion, B., & Gauthier, I. (2002). How does the brain process upright and inverted faces? Behavioral and Cognitive Neuroscience Reviews, 1, 62–74.
Scapinello, K. F., & Yarmey, A. D. (1970). The role of familiarity and orientation in immediate and delayed recognition of pictorial stimuli. Psychonomic Science, 21, 329–331.
Sekuler, A. B., Gaspar, C. M., Gold, J. M., & Bennett, P. J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14, 391–396.
Tanaka, J. W., & Curran, T. (2001). A neural basis for expert object recognition. Psychological Science, 12, 43–47.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, 46, 225–245.
Tanskanen, T., Näsänen, R., Montez, T., Päällysaho, J., & Hari, R. (2005). Face recognition and cortical responses show similar sensitivity to noise spatial frequency. Cerebral Cortex, 15, 526–534.
Tarkiainen, A., Cornelissen, P. L., & Salmelin, R. (2002). Dynamics of visual feature analysis and object-level processing in faces versus letter-string perception. Brain, 125, 1125–1136.
Taylor, M. J., Edmonds, G. E., McCarthy, G., & Allison, T. (2001). Eyes first! Eye processing develops before face processing in children. Neuroreport, 12, 1671–1676.
Thompson, P. (1980). Margaret Thatcher: A new illusion. Perception, 9, 483–484.
Tjan, B. S., Lestou, V., & Kourtzi, Z. (2006). Uncertainty and invariance in the human visual cortex. Journal of Neurophysiology, 96, 1556–1568.
Watanabe, S., Kakigi, R., & Puce, A. (2003). The spatiotemporal dynamics of the face inversion effect: A magneto- and electro-encephalographic study. Neuroscience, 116, 879–895.
Wichmann, F. A., Braun, D. I., & Gegenfurtner, K. R. (2006). Phase noise and the classification of natural images. Vision Research, 46, 1520–1529.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Yovel, G., & Kanwisher, N. (2004). Face perception: Domain specific, not process specific. Neuron, 44, 889–898.