Research Article  |   May 2009
Is the early modulation of brain activity by fearful facial expressions primarily mediated by coarse low spatial frequency information?
Journal of Vision May 2009, Vol.9, 12. doi:10.1167/9.5.12
Petra H. J. M. Vlamings, Valerie Goffaux, Chantal Kemner; Is the early modulation of brain activity by fearful facial expressions primarily mediated by coarse low spatial frequency information? Journal of Vision 2009;9(5):12. doi:10.1167/9.5.12.

Abstract

Rapidly decoding the emotional content of a face is an important skill for successful social behavior. Several event-related brain potential (ERP) studies have indicated that emotional expressions influence brain activity as early as 100 ms after stimulus onset. Some studies hypothesized that this early brain response to fear depends on coarse magnocellular inputs, which are primarily driven by low spatial frequency (LSF) cues. Until now, however, the evidence has been inconclusive, probably owing to the divergent methods used to match luminance and contrast across spatial frequencies and emotional stimuli. In the present study, we measured ERPs to LSF and high spatial frequency (HSF) faces with fearful or neutral expressions, with contrast and luminance either matched across spatial frequency (SF) or not. Our findings clearly show that fearful facial expressions increase the amplitude of P1 (only for contrast- and luminance-equated images) and N170 in comparison to neutral faces, but only for LSF faces, irrespective of contrast or luminance equalization, suggesting that LSF information plays a crucial role in the early brain responses to fear. Furthermore, we found that, irrespective of luminance or contrast equalization, N170 occurred earlier for LSF faces than for HSF faces, again emphasizing the primacy of LSF processing in early face perception.

Introduction
Rapidly decoding the emotional content of a face is an important skill for successful social behavior, since it helps to evaluate the states and intentions of others and to adapt future behavior. The time course of emotional face processing has been explored in several EEG and MEG studies. These studies report that particularly negative expressions affect the amplitude and/or latency of various early ERP and MEG components related to face processing. The two most prominently studied components are P1 (e.g., Ashley, Vuilleumier, & Swick, 2004; Batty & Taylor, 2003; Pizzagalli, Regard, & Lehmann, 1999; Pizzagalli et al., 2002) and N170 (e.g., Batty & Taylor, 2003; Campanella, Quinet, Bruyer, Crommelinck, & Guerit, 2002; Righart & de Gelder, 2005; Stekelenburg & de Gelder, 2004). P1 is a fast exogenous response to visual stimulation, which reflects striate as well as extrastriate visual processing (Gomez Gonzales, Clark, Luck, Fan, & Hillyard, 1994; Heinze et al., 1994; Rossion et al., 1999). It occurs about 100 ms after stimulus onset and is located over lateral occipital regions of the scalp. N170 occurs at around 170 ms and is maximally recorded at occipitotemporal electrodes (Bentin, Allison, Puce, Perez, & McCarthy, 1996). N170 is the earliest ERP component to consistently show larger amplitude for faces than other non-face object categories (e.g., Jacques & Rossion, 2004). N170 originates from a network of regions, probably including the fusiform gyrus, inferior occipital cortex, superior temporal sulcus, and the inferior, middle, and superior temporal gyri (Henson et al., 2003). N170 reflects not only the detection of a face but also the encoding of the structure or configuration of the face, based on which individual faces can be discriminated from each other (Jacques & Rossion, 2006). 
A faster P1 for fearful expressions as compared to neutral expressions has been noted in several studies and is consistent with the idea that negative emotions tend to capture attention in an involuntary reflexive manner and, as a consequence, tend to be processed faster (Eimer & Holmes, 2002; Lidell, Williams, Rathjen, Shevrin, & Gordon, 2004; Pourtois, Dan, Grandjean, Sander, & Vuilleumier, 2005; Pourtois, Grandjean, Sander, & Vuilleumier, 2004; Williams et al., 2004). Other studies report effects of emotional expressions only somewhat later, at the level of the face-specific N170 (Batty & Taylor, 2003; Blau, Maurer, Tottenham, & McCandliss, 2007; Campanella et al., 2002; Stekelenburg & de Gelder, 2004). 
Overall, the above-mentioned ERP studies underline the rapid processing of emotional expressions. However, facial expressions are complex stimuli and it is still not clear what information the visual system extracts in order to decode emotional expressions at such an early stage. Any input to the visual system consists of luminance variations occurring at various frequencies across space (e.g., De Valois & De Valois, 1988; Goldstein, 1999). Low spatial frequencies (LSF) of an image capture large-scale luminance variations (i.e., coarse information) whereas high spatial frequencies (HSF) represent small-scale luminance variations of the image (i.e., fine information; De Valois & De Valois, 1988; Goldstein, 1999). The spatial frequency content of a stimulus is generally expressed in cycles per degree of visual angle (c/deg). The present experiment addresses the visual input properties of facial expression processing by means of spatial frequency filtering. 
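As a concrete illustration of the c/deg convention, the conversion from cycles per image to cycles per degree depends only on the visual angle the image subtends. A minimal sketch (the function name and the 12.4-cm image width are illustrative assumptions chosen to subtend roughly 6.3 deg at the article's 113-cm viewing distance):

```python
import math

def cycles_per_degree(cycles_per_image, image_width_cm, viewing_distance_cm):
    """Convert cycles per image to cycles per degree of visual angle.

    The 113-cm viewing distance matches the article; the 12.4-cm image
    width is an illustrative value, not taken from the article.
    """
    visual_angle_deg = 2 * math.degrees(
        math.atan(image_width_cm / (2 * viewing_distance_cm)))
    return cycles_per_image / visual_angle_deg

# 36 cycles across a face subtending ~6.3 deg is roughly 6 c/deg,
# matching the HSF cutoff used later in the article.
```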
Two fMRI studies have now provided evidence for the importance of LSF information in emotional expression processing. One study showed that LSF information in a face is crucial to produce an increase in activation to fearful relative to neutral faces in the amygdala (Vuilleumier, Armony, Driver, & Dolan, 2003), a key subcortical structure in emotional processing (Morris, Ohman, & Dolan, 1999; Whalen et al., 1998). In contrast, high spatial-frequency (HSF) information in faces did not evoke a differential response to fearful compared to neutral expressions in the amygdala. A similar pattern was found in the fusiform cortex (Vuilleumier et al., 2003; Vuilleumier, Richardson, Armony, Driver, & Dolan, 2004; Winston, Vuilleumier, & Dolan, 2003). Given the modulatory role of the amygdala (Morris et al., 1998; Rotshtein, Malach, Hadar, Graif, & Hendler, 2001), the hypothesis was raised that the enhanced processing of fear in the visual areas (including fusiform gyrus) is primarily mediated by rapid LSF cues, possibly via feedback from the amygdala, which gets input from a rapid magnocellular tecto-pulvinar pathway, preferentially tuned to LSF (Pourtois et al., 2005; Vuilleumier et al., 2003; Winston et al., 2003). However, since fMRI has a low temporal resolution, the observed activation patterns may alternatively mirror SF differences occurring at late decisional stages rather than during early visual analysis. 
In contrast to fMRI, ERP has a high temporal resolution and is therefore well suited to provide insight into the stages at which LSF and HSF inputs become important for facial expression processing in visual brain areas. However, only two ERP studies have investigated the respective contributions of LSF and HSF to early facial expression processing (Holmes, Winston, & Eimer, 2005; Pourtois et al., 2005). An increased P1 for LSF fearful expressions relative to LSF neutral expressions was found at occipitotemporal electrodes by Pourtois et al. (2005). In the other study, by Holmes, Winston et al. (2005), the occipitotemporal P1 was not analyzed. At the level of the N170, neither study found an interaction between spatial frequency and emotional expression. 
However, methodological aspects may account for the reported absence of SF influence on the early processing of facial expressions. A problem is that, when investigating SF processing, outputs of LSF and HSF filtering differ not only at the level of the spatial scale of information they convey but also in terms of luminance and contrast. This is related to the fact that the frequency power in natural stimuli is maximal at low SF and almost exponentially decays at higher SF (see for review Loftus & Harley, 2005). In the study of Pourtois et al. (2005), the overall difference in luminance and contrast between LSF and HSF images was matched by using hybrid stimuli. Pourtois et al. (2005) combined the LSF content of a given face (shown upright) with the HSF content of the same face presented upside down, or vice versa. However, since contrast/luminance was not equated between LSF and HSF components within a given hybrid, superimposing an inverted LSF image on an HSF image may have disrupted perception of the expression carried by HSF more strongly, than superimposing an inverted HSF image on an LSF image (see Pourtois et al., 2005, Figure 1). This might have prevented the detection of an effect of emotional expression at the level of P1 and/or N170 for HSF faces. 
The aim of the present study was to further explore the contribution of LSF and HSF to early emotional processing of faces, as reflected by the P1 and N170 ERP components. To evaluate whether SF differences previously reported for emotion processing can be accounted for by contrast and luminance, we directly investigated emotional processing when luminance and contrast were equated or not across LSF and HSF, within a single study. 
Differential sensitivity to HSF and LSF contents of emotional expression is also apparent in some behavioral tasks. Whereas HSF seem to be relevant for the elaborate rating of emotional expressiveness as well as emotion discrimination (Schyns & Oliva, 1999; Vuilleumier et al., 2003; but see Goren & Wilson (2006) for different effects using synthetic faces), LSFs are important for rapid attentional responses to fear (Holmes, Green, & Vuilleumier, 2005) as well as rapid categorization of happiness, sadness, and anger (Schyns & Oliva, 1999), although the latter has not been investigated for fear. Complementing Experiment 1, we will investigate in Experiment 2 whether an LSF advantage for the processing of fear is reflected in reaction times (RTs) when subjects have to rapidly categorize neutral and fearful faces. Because RTs reflect the final stage of information processing, at which information about the facial expression of a face might be available in both LSF and HSF, RT may reveal emotion effects for HSF faces as well. We also evaluated whether differential effects in SF and emotion in RT are related to differences in luminance and contrast between the images. 
Experiment 1
Methods
Participants
Twenty students (10 females and 10 males; mean age: 21.7 years, SD = 2.7) from Maastricht University participated in this study. Four participants were left-handed; the others were right-handed. All participants had normal or corrected-to-normal vision. The experiment was approved by the ethical committee and participants gave written informed consent before participation. 
Stimuli
Face stimuli consisted of 16 grayscale images (8 males; 8 females), one half depicting a neutral expression, the other half depicting a fearful expression. Different identities displayed different emotions. The photographs were taken from the NimStim Face Set (http://www.macbrain.org/faces/index.htm, Tottenham, Borscheid, Ellertsen, Marcus, & Nelson, 2002) and have been shown to evoke emotional effects at the level of N170 (Blau et al., 2007). Face images included European-American and African-American models. Face pictures were trimmed to remove external features (neck, ears, and hairline). All pictures were fitted in a gray frame of 500 × 700 pixels. Each face subtended 6.3 degrees of visual angle (at a 113-cm viewing distance). The HSF images were created by filtering the original photographs, using a high-pass cutoff that was ≥6 cycles/deg of visual angle (≥36 cycles per object). The LSF images were created using a low-pass filter that was ≤2 cycles/deg of visual angle (≤12 cycles per object). These cutoffs (≤2 cycles/deg; ≥6 cycles/deg) were based on previous literature (e.g., Boeschoten, Kenemans, van Engeland, & Kemner, 2007; Costen, Parker, & Craw, 1994; Deruelle & Fagot, 2005; Deruelle, Rondan, Gepner, & Tardif, 2004; Deruelle, Rondan, Salle-Collemiche, Bastard-Rosset, & Da Fonséca, 2008; Goffaux, Gauthier, & Rossion, 2003; Schyns & Oliva, 1999). Filtering was performed in Matlab (The Mathworks, Natick, MA) using a set of Gaussian filters. After the filtering, HSF and LSF stimuli largely differed in terms of luminance and Root Mean Square (RMS) contrast (LSF: mean luminance: 147; RMS contrast: 40; HSF: mean luminance: 131; RMS contrast: 12.5). RMS contrast has been shown to be the best index for perceived contrast in natural images (Bex & Makous, 2002). 
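The filtering step can be sketched as follows. This is a rough Python analogue of the Matlab procedure, assuming a Gaussian blur as the low-pass kernel and its residual as the high-pass; the article does not specify the cutoff-to-sigma mapping, so `sigma` here is purely illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def low_pass(img, sigma):
    """Gaussian low-pass: keeps coarse (LSF) luminance variations."""
    return gaussian_filter(img.astype(float), sigma)

def high_pass(img, sigma):
    """High-pass as the residual of the Gaussian blur: keeps fine (HSF)
    detail, re-centred on the original mean luminance."""
    img = img.astype(float)
    return img - gaussian_filter(img, sigma) + img.mean()

def rms_contrast(img):
    """RMS contrast: the standard deviation of pixel luminance."""
    return float(np.std(img))
```

Consistent with the values reported above, the low-pass output retains most of the original contrast while the high-pass output carries much less, since power in natural images is concentrated at low SF.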
Finally, global contrast and luminance were equated across scales by assigning both HSF and LSF images the mean luminance and RMS contrast of the 16 original broadband photographs (Mean Luminance: 141; RMS contrast: 29, see Figure 1). Since original contrast and luminance of LSF filtered faces were naturally close to full spectrum values, the equalization of these parameters to full spectrum values mostly affected HSF faces. 
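One common way to equate images to a target mean luminance and RMS contrast is a linear rescaling; the following is a sketch of that normalization step, assuming the article's reported targets (mean 141, RMS 29) but not the authors' exact procedure, which is not specified beyond those values:

```python
import numpy as np

def equate(img, target_mean=141.0, target_rms=29.0):
    """Linearly rescale an image so its mean luminance and RMS contrast
    match target values. Defaults are the full-spectrum averages the
    article reports; the exact equalization method is our assumption."""
    img = img.astype(float)
    z = (img - img.mean()) / img.std()  # zero mean, unit RMS contrast
    return z * target_rms + target_mean
```

Because LSF images already sat close to the broadband values, such a rescaling would change them only slightly while boosting the contrast of HSF images, matching the description above.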
Figure 1
 
Example of stimuli used in the Non-EQ task in which HSF and LSF images differ in luminance and contrast, and the EQ task in which HSF and LSF stimuli were matched on contrast and luminance.
Procedure
Participants were seated in a dimly lit and quiet room and were presented with two series (in counterbalanced order) of 4 experimental blocks of 79 trials each. In one series, HSF and LSF faces were equated for luminance and contrast; in the other series, luminance and contrast differed between HSF and LSF images. Each block contained 32 neutral face trials (HSF and LSF) and 32 fearful face trials (HSF and LSF), each presented for 500 ms, in randomized order with an interstimulus interval that varied randomly between 1600 and 1800 ms. To maintain attention to the task, each block contained 15 animation figures (for example, Disney figures), each with a duration of 2 s. Participants had to press a response button as soon as they saw an animation figure on the screen and had to refrain from responding to all other images. Animation figures were included because we intend to conduct this experiment with children in the future. 
ERP recording and data analysis
During task performance, the EEG (0.1–200 Hz, sampling rate of 500 Hz) was recorded with a 31-channel Quickcap (Neuromedical supplies of Neurosoft) covering frontal, central, temporal, and parietal scalp areas. An electrode attached at the left mastoid served as a reference. Afz was used as ground electrode. Blink and vertical eye movements were monitored with electrodes placed at the sub- and supra-orbital ridge of the right eye. Lateral eye movements were monitored with two electrodes placed on the right and left external canthi. All electrode impedances (EEG and EOG) were kept below 10 kΩ. 
The EEG data were analyzed offline using “Vision Analyser” software (Brain Products, Munich, Germany). A common average reference was recomputed for all electrodes. EEG epochs were extracted beginning 200 ms before and ending 400 ms after each stimulus. The 200 ms prior to stimulus onset was used as baseline. The epochs were band-pass filtered with a 30 Hz, 24 dB/octave low-pass filter. Artifacts from vertical eye movements and blinks were reduced with the algorithm of Gratton, Coles, and Donchin (1983). Thereafter, all epochs containing artifacts (amplitudes larger than 75 μV) were removed. Separate ERP averages were computed for the four stimulus conditions (SF (HSF/LSF) × Emotion (Fear/Neutral)). For each condition, P1 and N170 latencies and amplitudes were automatically extracted at peak-maximum occipitotemporal electrodes PO7/PO8 and P7/P8 (time windows: P1: 70–140 ms; N170: 100–200 ms). Both P1 and N170 were largest at these electrodes. All peaks were confirmed by visual inspection. Latency and amplitude values were subjected to a repeated-measures ANOVA with SF (HSF versus LSF), Emotion (Fear versus Neutral), Equalization (luminance/contrast equalized, i.e., EQ versus luminance/contrast non-equalized, i.e., Non-EQ), Electrode Position (posterior versus posterior occipital), and Hemisphere (Left versus Right) as within-subject factors. Based on a priori hypotheses, conditions were further compared using paired t-tests. 
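The automatic peak extraction within a time window can be sketched as follows. This is illustrative only: the authors used Vision Analyser's built-in detection, and `peak_in_window` is a hypothetical helper of ours:

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity):
    """Return (latency_ms, amplitude) of the most extreme point of an
    ERP waveform inside a time window: polarity=+1 for a positive peak
    such as P1 (70-140 ms), polarity=-1 for a negative one such as
    N170 (100-200 ms)."""
    mask = (times >= t_min) & (times <= t_max)
    seg, seg_t = erp[mask], times[mask]
    i = int(np.argmax(polarity * seg))  # extreme point in chosen polarity
    return float(seg_t[i]), float(seg[i])
```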
Results
Figure 2 shows the grand averages for neutral and fearful HSF and LSF faces at PO7/PO8 and P7/P8, separately for EQ and Non-EQ stimuli. 
Figure 2
 
Grand averages for HSF (black) and LSF (gray) fearful (dashed) and neutral (continuous) faces at channel P7/P8 and PO7/PO8. Fearful faces elicit larger amplitudes than neutral faces in the LSF condition only. Furthermore, the topographical distribution of the difference (Neutral minus Fear (160–220 ms)) in ERP activity between fearful and neutral faces is shown for N170. Note that the difference covers the ventro-temporal areas.
P1 amplitude
Statistical analysis of P1 amplitudes revealed a significant interaction between SF, Emotion, Equalization, and Hemisphere ( F(1,19) = 7.62, p < 0.05). Separate analyses for the left and right hemispheres showed that the SF × Emotion × Equalization interaction was only significant for the right hemisphere ( F(1,19) = 6.22, p < 0.05). Further analysis indicated that there was a significant interaction between Emotion and Equalization at P8/PO8 solely in LSF ( F(1,19) = 4.56, p < 0.05). Pairwise comparisons computed on P8/PO8 electrodes showed that LSF fearful faces elicited larger P1 amplitudes than LSF neutral faces only in the EQ condition ( t(19) = −2.39, p < 0.05). 
P1 latency
The overall four-way ANOVA indicated a significant interaction between SF and Emotion ( F(1,19) = 5.65, p < 0.05). Further analysis of this interaction indicated that faster latencies for LSF vs. HSF faces occurred irrespective of facial expression (neutral: t(19) = 5.48, p < 0.001; fear: t(19) = 5.24, p < 0.001). Yet, there was a trend toward a significant effect of emotion for LSF faces only ( t(19) = −1.788, p = 0.090). SF × Equalization interaction was also significant ( F(1,19) = 8.15, p < 0.05) indicating significantly longer latencies for HSF non-EQ faces compared to HSF EQ faces ( t(19) = −2.45, p ≤ 0.05), whereas in the LSF condition, there was no latency difference between EQ and non-EQ conditions. In addition, faster latencies for LSF vs. HSF faces were found for both EQ (mean difference: 6 ms; t(19) = 4.11, p = 0.001) and non-EQ faces (mean difference 13 ms; t(19) = 5.11, p ≤ 0.001). 
N170 amplitude
Statistical analysis of N170 amplitude indicated a significant main effect of Hemisphere ( F(1,19) = 7.46, p < 0.05). Larger N170 amplitudes were found for the right compared to the left hemisphere. Furthermore, there was a significant interaction between SF, Emotion, and Electrode ( F(1,19) = 4.99, p < 0.05). ANOVA separately conducted in LSF and HSF indicated a significant interaction between Emotion and Electrode for LSF faces only ( F(1,19) = 22.09, p < 0.01). Although LSF fearful faces elicited larger N170 amplitudes compared to neutral faces at both electrode positions (PO7/PO8: t(19) = 8.23, p < 0.001, P7/P8: t(19) = 9.17, p < 0.001), this effect was stronger on PO7/PO8 (mean difference: 1.43 μV) as compared to P7/P8 electrodes (mean difference: 1.03 μV). 
N170 latency
The overall four-way ANOVA indicated a significant SF × Equalization interaction ( F(1,19) = 82.52, p < 0.001). Further analysis of this interaction indicated a significant effect of SF in both EQ and non-EQ conditions (Non-EQ: F(1,19) = 12.07, p < 0.001; EQ: F(1,19) = 7.31, p < 0.001) with shorter latencies for LSF compared to HSF faces. Yet, N170 latency differences between HSF and LSF were larger in non-EQ (mean difference: 23 ms) than EQ conditions (mean difference: 8 ms). Furthermore, HSF faces in the Non-EQ task showed significantly longer latencies than HSF faces in the EQ task ( t(19) = −4.93, p < 0.001), whereas there was no difference between EQ and non-EQ images for LSF faces. 
Summary
Consistent with our hypotheses, the early influences of emotion observed at the level of P1 and N170 amplitudes were only present in the LSF condition. For P1, the effect of emotion in the LSF condition was only significant in the right hemisphere and only in the equalized condition. In contrast, for N170 this effect was found irrespective of contrast equalization. In addition, a main effect of SF was found for P1 and N170 latencies, indicating faster latencies for LSF compared to HSF faces. This effect was found irrespective of contrast equalization, although it was smaller in the equalized condition owing to faster latencies for equalized HSF images. 
Experiment 2
In this experiment, we investigated whether the LSF advantage for the processing of facial expressions found in ERPs in Experiment 1, using a passive paradigm, is also reflected in reaction times when subjects have to perform an active categorization task on emotional expression. Although an LSF bias has been shown for the rapid categorization of emotional stimuli including happiness, sadness, and anger (Schyns & Oliva, 1999), this has not been investigated for fear. There is, however, evidence that LSF are important for rapid attentional responses to fear (Holmes, Green et al., 2005). Based on our ERP findings and the literature, we predict increased sensitivity to fear in LSF and faster categorization of LSF fearful compared to LSF neutral faces. Alternatively, in contrast to the ERP observations, emotion effects on RT may be apparent at both spatial scales. Contrary to P1 and N170, behavioral RT reflects the final stage of visual processing. At this final stage, information about the facial expression of a face might be available in both LSF and HSF, and RT may reveal emotion effects for HSF faces as well. 
Participants
Twenty adult subjects (10 women and 10 men; mean age: 22.21 years, SD = 2.97) participated in this experiment. The data of one participant were not included in the analysis because performance was below chance in one of the conditions. One participant was left-handed, the others were right-handed. Participants had normal or corrected-to-normal vision. 
Stimuli, task, and data analysis
Experimental conditions were identical to Experiment 1 (same stimuli, same duration, etc.), except that no animation figures occurred between face stimuli and that participants had to decide whether the presented stimulus was a fearful or a neutral face by pressing the appropriate keyboard button (left/right arrow). Key assignments were counterbalanced across subjects. Instructions emphasized speeded and accurate decisions. Trials with reaction times shorter than 150 ms or exceeding 1500 ms were discarded. Reaction times were subjected to an SF (HSF/LSF) × Emotion (Neutral/Fear) × Equalization (Non-EQ/EQ) repeated-measures ANOVA. Furthermore, bias-free sensitivity indexes (D′) were computed for each subject in all conditions (following Stanislaw & Todorov, 1999). 
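The bias-free sensitivity index is d′ = Z(hit rate) − Z(false-alarm rate). The sketch below follows one of the conventions discussed by Stanislaw and Todorov (1999), using a log-linear correction for extreme rates; the article does not state which correction, if any, was applied:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate), with a log-linear
    correction (add 0.5 to counts, 1 to totals) so that rates of
    exactly 0 or 1 stay finite. The correction choice is an assumption,
    not taken from the article."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    fr = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hr) - z(fr)
```

For example, a subject with equal hit and false-alarm rates yields d′ = 0, while a subject hitting on 45 of 50 fearful trials and false-alarming on 5 of 50 neutral trials yields a d′ of roughly 2.5.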
Results
Reaction times
Repeated-measures ANOVA revealed a main effect of emotion ( F(1,18) = 21.51, p < 0.001) on RTs indicating shorter reaction times to fearful (mean: 544 ms) than neutral faces (mean: 566 ms; see Figure 3). Furthermore, a significant Spatial Frequency × Equalization interaction was found ( F(1,18) = 4.96, p < 0.05). Further analysis of this interaction indicated that participants were overall faster at categorizing LSF faces than HSF faces in both non-EQ ( F(1,18) = 50.09, p < 0.001) and EQ conditions ( F(1,18) = 15.28, p < 0.001) but that this effect was stronger in the Non-EQ task (see Figure 3). 
D′
SF significantly influenced D′ ( F(1,18) = 5.83, p < 0.05), indicating that participants were more sensitive to fear in the LSF condition than in the HSF condition, irrespective of equalization (see Figure 3). There were no other significant effects or interactions on D′. 
Figure 3
 
Mean reaction times (RT) and D′ (+ SE) for all stimulus conditions in the Non-Equalized (Non-EQ) and Equalized (EQ) task.
Discussion
In the present study, we investigated the role and time course of LSF and HSF information in the decoding of facial expressions. Based on previous literature, we expected to find that an early modulation of P1 and/or N170 by facial expression is primarily mediated by LSF. The second goal of our study was to investigate whether SF influences on emotional processing genuinely reflect the differential involvement of LSF and HSF or whether these are merely due to contrast and luminance differences across SF. We used emotional stimuli that were known to affect ERPs in broadband viewing conditions (Blau et al., 2007) and filtered them to preserve either HSF or LSF. HSF and LSF faces were either matched (i.e., EQ condition) or not matched (i.e., non-EQ condition) on luminance and contrast. As expected, emotional expression modulated the amplitude of early visual ERPs, but only when presented in LSF. The increased P1 amplitude for LSF fear was observed in the right hemisphere and only when contrast/luminance was adjusted to fit full spectrum values (i.e., EQ condition). The N170 amplification related to LSF fear was observed in both EQ and non-EQ conditions. In contrast, HSF fear did not affect P1 or N170 amplitude. 
Several studies have reported increased P1 amplitude for negative emotional compared to neutral broadband face stimuli (Batty & Taylor, 2003; Pizzagalli et al., 1999, 2002). Increased P1 amplitude may reflect the allocation of attentional resources to emotionally significant stimuli (Eimer & Holmes, 2002; Lidell et al., 2004; Pourtois et al., 2005, 2004; Williams et al., 2004). The present findings indicate that P1 modulation by facial expression is primarily driven by LSF cues. This effect was only significant in the right hemisphere, which is consistent with several studies that show a right hemisphere advantage for the processing of faces and emotional expressions (e.g., Halgren, Raij, Marinkovic, Jousmaki, & Hari, 2000; Kawasaki et al., 2001; Pizzagalli et al., 1999, 2002; Rossion, Joyce, Corell, & Tarr, 2003; Streit, Wöwler, Brinkmeyer, Ihl, & Gaebel, 2000; Williams, Palmers, Lidell, Song, & Gordon, 2006). 
The present P1 findings are also in agreement with the study of Pourtois et al. (2005). These authors also showed that the early modulation of P1 by facial expression was primarily driven by LSF. They controlled for luminance and contrast differences across scales by using hybrid stimuli, built by superimposing an LSF-filtered image on an image filtered to preserve HSF. However, in the study of Pourtois et al. (2005), contrast/luminance was not equated between LSF and HSF within a given hybrid. In the present study, no hybrid stimuli were used, and since the contribution of luminance and contrast was systematically addressed, we provide unequivocal evidence that P1 effects are primarily driven by LSF, using HSF and LSF stimuli that are matched on contrast and luminance. Emotion effects were absent for both HSF and LSF stimuli in the non-equalized condition, possibly because early visual evoked potentials are very sensitive to luminance and contrast alterations (Blau et al., 2007; Ellemberg, Hammarrenger, Lepore, Roy, & Guillemot, 2001). Contrast and luminance differences across HSF and LSF trials in non-equalized blocks might have impeded early P1 differences related to LSF emotion. 
A recent ERP study found evidence for enhanced early brain responses in extrastriate areas to threatening non-facial stimuli presented in LSF at occipital electrodes, in a time range similar to P1 (Carrétie, Hinojosa, López-Martín, & Tapia, 2007). This suggests that the early response to emotion in visual areas in the time range of P1 is not face specific but possibly results from threat signals in general, irrespective of the object category, as was also suggested by Pourtois et al. (2005). 
In addition to the findings on P1, we also found an enhanced N170 to fearful expressions in LSF only. In contrast, Holmes, Winston et al. (2005) and Pourtois et al. (2005) did not report any effect of emotion or interaction between emotion and SF for N170 amplitude. However, Holmes, Winston et al. (2005) and Pourtois et al. (2005) also did not report emotion effects on N170 in broadband viewing conditions in contrast to previous literature (Batty & Taylor, 2003; Blau et al., 2007; Campanella et al., 2002; Eger, Jedynak, Iwaki, & Skrandies, 2003; Krombholz, Schaefer, & Boucsein, 2007; Stekelenburg & de Gelder, 2004). N170 is thought to be a face-specific component that reflects encoding of the structure of a face in such a way that it can be differentiated from others (Jacques & Rossion, 2006) and is influenced by emotional expressions (Batty & Taylor, 2003; Blau et al., 2007; Campanella et al., 2002; Stekelenburg & de Gelder, 2004). An absence of emotional processing at N170 could be related to the mixed presentation of face stimuli (66%) and objects (33%) of similar size and color in the study of Holmes, Winston et al. (2005) and the type of task (gender discrimination) in the study of Pourtois et al. (2005), which might take attention away from the emotional signals. To our knowledge, all previous studies using broadband stimuli also failed to report any N170 emotional effects when they used a mixed presentation of objects and face stimuli (Ashley et al., 2004; Holmes, Vuilleumier, & Eimer, 2003). Furthermore, Krombholz et al. (2007) report that tasks in which facial emotion is irrelevant might impede emotional processing at the level of N170. However, these suggestions are subject for future study and are beyond the scope of the present study. Nonetheless, the present results on N170 show that early (<170 ms) encoding of fearful facial expressions is based on LSF input. 
Importantly, the interaction between SF and Emotion was significant for both contrast/luminance EQ and Non-EQ images, indicating that the absence of an emotion effect at N170 for HSF faces is not merely due to HSF images being less luminant and containing less contrast than LSF images. 
Schyns, Petro, and Smith (2007) have provided interesting insights on how N170 reflects face processing over time. More specifically, subjects categorized stimuli consisting of samples of information, randomly sampled in x, y and SF dimensions of face images (see Figure 1 in Schyns et al., 2007). On each trial, a face was presented, which was partly revealed by a mid-gray mask punctured by a number of randomly located Gaussian windows (called “bubbles”) revealing information from five non-overlapping SF bands. By using classification image techniques, Schyns et al. (2007) investigated which combination of SF bands and image features was diagnostic for the categorization of each expression and how this related to N170. Interestingly, the ERP results of Schyns et al. (2007) showed that the initial processing of a fearful face starts by processing of the eyes at around 120-ms post-stimulus onset and is followed by the processing of mouth information. Their data further suggest that LSF information located around the mouth and HSF information located around the eyes may be particularly diagnostic for processing facial fear (see Figure 2 in Schyns et al., 2007); nevertheless, the authors did not explicitly address or quantify the HSF or LSF contribution to facial expression categorization. 
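The sampling procedure described above can be sketched as a revealing mask built from randomly placed Gaussian apertures. This is a rough illustration of the bubbles technique of Schyns et al. (2007); all parameter values and names here are ours, and the single-band version below omits their sampling across five SF bands:

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Build a 'bubbles' revealing mask: a sum of randomly centred
    Gaussian apertures, clipped to [0, 1]. The mask is applied as
    revealed = mask * face + (1 - mask) * mid_gray, so only the
    regions under the bubbles are visible on a given trial."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0, 1)
```

Classification images are then obtained by correlating, across many trials, the mask values at each location with the observer's categorization performance.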
In contrast to the study of Schyns et al. (2007), the contributions of the various facial features to the perception of fear cannot be disentangled in the present study, as only SF content was manipulated. However, it has been shown that facial expression processing, like many other aspects of face processing, is not the mere outcome of purely local feature analyses but rather relies on the integration of features into a so-called holistic representation (Calder, Young, Keane, & Dean, 2000; Maurer, Le Grand, & Mondloch, 2002; see Goffaux & Rossion, 2006, for a review). Since it decomposes face stimuli into parts, the bubbles technique may artificially induce a local bias in the emotional processing of faces (Goffaux & Rossion, 2006; Rossion, 2008). In agreement with this suggestion are the larger N170 amplitude in the left hemisphere and the faster information integration, as reflected by the shorter N170 peak latency in that hemisphere, in the study of Schyns et al. (2007). Left-hemisphere advantages for the processing of faces (and other types of stimuli) have been specifically linked to local, as opposed to global, processing, the latter being dominated by the right hemisphere (see Iidaka, Yamashita, Kashikura, & Yonekura, 2004).
In our ERP study, SF not only affected P1 and N170 amplitudes but also markedly modulated their latencies. LSF images were processed faster than HSF images irrespective of luminance or contrast equalization, as indicated by shorter P1 and N170 latencies for LSF as compared to HSF images. Contrast/luminance equalization in the present study only influenced the processing of HSF faces, irrespective of emotional content: HSF images were more luminant and had higher contrast in the EQ task than in the Non-EQ task. In contrast, the luminance and contrast manipulation only slightly affected LSF images across tasks, since their luminance and contrast were naturally close to the broadband values to which they were normalized. Consequently, HSF EQ faces were processed faster than HSF Non-EQ faces, as indicated by faster RTs as well as shorter P1 and N170 latencies. The higher contrast transmitted in HSF EQ images also induced larger P1 amplitudes.
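The equalization step described above can be sketched in a few lines. This illustrates the general approach of matching a filtered image's mean luminance and RMS contrast to a broadband reference; it is not the exact pipeline used in the study, and the use of the standard deviation as the RMS-contrast measure is an assumption.

```python
import numpy as np

def equalize(img, ref):
    """Match an SF-filtered image's mean luminance and RMS contrast
    to those of a broadband reference image."""
    out = img - img.mean()        # remove the image's own mean luminance
    rms = out.std()
    if rms > 0:
        out = out * (ref.std() / rms)   # match RMS contrast to the reference
    return out + ref.mean()             # match mean luminance to the reference

rng = np.random.default_rng(1)
broadband = rng.random((64, 64))
hsf = (broadband - broadband.mean()) * 0.2   # crude low-energy stand-in for a high-pass image
eq = equalize(hsf, broadband)
```

After this step, `eq.mean()` and `eq.std()` match those of `broadband`, so any residual LSF/HSF differences in the EQ condition cannot be attributed to luminance or contrast.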
SF influences on ERP latency are consistent with electrophysiological face studies that reported shorter ERP latencies to LSF than to HSF facial stimuli (Halit, de Haan, Schyns, & Johnson, 2006; Hsiao, Hsieh, Lin, & Chang, 2005; McCarthy, Puce, Belger, & Allison, 1999), as well as with visual evoked potential studies that reported SF effects on ERP latency for non-facial stimuli (Mihaylova, Stomonyakov, & Vassilev, 1999; Musselwhite & Jeffreys, 1985). This temporal precedence of LSF over HSF is consistent with previous findings that the neuronal pathways sensitive to LSF and HSF have dissociable time scales, with faster cortical arrival of information processed in the magnocellular (mainly sensitive to LSF) than in the parvocellular (mainly sensitive to HSF) system (Bullier, Schall, & Morel, 1996; Klistorner, Crewther, & Crewther, 1997; Maunsell et al., 1999; Schroeder, Tenke, Arezzo, & Vaughan, 1989; see Laycock, Crewther, & Crewther, 2007, for a review).
In Experiment 2, we investigated whether the LSF advantage for facial expression processing found in the ERPs of Experiment 1 would be reflected in RTs when subjects complete an active categorization task. In contrast to the ERP findings, we found an effect of facial expression for both HSF and LSF faces in the EQ as well as the Non-EQ task: participants decided more quickly that a face was fearful than that it was neutral, irrespective of SF content. This is consistent with several studies indicating that stimuli signaling threat receive preferential attention over neutral stimuli (see Holmes, Green et al., 2005, for a review). RTs to HSF stimuli were overall slower than RTs to LSF stimuli, consistent with behavioral studies indicating faster processing of LSF than HSF stimuli (Coin, Versace, & Tiberghien, 1992; Parker, Lishman, & Hughes, 1992, 1997). The lack of an interaction between SF and emotion in RTs could be related to the fact that RT reflects the final stage of information processing, by which time the facial expression is available from both LSF and HSF content. Our results thus suggest that HSF information contributes to fear processing, but at a later stage (>170 ms). In addition to these reaction time effects, and consistent with our ERP findings, analysis of SDT measures showed that subjects were more sensitive to fear in LSF than in HSF faces, irrespective of contrast/luminance equalization. This matches a recent study showing that a connectionist model classified fearful faces better on the basis of LSF than HSF input (Mermillod, Guyader, Vuilleumier, Alleysson, & Marendaz, 2005).
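The SDT sensitivity measure referred to above is d′, computed as the difference between the z-transformed hit and false-alarm rates (Stanislaw & Todorov, 1999, discuss its calculation and corrections for extreme rates). The sketch below is illustrative only: the log-linear-style correction and the trial counts are made up, not the study's data.

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """d' = Z(hit rate) - Z(false-alarm rate).
    A log-linear-style correction (adding 0.5 to each count's numerator
    and 1 to its denominator) keeps rates of 0 or 1 finite; the exact
    correction used here is illustrative."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for detecting fear in LSF vs. HSF faces:
d_lsf = d_prime(hits=45, misses=5, fas=8, crs=42)
d_hsf = d_prime(hits=38, misses=12, fas=15, crs=35)
```

With these made-up counts, `d_lsf` exceeds `d_hsf`, mirroring the pattern of higher sensitivity to fear in LSF faces reported above.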
As a final point, we would like to discuss some limitations of the present study. It specifically aimed at investigating the roles of HSF and LSF in facial expression processing. Like previous studies, it did not include intermediate SF bands, which are known to best carry face identity (Costen et al., 1994; Costen, Parker, & Craw, 1996; Parker & Costen, 1999) and could also play a role in the rapid processing of facial expressions. To our knowledge, the contribution of intermediate SF to facial expression processing has not been studied; it could be addressed by measuring behavioral and electrophysiological responses to fear while the SF content of face images is parametrically shifted from low to high spatial frequencies (see Tanskanen, Näsänen, Montez, Päällysaho, & Hari, 2005, for a similar approach using non-emotional faces).
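Such a parametric shift could be implemented by sweeping the center frequency of a band-pass filter across the spectrum. The sketch below uses a log-Gaussian annulus in the Fourier domain with an illustrative bandwidth and viewing resolution; this parameterization is an assumption, not the filter design of the cited studies.

```python
import numpy as np

def bandpass(img, center_cpd, bandwidth_oct, px_per_deg):
    """Band-pass filter an image with a log-Gaussian annulus in the
    Fourier domain, centered on `center_cpd` cycles/degree with a
    bandwidth of `bandwidth_oct` octaves (illustrative parameterization)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * px_per_deg   # vertical frequency, cycles/degree
    fx = np.fft.fftfreq(w)[None, :] * px_per_deg   # horizontal frequency, cycles/degree
    f = np.hypot(fy, fx)
    f[0, 0] = 1e-9                                 # avoid log(0) at DC
    gain = np.exp(-(np.log2(f / center_cpd) ** 2) / (2 * bandwidth_oct ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

rng = np.random.default_rng(2)
img = rng.random((128, 128))                       # stand-in for a face image
# Parametric sweep from low to high spatial frequencies:
bands = [bandpass(img, c, bandwidth_oct=1.0, px_per_deg=32) for c in (1, 2, 4, 8, 16)]
```

Presenting each band while recording ERPs (or RTs) would trace how fear sensitivity changes as image content moves from coarse to fine scales.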
Based on the observation that amygdala activity to fearful faces is related to the processing of LSF information, Vuilleumier et al. (2003) argued that the amygdala has rapid access to the LSF components of visual information, either through a direct subcortical tecto-pulvinar route preferentially tuned to LSF information or through the initial feedforward sweep within the visual system. Here, we provide high-temporal-resolution evidence that LSF input is processed fast and drives the early detection of fearful expressions, as indicated by enhanced P1 and N170 responses across ventro-temporal areas for LSF only. Since the amygdala modulates activity in visual areas via feedback signals (Morris et al., 1998; Rotshtein et al., 2001), it may rapidly enhance ventral temporal visual responses to faces at P1 and N170 latencies. This hypothesis requires further investigation combining EEG and fMRI methodology with connectivity analyses.
In sum, our behavioral as well as electrophysiological findings indicate that LSF information is important for the rapid extraction of facial expression information. For the first time, we have shown that the modulation of the face-specific N170 by facial expression is primarily mediated by LSF information and that this effect cannot be explained by differences in luminance and contrast between HSF and LSF faces. Furthermore, in a behavioral study we found that, in addition to LSF, HSF also carry important information about facial expression, which may become relevant at a later stage of information processing. The present ERP findings may be important for all studies investigating the role of SF in facial expression recognition, and especially for studies of children with psychiatric disorders such as autism, for whom deficits in LSF processing have been suggested (Johnson, 2005). For these populations, the passive ERP technique used in the current study is an excellent method.
Acknowledgments
The authors gratefully thank Pia Jansen and Sabine Koenraads for helping with the data collection and Judith Peters for previous comments on the manuscript. 
Commercial relationships: none. 
Corresponding author: Petra Vlamings. 
Email: p.vlamings@psychology.unimaas.nl. 
Address: Biological Developmental Psychology Section, Faculty of Psychology, Universiteit Maastricht, P.O. Box 616, 6200 MD Maastricht, The Netherlands. 
References
Ashley, V. Vuilleumier, P. Swick, D. (2004). Time course and specificity of event-related potentials to emotional expressions. Neuroreport, 15, 211–216.
Batty, M. Taylor, M. J. (2003). Early processing of the six basic facial emotional expressions. Brain Research: Cognitive Brain Research, 17, 613–620.
Bentin, S. Allison, T. Puce, A. Perez, E. McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565.
Bex, P. J. Makous, W. (2002). Spatial frequency, phase, and the contrast of natural images. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 19, 1096–1106.
Blau, V. C. Maurer, U. Tottenham, N. McCandliss, B. D. (2007). The face-specific N170 component is modulated by emotional facial expression. Behavioral and Brain Functions, 3, 7.
Boeschoten, M. A. Kenemans, J. L. van Engeland, H. Kemner, C. (2007). Face processing in Pervasive Developmental Disorder (PDD): The roles of expertise and spatial frequency. Journal of Neural Transmission, 114, 1619–1629.
Bullier, J. Schall, J. D. Morel, A. (1996). Functional streams in occipito-frontal connections in the monkey. Behavioural Brain Research, 76, 89–97.
Calder, A. J. Young, A. W. Keane, J. Dean, M. (2000). Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception and Performance, 26, 527–551.
Campanella, S. Quinet, P. Bruyer, R. Crommelinck, M. Guerit, J. M. (2002). Categorical perception of happiness and fear facial expressions: An ERP study. Journal of Cognitive Neuroscience, 15, 210–227.
Carrétie, L. Hinojosa, J. A. López-Martín, S. Tapia, M. (2007). An electrophysiological study on the interaction between emotional content and spatial frequency of visual stimuli. Neuropsychologia, 45, 1187–1195.
Coin, C. Versace, R. Tiberghien, G. (1992). Role of spatial frequencies and exposure duration in face processing: Potential consequences on the memory format of facial representations. European Bulletin of Cognitive Psychology, 12, 79–98.
Costen, N. P. Parker, D. M. Craw, I. (1994). Spatial content and spatial quantisation effects in face recognition. Perception, 23, 129–146.
Costen, N. P. Parker, D. M. Craw, I. (1996). Effects of high-pass and low-pass spatial filtering on face identification. Perception & Psychophysics, 58, 602–612.
Deruelle, C. Fagot, J. (2005). Categorizing facial identities, emotions, and genders: Attention to high- and low-spatial frequencies by children and adults. Journal of Experimental Child Psychology, 90, 172–184.
Deruelle, C. Rondan, C. Gepner, B. Tardif, C. (2004). Spatial frequency and face processing in children with autism and Asperger syndrome. Journal of Autism and Developmental Disorders, 34, 199–210.
Deruelle, C. Rondan, C. Salle-Collemiche, X. Bastard-Rosset, D. Da Fonséca, D. (2008). Attention to low- and high-spatial frequencies in categorizing facial identities, emotions and gender in children with autism. Brain and Cognition, 66, 115–123.
De Valois, R. L. De Valois, K. K. (1988). Spatial Vision. New York: Oxford University Press.
Eger, E. Jedynak, A. Iwaki, T. Skrandies, W. (2003). Rapid extraction of emotional expression: Evidence from evoked potential fields during brief presentation of face stimuli. Neuropsychologia, 41, 808–817.
Eimer, M. Holmes, A. (2002). An ERP study on the time course of emotional face processing. Neuroreport, 25, 427–431.
Ellemberg, D. Hammarrenger, B. Lepore, F. Roy, M. S. Guillemot, J. P. (2001). Contrast dependency of VEPs as a function of spatial frequency: The parvocellular and magnocellular contributions to human VEPs. Spatial Vision, 15, 99–111.
Goffaux, V. Gauthier, I. Rossion, B. (2003). Spatial scale contribution to early visual differences between face and object processing. Brain Research: Cognitive Brain Research, 16, 416–424.
Goffaux, V. Rossion, B. (2006). Faces are “spatial”—Holistic face perception is supported by low spatial frequencies. Journal of Experimental Psychology: Human Perception and Performance, 32, 1023–1039.
Goldstein, E. B. (1999). Sensation and perception. Pacific Grove, CA: Brooks/Cole.
Gomez Gonzales, C. M. Clark, V. P. Fan, S. Luck, S. J. Hillyard, S. A. (1994). Sources of attention sensitive visual event-related potentials. Brain Topography, 7, 41–51.
Goren, D. Wilson, H. R. (2006). Quantifying facial expression recognition across viewing conditions. Vision Research, 46, 1253–1262.
Gratton, G. Coles, M. G. Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55, 468–484.
Halgren, E. Raij, T. Marinkovic, K. Jousmaki, V. Hari, R. (2000). Cognitive response profile of the human fusiform face area as determined by MEG. Cerebral Cortex, 10, 69–81.
Halit, H. de Haan, M. Schyns, P. G. Johnson, M. H. (2006). Is high-spatial frequency information used in the early stages of face detection? Brain Research, 1117, 154–161.
Heinze, H. J. Mangun, G. R. Burchert, W. Hinrichs, H. Scholz, M. Münte, T. F. (1994). Combined spatial and temporal imaging of brain activity during selective attention in humans. Nature, 372, 543–546.
Henson, R. N. Goshen-Gottstein, Y. Ganel, T. Otten, L. J. Quayle, A. Rugg, M. D. (2003). Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cerebral Cortex, 13, 793–805.
Holmes, A. Green, S. Vuilleumier, P. (2005). The involvement of distinct visual channels in rapid attention towards fearful facial expressions. Cognition and Emotion, 19, 899–922.
Holmes, A. Vuilleumier, P. Eimer, M. (2003). The processing of emotional facial expression is gated by spatial attention: Evidence from event-related brain potentials. Brain Research: Cognitive Brain Research, 16, 174–184.
Holmes, A. Winston, J. S. Eimer, M. (2005). The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression. Brain Research: Cognitive Brain Research, 25, 508–520.
Hsiao, F. J. Hsieh, J. C. Lin, Y. Y. Chang, Y. (2005). The effects of face spatial frequencies on cortical processing revealed by magnetoencephalography. Neuroscience Letters, 380, 54–59.
Iidaka, T. Yamashita, K. Kashikura, K. Yonekura, Y. (2004). Spatial frequency of visual image modulates neural responses in the temporo-occipital lobe: An investigation with event-related fMRI. Brain Research: Cognitive Brain Research, 18, 196–204.
Jacques, C. Rossion, B. (2004). Concurrent processing reveals competition between visual representations of faces. Neuroreport, 15, 2417–2421.
Jacques, C. Rossion, B. (2006). The speed of individual face categorization. Psychological Science, 17, 485–492.
Johnson, M. H. (2005). Subcortical face processing. Nature Reviews, Neuroscience, 6, 766–774.
Kawasaki, H. Kaufman, O. Damasio, H. Damasio, A. R. Granner, M. Bakken, H. (2001). Single-neuron responses to emotional visual stimuli recorded in human ventral prefrontal cortex. Nature Neuroscience, 4, 15–16.
Klistorner, A. Crewther, D. P. Crewther, S. G. (1997). Separate magnocellular and parvocellular contributions from temporal analysis of the multifocal VEP. Vision Research, 37, 2161–2169.
Krombholz, A. Schaefer, F. Boucsein, W. (2007). Modification of N170 by different emotional expression of schematic faces. Biological Psychology, 76, 156–162.
Laycock, R. Crewther, S. G. Crewther, D. P. (2007). A role for the ‘magnocellular advantage’ in visual impairments in neurodevelopmental and psychiatric disorders. Neuroscience and Biobehavioral Reviews, 31, 363–376.
Liddell, B. J. Williams, L. M. Rathjen, J. Shevrin, H. Gordon, E. (2004). A temporal dissociation of subliminal versus supraliminal fear perception: An event-related potential study. Journal of Cognitive Neuroscience, 16, 479–486.
Loftus, G. R. Harley, E. M. (2005). Why is it easier to identify someone close than far away? Psychonomic Bulletin & Review, 12, 43–65.
Maunsell, J. H. Ghose, G. M. Assas, J. A. McAdams, C. J. Boudreau, C. E. Noerager, B. D. (1999). Visual response latencies of magnocellular and parvocellular LGN neurons in macaque monkeys. Visual Neuroscience, 16, 1–14.
Maurer, D. Le Grand, R. Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.
McCarthy, G. Puce, A. Belger, A. Allison, T. (1999). Electrophysiological studies of human face perception II: Response properties of face-specific potentials generated in occipitotemporal cortex. Cerebral Cortex, 9, 431–444.
Mermillod, M. Guyader, N. Vuilleumier, P. Alleysson, D. Marendaz, C. (2005). How diagnostic are spatial frequencies for fear recognition? In Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 1501–1506). Mahwah, NJ: Lawrence Erlbaum Associates.
Mihaylova, M. Stomonyakov, V. Vassilev, A. (1999). Peripheral and central delay in processing high spatial frequencies: Reaction time and VEP latency studies. Vision Research, 39, 699–705.
Morris, J. S. Friston, K. J. Buchel, C. Frith, C. D. Young, A. W. Calder, A. J. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain, 121, 47–57.
Morris, J. S. Ohman, A. Dolan, R. J. (1999). A subcortical pathway to the right amygdala mediating “unseen” fear. Proceedings of the National Academy of Sciences of the United States of America, 96, 1680–1685.
Musselwhite, M. J. Jeffreys, D. A. (1985). The influence of spatial frequency on the reaction times and evoked potentials recorded to grating pattern stimuli. Vision Research, 25, 1545–1555.
Parker, D. M. Costen, N. P. (1999). One extreme or the other or perhaps the golden mean? Issues of spatial resolution in face processing. Current Psychology, 18, 118–127.
Parker, D. M. Lishman, J. R. Hughes, J. (1992). Temporal integration of spatially filtered visual images. Perception, 21, 147–160.
Parker, D. M. Lishman, J. R. Hughes, J. (1997). Evidence for the view that temporospatial integration in vision is temporally anisotropic. Perception, 26, 169–180.
Pizzagalli, D. Regard, M. Lehmann, D. (1999). Rapid emotional face processing in the human right and left brain hemispheres: An ERP study. Neuroreport, 10, 2691–2698.
Pizzagalli, D. A. Lehmann, D. Hendrick, A. M. Regard, M. Pascual-Marqui, R. D. Davidson, R. J. (2002). Affective judgments of faces modulate early activity (approximately 160 ms) within the fusiform gyri. Neuroimage, 16, 663–677.
Pourtois, G. Dan, E. S. Grandjean, D. Sander, D. Vuilleumier, P. (2005). Enhanced extrastriate visual response to bandpass spatial frequency filtered fearful faces: Time course and topographic evoked-potentials mapping. Human Brain Mapping, 26, 65–79.
Pourtois, G. Grandjean, D. Sander, D. Vuilleumier, P. (2004). Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cerebral Cortex, 14, 619–633.
Righart, R. de Gelder, B. (2005). Context influences early perceptual analysis of faces: An electrophysiological study. Cerebral Cortex, 16, 1249–1257.
Rossion, B. (2008). Picture-plane inversion leads to qualitative changes of face perception. Acta Psychologica, 128, 274–289.
Rossion, B. Campanella, S. Gomez, C. M. Delinte, A. Debatisse, D. Liard, L. (1999). Task modulation of brain activity related to familiar and unfamiliar face processing: An ERP study. Clinical Neurophysiology, 110, 449–463.
Rossion, B. Joyce, C. A. Cottrell, G. W. Tarr, M. J. (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage, 20, 1609–1624.
Rotshtein, P. Malach, R. Hadar, U. Graif, M. Hendler, T. (2001). Feeling or features: Different sensitivity to emotion in high-order visual cortex and amygdala. Neuron, 32, 747–757.
Schroeder, C. E. Tenke, C. E. Arezzo, J. C. Vaughan, Jr., H. G. (1989). Timing and distribution of flash-evoked activity in the lateral geniculate nucleus of the alert monkey. Brain Research, 477, 183–195.
Schyns, P. G. Oliva, A. (1999). Dr Angry and Mr Smile: When categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition, 69, 243–265.
Schyns, P. G. Petro, L. S. Smith, M. L. (2007). Dynamics of visual information integration in the brain for categorizing facial expressions. Current Biology, 17, 1580–1585.
Stanislaw, H. Todorov, N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31, 137–149.
Stekelenburg, J. J. de Gelder, B. (2004). The neural correlates of perceiving human bodies: An ERP study on the body-inversion effect. Neuroreport, 15, 777–780.
Streit, M. Wöwler, W. Brinkmeyer, J. Ihl, R. Gaebel, W. (2000). Electrophysiological correlates of emotional and structural face processing in humans. Neuroscience Letters, 278, 13–16.
Tanskanen, T. Näsänen, R. Montez, T. Päällysaho, J. Hari, R. (2005). Face recognition and cortical responses show similar sensitivity to noise spatial frequency. Cerebral Cortex, 15, 526–534.
Tottenham, N. Borscheid, A. Ellertsen, K. Marcus, D. J. Nelson, C. A. (2002). Categorization of facial expressions in children and adults: Establishing a larger stimulus set. Journal of Cognitive Neuroscience, 14,
Vuilleumier, P. Armony, J. L. Driver, J. Dolan, R. J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6, 624–631.
Vuilleumier, P. Richardson, M. P. Armony, J. L. Driver, J. Dolan, R. J. (2004). Distant influences of amygdala lesion on visual cortical activation during emotional face processing. Nature Neuroscience, 7, 1271–1278.
Whalen, P. J. Rauch, S. L. Etcoff, N. L. McInerney, S. C. Lee, M. B. Jenike, M. A. (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience, 18, 411–418.
Williams, L. M. Liddell, B. J. Rathjen, J. Brown, K. J. Gray, J. Phillips, M. (2004). Mapping the time course of nonconscious and conscious perception of fear: An integration of central and peripheral measures. Human Brain Mapping, 21, 64–74.
Williams, L. M. Palmer, D. Liddell, B. J. Song, L. Gordon, E. (2006). The ‘when’ and ‘where’ of perceiving signals of threat versus non-threat. Neuroimage, 31, 458–467.
Winston, J. S. Vuilleumier, P. Dolan, R. J. (2003). Effects of low-spatial frequency components of fearful faces on fusiform cortex activity. Current Biology, 13, 1824–1829.
Figure 1
 
Example of stimuli used in the Non-EQ task in which HSF and LSF images differ in luminance and contrast, and the EQ task in which HSF and LSF stimuli were matched on contrast and luminance.
Figure 2
 
Grand averages for HSF (black) and LSF (gray) fearful (dashed) and neutral (solid) faces at channels P7/P8 and PO7/PO8. Fearful faces elicit larger amplitudes than neutral faces in the LSF condition only. The topographical distribution of the ERP difference between neutral and fearful faces (Neutral minus Fear, 160–220 ms) is also shown for N170; note that the difference covers the ventro-temporal areas.
Figure 3
 
Mean reaction times (RT) and d′ (+SE) for all stimulus conditions in the Non-Equalized (Non-EQ) and Equalized (EQ) tasks.