Research Article  |   May 2009
A study of N250 event-related brain potential during face and non-face detection tasks
Author Affiliations
  • Shahin Nasr
    School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
    Research Center for Brain and Cognitive Sciences, School of Medicine, Shaheed Beheshti University, Tehran, Iran
    http://www.visionlab.ir/; sh_nasr@ipm.ir
  • Hossein Esteky
    School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
    Neuroscience Research Center, School of Medicine, Shaheed Beheshti University, Tehran, Iran
    http://www.visionlab.ir/; esteky@ipm.ir
Journal of Vision May 2009, Vol.9, 5. doi:10.1167/9.5.5
Abstract

Face perception relies on the activation of a complex set of neural modules. In this study, we assessed the stimulus selectivity of the occipitotemporal N250 ERP component and the possible link between its neural substrates and the modules underlying the preceding (N170/VPP) and following (P400) category-selective ERPs. We recorded N250 during face and leaf detection tasks while varying stimulus visibility from trial to trial using a backward masking paradigm. Our results revealed that N250, but not the other tested potentials, was exclusively sensitive to the visibility of faces, even when non-face stimuli served as the task target. We also found a correlation between evoked N170 and N250 in response to face stimuli and, to a lesser extent, in response to non-face objects, irrespective of the subjects' task. Besides N250, P400 also showed a strong correlation with N170, but here the strength of the correlation was not affected by stimulus category. Interestingly, despite the correlation of both N250 and P400 with N170, we did not find any correlation between N250 and P400, suggesting that the modules underlying these ERP components belong to two different face-processing pathways. We suggest that N250 is initiated by N170 and indexes processes exclusively responsible for encoding faces.

Introduction
Face perception is a cognitive capability of human and non-human primates that influences our social life to a great extent. Evidence from different lines of research has shown that face perception and recognition do not result from the activation of a single module within one time span. Rather, they seem to result from interactions among many complex neural activities that process different aspects of perceived face images (for review, see Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000). In humans, previous studies of scalp and intracranial event-related potentials (ERPs) have found different face-specific brain potentials that index various stages of face perception (Allison, Puce, Spencer, & McCarthy, 1999; Bentin, Allison, Puce, Perez, & McCarthy, 1996; Bentin & Deouell, 2000; Itier & Taylor, 2002, 2004; Jeffreys, 1996; Paller, Gonsalves, Grabowecky, Bozic, & Yamada, 2000; Schweinberger, Pfutze, & Sommer, 1995; Schweinberger, Pickering, Jentzsch, Burton, & Kaufmann, 2002). Although these potentials have revealed the timing and characteristics of their corresponding neural processes, the relationships among these processes are not yet clear. 
Among the different brain potentials, the occipitotemporal N170 is one of the earliest ERP components to show selective activity in response to face images (Bentin et al., 1996; Itier & Taylor, 2004; Jeffreys, 1996). Interestingly, this ERP component, like subjects' behavior, can be influenced by manipulating stimulus properties such as orientation. Many studies have shown that N170 exhibits delayed and enhanced peak activity in response to inverted and/or negative-contrast faces (Bentin et al., 1996; Itier & Taylor, 2002; Rossion, Gauthier, Tarr, Despland, Linotte, & Bruyer, 2000). This potential seems to be largely insensitive to face identity and familiarity (Bentin & Deouell, 2000; Schweinberger et al., 2002; but see also Campanella et al., 2000). However, it has also been shown that in prosopagnosic patients N170 amplitude is smaller and less selective than in controls (Bentin, Deouell, & Soroker, 1999; Eimer & McCarthy, 1999). On the basis of these findings, it is possible that face identification is initiated by the processes linked to N170 and relies on the information they prepare (Bentin, Golland, Flevaris, Robertson, & Moscovitch, 2006). 
Besides the occipitotemporal N170, the vertex positive potential (VPP) is also a face-selective ERP component, predominantly recorded from frontal electrode leads at the same time as N170 (Jeffreys, 1993, 1996; Seeck & Grüsser, 1992). VPP characteristics seem to be similar to those of N170, and its amplitude and latency are correlated with those of N170, suggesting that similar modules might be responsible for their generation (Itier & Taylor, 2002; Joyce & Rossion, 2005). Despite these similarities, other studies have shown that VPP stimulus selectivity differs to some extent from that of N170, since this component, in contrast to N170, also responds strongly to face images of non-primate animals and some other complex stimuli (Jeffreys & Tukmachi, 1992). This duality is further supported by source localization findings indicating that these two components could be generated in different brain areas (Bötzel, Schulze, & Stodieck, 1995). While neither N170 nor VPP shows any sensitivity to face identity or familiarity (Bentin & Deouell, 2000; Schweinberger et al., 2002; Tanaka, Curran, Porterfield, & Collin, 2006), it is now accepted that these components index processes responsible for face structural encoding (Bentin & Deouell, 2000; Eimer, 2000; Itier & Taylor, 2002) and probably trigger other processes responsible for identity encoding (Bentin et al., 2006). 
The earliest ERP component that shows strong sensitivity to face identity is the occipitotemporal N250, generated subsequent to N170 and peaking 250–300 ms after face stimulus onset at the same electrode sites (Bentin & Deouell, 2000; Schweinberger et al., 1995, 2002; Tanaka et al., 2006). Most of the evidence for N250 sensitivity to face identity comes from studies examining the effect of face familiarity (Bentin & Deouell, 2000; Tanaka et al., 2006) or stimulus repetition and priming (Martens, Schweinberger, Kiefer, & Burton, 2006; Schweinberger, Huddy, & Burton, 2004; Schweinberger et al., 1995, 2002; Trenner, Schweinberger, Jentzsch, & Sommer, 2004). While familiarity experiments compare neural activity in response to familiar/famous faces versus unfamiliar ones, the latter studies rely on detecting modules that are involved in stimulus encoding and whose activity varies when the stimulus is presented repeatedly (Henson et al., 2003; Wiggs & Martin, 1998), probably due to repetition suppression (Desimone, 1996). Despite this methodological difference, Bentin and Deouell (2000) and Schweinberger et al. (1995, 2002, 2004) have shown that N250 is sensitive to the identity of face stimuli: increasing face familiarity or priming can, respectively, increase or decrease N250 amplitude. 
However, due to the task procedures and/or the stimuli used in these studies, two important questions remain unanswered about the nature of the N250 component. First, in contrast to N170, whose selectivity has been well characterized, it is not clear whether N250 represents a face-specific activity similar to N170. No previous study has systematically examined N250 selectivity for the face category. As far as we know, the only evidence for the face selectivity of N250 comes from studies showing that the N250 priming effect is confined to face stimuli and that repetition of non-face images does not reduce N250 magnitude (Schweinberger et al., 2004). However, since face identification requires far less attention than identification of other objects (Reddy, Reddy, & Koch, 2006) and there is an endogenous attentional bias toward face images compared to other object categories (Bindemann, Burton, Langton, Schweinberger, & Doherty, 2007; Langton, Law, Burton, & Schweinberger, 2008; Theeuwes & Van der Stigchel, 2006), the preferential N250 modulation in response to priming of faces could be due to low attentional modulation during priming tasks for non-face objects. Thus it is not clear whether, in more demanding conditions, encoding non-face objects also results in similar N250 modulation. 
Second, according to several models of face recognition (Bruce & Young, 1986), the activity (code) generated by structural encoding modules should be used by successive processes responsible for face identity encoding. Although N170 and N250 seem to index neural processes that are, respectively, responsible for face structural encoding and identification, there is no evidence supporting a link between these two processes. Moreover, the relationship between N250 and subsequent ERP components that are also selective for face identity and familiarity, such as P400 (Curran, Tanaka, & Weiskopf, 2002; Paller et al., 2000), is also not yet clear. 
To answer these two questions, we measured the N170, VPP, N250, and P400 ERP components in thirteen human subjects while they performed two alternative tasks: face detection and leaf detection. Stimuli were selected from face, leaf, and other non-face categories. To assess the selectivity of these ERP components and their relationships with each other, we varied stimulus-driven activity by randomly varying stimulus visibility from trial to trial using a backward masking paradigm (Breitmeyer & Ogmen, 2000). The results showed that N170 and VPP were sensitive to the visibility of both face and non-face stimuli, whereas N250 was highly selective for faces and its activity did not vary with the visibility of non-face stimuli. In addition, we found a strong correlation exclusively between N170 and N250, and not between VPP and N250. The N170–N250 correlation was stronger during face than during non-face object presentation, even when the non-face objects served as the task target. We did not find any correlation between N250 and the subsequent P400, suggesting that these components are members of different face-processing pathways. 
Methods
Participants
Thirteen right-handed male subjects, aged 22.9 ± 2.8 years (mean ± SD), with normal or corrected-to-normal vision and a medical or engineering background, were paid to participate in this experiment. Written informed consent, in accordance with the principles of the Declaration of Helsinki, was obtained from all subjects before the experiments. All experiments were approved by the Shaheed Beheshti Medical University Ethics Committee and the Iranian Society for Physiology and Pharmacology. 
Stimuli
Stimuli were selected from six object categories (faces, leaves, hands, cars, fruits, and chairs), with 50 stimuli in each category. Face stimuli were all generated with the FaceGen Modeller (www.facegen.com), while the other stimuli were selected from the commercially available Hemera photo-object set. To reduce inter-stimulus variability within each category, all stimuli were shown in front view except for cars, which were all shown in side view. Stimuli were gray-scaled and iso-luminant and were presented on a gray background (14.8 cd/m²) of a 19-inch monitor (LG F900P) with a 100-Hz refresh rate using the Matlab Psychophysics Toolbox. Stimulus size was adjusted so that all stimuli had the same length along their longest dimension (7.3 degrees of visual angle at a 70-cm viewing distance). Noise stimuli, used for masking, were randomly generated on each trial. They were also gray-scaled, but they were larger than the other stimuli, subtending 10 × 10 degrees of visual angle. 
Procedure and task
Subjects participated in two different experiments: (1) "Face Detection" and (2) "Leaf Detection." The subjects' task was defined at the beginning of each block and did not change within the block. Task order was counterbalanced across subjects, and each subject performed both tasks in a single recording session. 
The stimulus presentation method was exactly the same for the two experiments. Stimuli were presented for a very short period (10 ms) at the center of the screen. Each stimulus was followed by a noise mask that remained visible for 300 ms. There was a 1500 ± 100 ms interval between the mask offset and the start of the next trial. The stimulus was selected pseudorandomly on each trial, with the constraint that on 33.3% of trials it was selected from the face set, on 33.3% of trials it was selected from the leaf set, and in the rest of the trials it was selected from the other four object categories. The presentation probability of a target stimulus (i.e., faces in the face detection task and leaves in the leaf detection task) was 33.3% in each corresponding block and chance levels for target detection and distracter rejection were 33.3% and 66.6%, respectively. Stimulus onset asynchrony (SOA) between the presented stimulus and the following mask was also selected randomly on each trial and could be either 0 (i.e., mask-only trials), 10, 20, 30, or 500 ms (with 20% probability for each SOA). 
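As an illustration, the pseudorandom trial structure just described (equal category probabilities, and one of five SOAs drawn with equal probability on each trial) can be sketched as follows. This is our own minimal sketch, not the authors' code; the experiment itself was run with the Matlab Psychophysics Toolbox, and all names here are assumptions:

```python
import random

CATEGORIES = ["face", "leaf", "other"]  # "other" pools hands, cars, fruits, and chairs
CATEGORY_P = [1 / 3, 1 / 3, 1 / 3]      # presentation probability of each category
SOAS_MS = [0, 10, 20, 30, 500]          # 0 ms = mask-only trial; each SOA is equally likely

def make_trials(n_trials, seed=0):
    """Return a pseudorandom list of (category, soa_ms) trial descriptions."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        category = rng.choices(CATEGORIES, weights=CATEGORY_P)[0]
        soa_ms = rng.choice(SOAS_MS)    # SOA drawn independently on every trial
        trials.append((category, soa_ms))
    return trials

trials = make_trials(1200)              # 1200 trials per task
```

With these probabilities, roughly a third of the trials present the target category, matching the 33.3% chance level for target detection described above.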
During the two tasks, the subjects were required to fixate on the center of the screen while their eye movements were monitored by EOG signals (see below). They were instructed to report whether the presented stimulus was a target or a distracter by pressing one of two keys on a keypad (i.e., a two-alternative forced-choice task). Accuracy and speed were both stressed. Subjects completed 1200 trials in 20 blocks for each task. 
Recording and data analysis
EEG recordings were made using a Neuroscan system with 32 Ag/AgCl sintered electrodes mounted on an elastic cap. Data were acquired continuously in AC mode (0.05–30 Hz) with a 1-kHz sampling rate. The reference was linked mastoids, with the ground electrode at AFz. Four electrodes monitored horizontal and vertical eye movements for offline artifact rejection. Channel impedance was kept below 5 kΩ. Data were resampled offline at 250 Hz. Baseline activity was corrected over a 100-ms pre-stimulus interval. A separate analysis eliminated trials with eye movements and eye blinks by detecting trials on which the peak-to-peak voltage in the horizontal and vertical eye movement channels exceeded 30 μV. 
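The peak-to-peak rejection rule can be sketched in a few lines. This is our own illustration, assuming each trial's EOG data are available as per-channel sample lists in microvolts; the function name and data layout are assumptions:

```python
THRESHOLD_UV = 30.0  # peak-to-peak rejection threshold in microvolts

def keep_trial(eog_trial, threshold_uv=THRESHOLD_UV):
    """eog_trial: list of per-channel sample lists (microvolts), e.g. the
    horizontal and vertical EOG traces of one trial. The trial is kept
    only if the peak-to-peak voltage stays below the threshold on every
    EOG channel."""
    return all(max(ch) - min(ch) < threshold_uv for ch in eog_trial)
```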
We measured the occipitotemporal N170 (focused primarily over the P7 and P8 electrode sites) by taking the ERP peak amplitude during 150–250 ms after stimulus onset for each subject. In contrast to previous studies, we found the VPP component more strongly at the FP1 and FP2 electrodes than at more posterior sites such as Fz (see, e.g., Itier & Taylor, 2004) and Cz (see, e.g., Joyce & Rossion, 2005). N250 and post-VPP activities were recorded at the same electrode sites as N170 and VPP, respectively. P400 was mainly detected at central electrode sites (C3, Cz, and C4). Because the last three components were sustained potentials, peak amplitude was a poor measure of their activity; we therefore measured the area under the curve during 250–350 ms (N250/post-VPP) and 350–500 ms (P400) after stimulus onset for individual subjects. 
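The two measurement schemes (peak amplitude for the transient N170, area under the curve for the sustained components) can be sketched as follows. The helper names and epoch layout are our own assumptions, consistent with the 250-Hz offline sampling rate and 100-ms pre-stimulus baseline described above:

```python
FS_HZ = 250        # offline sampling rate after resampling
BASELINE_MS = 100  # epoch starts 100 ms before stimulus onset

def _window(t_start_ms, t_end_ms):
    """Convert a post-stimulus time window to sample indices within the epoch."""
    a = (t_start_ms + BASELINE_MS) * FS_HZ // 1000
    b = (t_end_ms + BASELINE_MS) * FS_HZ // 1000
    return a, b

def peak_amplitude(erp, t_start_ms, t_end_ms):
    """Most negative value in the window, suited to the transient N170."""
    a, b = _window(t_start_ms, t_end_ms)
    return min(erp[a:b])

def mean_amplitude(erp, t_start_ms, t_end_ms):
    """Mean amplitude over the window, proportional to the area under the
    curve, suited to sustained components (N250, post-VPP, P400)."""
    a, b = _window(t_start_ms, t_end_ms)
    return sum(erp[a:b]) / (b - a)
```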
Results
Behavioral results
Manipulation of stimulus visibility affected subjects' performance in both the face and leaf detection tasks (Table 1). In the face detection task, a one-factor repeated measures ANOVA (SOA: 10 vs. 20 vs. 30 vs. 500 ms) with Greenhouse–Geisser correction for sphericity showed that subjects' hit rate (HR) decreased significantly from 87.4% (±21.7; mean ± SD) to 33.0% (±25.9) when the SOA was decreased from 500 ms to 10 ms (F(1.39, 16.64) = 19.06, p < 0.001). Subjects' correct rejection rate (CRR) for objects other than the face and leaf categories also decreased significantly, from 95.9% (±7.1) to 85.2% (±10.0), with the same change in SOA (F(1.64, 19.67) = 14.08, p < 0.001). Subjects showed similar behavior during the leaf detection task: HR values declined significantly (F(1.60, 19.20) = 40.75, p < 0.001) from 93.1% (±7.3) to 47.9% (±25.7), and CRR values declined significantly (F(1.87, 22.41) = 22.10, p < 0.001) from 93.7% (±8.9) to 74.6% (±12.7), when SOA decreased from 500 ms to 10 ms. In mask-only trials (i.e., SOA = 0 ms), subjects reported target presentation in 19.7% (±15.2) and 28.7% (±20.0) of face and leaf detection trials, respectively (chance level was 33.3%). 
Table 1
 
Subjects' hit rate and correct rejection rate, mean (SD) in percent, during face and leaf detection tasks.

                          Face detection task                          Leaf detection task
SOA                       10 ms        20 ms        30 ms        500 ms        10 ms        20 ms        30 ms        500 ms
Hit rate                  33.0 (25.9)  53.7 (23.9)  80.3 (15.9)  87.4 (21.7)   47.9 (25.7)  61.8 (25.5)  71.9 (22.5)  93.1 (7.3)
Correct rejection rate    85.2 (10.1)  87.7 (9.5)   92.5 (8.7)   95.9 (7.1)    74.6 (12.7)  80.2 (12.9)  87.6 (7.8)   93.7 (8.9)
ERP results
We detected N250 activity in occipitotemporal leads (mainly focused on the P7 and P8 leads), peaking around 300 ms after stimulus onset (Figure 1). We found that N250 activity (250–350 ms after stimulus onset) was sensitive to face stimulus visibility but not to the visibility of the other stimulus categories. Decreasing the SOA between face stimuli and mask resulted in smaller (more positive) N250 activity, and mask-only trials showed the most positive N250 activity (Figures 1 and 2a). In contrast to previous studies, which reported that N250 is stronger over the right hemisphere (e.g., Schweinberger et al., 2004), here we found that N250 tended to be more strongly modulated by face visibility over the left hemisphere. However, a three-factor repeated measures ANOVA (stimulus category (face vs. leaf vs. other objects) × SOA (10 vs. 20 vs. 30 vs. 500 ms) × hemisphere (right vs. left)) did not yield any significant effect of hemisphere (F(1, 12) = 0.89, p > 0.30), hemisphere × stimulus category interaction (F(1.50, 17.95) = 1.67, p > 0.20), hemisphere × SOA interaction (F(1.75, 21.04) = 1.58, p > 0.20), or hemisphere × stimulus category × SOA interaction (F(2.53, 30.31) = 0.49, p = 0.815). Rather, we found a significant effect of stimulus category (F(1.68, 20.19) = 4.04, p < 0.05) on the measured N250. We did not find a significant main effect of SOA (F(1.77, 21.32) = 0.66, p > 0.50), but there was a significant SOA × stimulus category interaction (F(3.33, 39.96) = 3.61, p < 0.05). In this and the following analyses, p-values and degrees of freedom were corrected for sphericity using the Greenhouse–Geisser method. 
Figure 1
 
ERP activities in left and right occipitotemporal leads in response to face and leaf stimulus categories. Shaded areas demonstrate interval used for N250 measurement.
Figure 2
 
The amount of measured N250 activities during (a) face and (b) leaf detection tasks for each specific SOA condition. In each graph, bars represent brain response to face (dark gray), leaf (light gray), object (white), and mask-only (black) stimuli. Error bars represent one standard error of mean.
To increase signal-to-noise ratio and because there was no significant laterality effect, we averaged N250 activity recorded from the left and right hemispheres and eliminated this factor from the rest of the analysis. We assessed the effect of SOA (10 vs. 20 vs. 30 vs. 500 ms) on N250 in response to each individual category separately. Application of one-factor repeated measures ANOVA to the face-related N250 yielded a significant effect of SOA ( F(2.36, 28.27) = 3.58, p < 0.05), while leaf ( F(2.084, 25.01) = 0.13, p > 0.80) and other object categories ( F(1.79, 21.48) = 0.48, p > 0.60) were not significantly affected by stimulus visibility. 
We also checked the effect of stimulus category (face vs. leaf vs. other objects) across the different SOAs. Interestingly, the most robust difference between N250 elicited by face and non-face stimuli was detected in the trials with the shortest SOA (10 ms). Here, a one-factor repeated measures ANOVA restricted to these trials yielded a significant effect of stimulus category (F(1.63, 19.57) = 6.14, p < 0.05). The same analysis applied to N250 at the other SOAs, even at SOA = 500 ms when stimuli were highly visible, did not show any significant effect of stimulus category (F < 3.3, p > 0.05). 
According to these results, N250 seems to index a face-selective process because it was only sensitive to the visibility of face stimuli and not to that of stimuli from the non-face categories. However, it is still possible that other object categories could also elicit N250 when they served as the task target, which would mean that N250 indexes target-related rather than face-selective processes. To rule out this possibility, we assessed N250 during the leaf detection blocks. Note that the presentation procedure was exactly the same for the face and leaf detection experiments; the only difference was the category of the task target (see Methods section). Assessing N250 during leaf detection trials, we found that, even in these trials, the occipitotemporal N250 remained sensitive to face visibility while the leaf visibility level did not affect N250 at all (Figure 2b). Here, a three-factor repeated measures ANOVA (SOA × stimulus category × hemisphere) yielded a similar significant SOA × stimulus category interaction (F(2.45, 29.43) = 5.41, p < 0.01) to that seen during face detection trials, and a marginally significant effect of stimulus category (F(1.77, 21.32) = 3.23, p = 0.065), while the other factors and interactions remained non-significant (p > 0.10). Follow-up application of one-factor repeated measures ANOVAs (SOA) to the N250 response to each individual category yielded a significant effect of SOA on the face-related N250 (F(1.27, 15.2) = 5.45, p < 0.05). More importantly, there was still no effect of stimulus visibility on N250 in response to the leaf (F(1.87, 22.39) = 1.00, p > 0.30) and other object (F(1.63, 19.61) = 0.11, p > 0.80) categories. 
Although these results show that N250 indexes a highly face-selective process, the relationship between this component and the preceding (i.e., VPP and N170) and following (i.e., P400) ERP components was still not clear. Consistent with many previous studies (e.g., see Itier & Taylor, 2004), we recorded N170 in occipitotemporal scalp areas (mainly focused on the P7 and P8 electrodes; Figure 3a), around 200 ms after stimulus onset (see Methods section). In addition, similar to previous studies (Joyce & Rossion, 2005), the VPP was detected in central, frontal, and prefrontal scalp areas; however, our brain activity mapping (Figure 3b) showed that the maximum amplitude occurred at the FP1 and FP2 electrodes rather than at the Cz (Joyce & Rossion, 2005) or Fz (Itier & Taylor, 2004) sites. In contrast to N250, the sensitivity of VPP and N170 to stimulus visibility was not confined to face stimuli and was observed in response to the face, leaf, and other stimulus categories (Figures 4a and 4b). For N170 and VPP, a two-factor repeated measures ANOVA (SOA × stimulus category) yielded a significant effect of SOA during both face and leaf detection tasks (F > 8.00, p < 0.01). Applying the same analysis to the VPP recorded at more posterior sites yielded similar results, but since the VPP was stronger in prefrontal areas we used only these potentials in the subsequent analysis. 
Figure 3
 
Exemplar ERP activities in (a) occipitotemporal (P7 and P8), (b) frontal (FP1 and FP2), and (c) central (C3, Cz, and C4) leads in response to stimuli with variable visibility. Brain activity mappings show activity distribution during (top) N170, (middle) VPP, and (bottom) P400 generation. In each row, arrows point to the recording sites. Shaded areas in top, middle, and bottom graphs demonstrate intervals used for measuring N250, post-VPP, and P400 activities, respectively.
Figure 4
 
Bar graphs depict the amount of measured (a) N170, (b) VPP, and (c) P400 activities during (right column) face and (left column) leaf detection tasks for each specific SOA condition. In each graph, bars represent brain response to face (dark gray), leaf (light gray), object (white), and mask-only (black) stimuli. Error bars represent one standard error of mean.
P400 was recorded subsequent to N250 at central sites (mainly focused on C3, Cz, and C4 electrodes) around 400 ms after stimulus onset ( Figure 3c). Similar to N170 and VPP components, and again in contrast to N250, P400 SOA sensitivity was observed during both tasks no matter whether the stimuli were selected from the target or distracter categories ( Figure 4c). Here again application of a two-factor repeated measures ANOVA (SOA × stimulus category) yielded a significant effect of stimulus visibility during both face and leaf ( F > 36.51, p < 0.001) detection trials. For these ERP components, we also found a significant effect of stimulus categories, in favor of face stimuli, during face detection trials ( F > 4.60, p < 0.05). For N170 and VPP, this effect mostly vanished during leaf detection trials ( F < 2.10, p > 0.10) and it was reversed in favor of leaf stimuli for P400 ( F(1.85, 22.15) = 7.27, p < 0.01). 
To assess the possible relationship between N250 and the preceding N170 and VPP components, we measured the correlation between these potentials during the two tasks and in response to stimuli with different levels of visibility (Figure 5). Interestingly, we found a tight correlation between N170 and N250 face-related responses during both face (r = 0.757, p < 10⁻¹⁰) and leaf (r = 0.645, p < 10⁻⁶) detection trials (Figure 5a, right). During both tasks, correlations between N170 and N250 activities in response to leaf (r < 0.329, p < 0.05) and other object (r > 0.39, p < 0.01) categories were also significant (Figures 5b and 5c, right). However, a Pearson test for comparing two correlation coefficients showed that these correlations were significantly weaker than the correlation between face-related N170 and N250 activities (p < 0.05). We did not find any significant correlation between VPP and N250 activities (∣r∣ < 0.22, p > 0.1) in response to the different stimulus categories (Figures 5a–5c, left). Thus, on the basis of these results, N170 and N250 activities are likely to be linked to each other. 
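A common way to compare two correlation coefficients, consistent with the comparison described above, is Fisher's r-to-z transformation followed by a normal-approximation z test. The sketch below is our own illustration; the exact test variant and sample sizes used in the study are assumptions:

```python
import math

def fisher_z(r):
    """Fisher r-to-z transformation: z = atanh(r)."""
    return math.atanh(r)

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed test for the difference between two independent
    correlation coefficients via Fisher's z. Returns (z, p)."""
    z = (fisher_z(r1) - fisher_z(r2)) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal p-value
    return z, p
```

For example, a large face-related correlation compared against a weaker object-related one (given enough observations) yields a small two-tailed p, matching the direction of the comparison reported above.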
Figure 5
 
Scatter plots demonstrate the amount of correlation between N250 activity and (right) N170 and (left) VPP activities during (a) face, (b) leaf, and (c) other object presentation. In each graph, circle and square dots represent individual subjects' brain activities during one specific SOA condition in face and leaf detection tasks, respectively. Solid and dashed lines also represent the best fitted regression lines for the activities during face and leaf detection tasks, respectively. Printed numbers show the amount of correlation during (top) face and (bottom) leaf detection tasks.
We similarly measured the correlation between N250 and the following P400 component to see whether their underlying modules were related to each other (Figure 6). Applying the same analysis as in the previous paragraph to the N250 and P400 components did not show any significant correlation between face-related N250 and P400 activities during the face and leaf detection tasks (∣r∣ < 0.24, p > 0.1). Interestingly, however, P400 showed a significant correlation with the N170 (r < −0.40, p < 0.01) and VPP (r > 0.49, p < 0.01) components during both face and leaf detection trials and in response to all stimulus categories (Figure 7). Here, P400 tended to be more strongly correlated with VPP than with N170, but a comparison of correlation coefficients did not yield any significant difference (p > 0.05). 
Figure 6
 
The amount of correlation between N250 activity and P400 activities during (a) face, (b) leaf, and (c) other object presentation. Other figure information is the same as Figure 5.
Figure 7
 
The amount of correlation between P400 activity and (right) N170 and (left) VPP activities during (a) face, (b) leaf, and (c) other object presentation. Other figure information is the same as Figure 5.
We also repeated these correlation analyses with normalized N170, VPP, N250, and P400 activities. To do so, each ERP component recorded from individual subjects was separately normalized using a linear transformation (Normalized Activity = [Activity − Min] / [Max − Min]). With this transformation, the maximum activity recorded across the experimental conditions became 1 while the minimum became 0. Repeating the correlation analysis yielded very similar results to those described in the previous two paragraphs. Here again, face-related normalized N250 activities were significantly correlated with the corresponding N170 activities during both face and leaf detection tasks, while leaf- and other object-related activities showed weaker correlations overall (Table 2). Normalized P400 was not correlated with the corresponding N250 values (Table 2), but it showed a significant correlation with VPP and, to a lesser extent, with N170 (Table 3). 
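The linear transformation above is ordinary min-max normalization; a minimal sketch of the stated formula:

```python
def min_max_normalize(values):
    """Normalized Activity = (Activity - Min) / (Max - Min): the minimum
    of the measured activities maps to 0 and the maximum maps to 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```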
Table 2
 
Correlation coefficients measured between normalized N250 and other category-selective ERP components.
        Face detection                 Leaf detection
        Face      Leaf      Others    Face      Leaf      Others
N170    0.475**   0.384**   0.268     0.658**   0.210     0.367**
VPP     −0.207    −0.285*   0.126     0.178     −0.197    0.177
P400    −0.139    0.317*    0.259     0.240     0.190     0.188

*Correlation is significant at the 0.05 level.
**Correlation is significant at the 0.01 level.
Table 3
 
Correlation coefficients measured between normalized P400 and preceding N170 and VPP components.
        Face detection                    Leaf detection
        Face       Leaf       Others     Face       Leaf       Others
N170    −0.511**   −0.127     −0.193     −0.384**   −0.409**   −0.067
VPP     0.850**    0.772**    0.685**    0.708**    0.669**    0.784**

*Correlation is significant at the 0.05 level.
**Correlation is significant at the 0.01 level.
Finally, we checked whether an ERP component similar to N250 could be detected after the VPP component. Since previous studies have suggested that the N170 and VPP components may be generated by the same neural modules, an N250-like deflection might be expected to follow VPP as well. We therefore measured frontal activity during 250–350 ms after stimulus onset at the FP1 and FP2 electrode sites, where VPP was measured, and refer to it as post-VPP activity (see Methods section). In contrast to the N170, VPP, and N250 potentials, post-VPP activity was not systematically affected by stimulus visibility; the only prominent difference was between trials with SOA = 500 ms and the rest of the trials (Figures 3b and 8). During both face and leaf detection tasks, a two-factor repeated measures ANOVA (stimulus category (face vs. leaf vs. other objects) × SOA (10 vs. 20 vs. 30 vs. 500 ms)) yielded a significant effect of stimulus visibility (F > 4.78, p < 0.05) on post-VPP activities, with no significant interaction between the two factors (F < 2.65, p > 0.10). This effect was mainly due to an increase in ERP amplitude during trials with SOA = 500 ms relative to trials with shorter SOAs and was observed similarly in response to face and non-face categories. The test also yielded a significant effect of stimulus category (F > 6.56, p < 0.05) during both tasks. However, in contrast to N250, which showed a category effect even at SOA = 10 ms, a one-factor repeated measures ANOVA (stimulus category) showed that, during both tasks, this effect was limited to trials with SOA = 500 ms (F > 6.61, p < 0.05) and was not observed during other trials (F < 2.50, p > 0.10). Thus, unlike the N170 and VPP components, the N250 and post-VPP potentials seem to be generated by two different sources.
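The category × SOA repeated measures ANOVA described above can be sketched on simulated data. The paper does not name its analysis software; `statsmodels`' `AnovaRM` is one standard implementation, and the subject count, amplitudes, and effect size below are hypothetical.

```python
# Hedged sketch: two-factor repeated measures ANOVA (category x SOA)
# on simulated post-VPP-like amplitudes, one observation per cell.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
categories = ["face", "leaf", "object"]
soas = [10, 20, 30, 500]

rows = []
for subject in range(12):          # hypothetical subject count
    for cat in categories:
        for soa in soas:
            # Amplitude larger at SOA = 500 ms, as in the reported effect.
            amp = rng.normal(2.0, 0.5) + (1.5 if soa == 500 else 0.0)
            rows.append({"subject": subject, "category": cat,
                         "soa": soa, "amp": amp})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="amp", subject="subject",
              within=["category", "soa"]).fit()
print(res.anova_table)  # F and p for category, soa, and their interaction
```

With this simulated SOA effect and no built-in category effect, the table should show a significant SOA term, mirroring the pattern the paper reports for post-VPP activity.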
Figure 8
 
The amount of measured post-VPP activities during (a) face and (b) leaf detection tasks for each specific SOA condition. In each graph, bars represent brain response to face (dark gray), leaf (light gray), object (white), and mask-only (black) stimuli. Error bars represent one standard error of mean.
Discussion
In this study, we assessed the stimulus selectivity of the N250 component and its relation to the preceding and following category-selective ERP components (i.e., N170, VPP, and P400). Our results indicated that N250 was sensitive to the visibility of face stimuli but showed no such sensitivity in response to stimuli from non-face categories. This lack of sensitivity was not confined to face detection trials; it was similarly observed during the leaf detection task, even in response to the task target category (i.e., leaves). Moreover, we found a correlation between N170 and N250, stronger in response to face stimuli, during both face and leaf detection trials, but we failed to find the same correlation between VPP and N250 or between N250 and P400 activities.
As we mentioned in the Introduction section, previous studies have usually used stimulus repetition and priming to assess the N250 component. These procedures result in a modulated N250 relative to the unprimed presentation of the same stimulus (Schweinberger et al., 1995, 2002), probably due to stimulus suppression and more selective activity in response to repeated stimuli (Desimone, 1996). As far as we know, only one previous study has directly assessed the selectivity of the N250 component; its authors showed that face-related N250 is affected by repetition of face stimuli, whereas repetition of stimuli from other, non-face categories does not affect it (Schweinberger et al., 2004). In our study, by using stimuli whose visibility varied from trial to trial, we have shown that N250 responses to face and non-face stimuli are significantly distinguishable when stimuli are poorly visible, owing to a more positive face-related N250 activity. Moreover, while face-related N250 magnitude became more negative as face stimulus visibility increased, N250 magnitude did not change significantly in response to non-face objects, so the difference between face- and object-related N250 gradually diminished. This explains why previous studies found no evidence of N250 face selectivity when stimuli were highly visible and no priming procedure was used (e.g., see Bentin et al., 1996).
The lack of N250 sensitivity to the visibility of non-face objects is an intriguing phenomenon. Although N170 and VPP were both face selective, they also showed significant responses to other stimulus categories, and their peak amplitudes varied with non-face stimulus visibility. Interestingly, intracranial studies (e.g., Allison et al., 1999) have shown that N200 responses to face and non-face stimuli are generated by different competing modules whose activities inhibit their counterparts (Allison, Puce, & McCarthy, 2002). Here, however, N250 seemed to be sensitive only to face stimuli, and even task switching, which increased leaf relevance and the corresponding N170 activity, did not increase N250 sensitivity to the visibility of non-face objects. Thus, N250 seems to index a highly face-specific process that does not apply to other object categories even when they serve as the task target.
Moreover, our results indicated that N250 sensitivity to the level of face visibility is preserved even when attention is directed to other stimulus categories. Consistent with this finding, previous studies have shown that N250 sensitivity to immediate famous face repetition remains intact even when attention is engaged by a letter detection task (Neumann & Schweinberger, 2008). This could be due to endogenous attention being biased toward face stimuli even when they are task irrelevant (Bindemann et al., 2007; Langton et al., 2008; Theeuwes & Van der Stigchel, 2006) or to the existence of face-specific attentional resources (Bindemann, Burton, & Jenkins, 2005; Jenkins, Lavie, & Driver, 2003).
Despite this evidence for the face selectivity of N250, the nature of the processes underlying this component is not yet clear. Because previous studies have suggested that N250 is sensitive to face identity and identification-related processes (Schweinberger et al., 1995, 2002; Tanaka et al., 2006), it is possible that this ERP component indexes information processing necessary for encoding the identity of faces but not of other, non-face stimuli. For example, previous psychophysical studies (Diamond & Carey, 1986; Maurer, Le Grand, & Mondloch, 2002) have shown that two forms of configural information are necessary for face recognition. The first is first-order relational information, which refers to feature (e.g., eyes, nose, etc.) position within the face context. Because N170 is sensitive to internal face features (Bentin et al., 1996) and its latency is delayed by their manipulation and reorganization relative to face contours (Bentin et al., 1996; Eimer, 2000; Zion-Golumbic & Bentin, 2007), it is possible that this component indexes first-order relational information (Bentin et al., 2006). Nevertheless, other stimulus categories are also constructed from features, and there might be counterpart N170 modules responsible for encoding first-order relational information for those objects. The second is second-order relational information, which refers to the finely parameterized spatial relations between face features (Diamond & Carey, 1986; Leder & Bruce, 2000; Maurer et al., 2002). While first-order relational information is necessary for recognition of both face and non-face images, second-order relational information seems to be specific to face images (Tanaka, 2001; Tanaka & Sengco, 1997). Our results suggest that N250 indexes the processes responsible for encoding second-order relational information.
The correlation between N170 and N250 supports the notion that these processes are triggered by preceding N170 activity, which is responsible for first-order configural encoding (Bentin et al., 2006). This notion is also supported by previous studies showing N170 degeneration in prosopagnosic patients with severe face recognition impairments (Bentin et al., 1999; Eimer & McCarthy, 1999).
A further point is that, according to Bruce and Young's (1986) model of face recognition, the activity (code) generated by the units responsible for structural encoding (N170 or VPP) should be streamed into the modules and processes responsible for stimulus identification. Consistent with this model, we found a strong correlation between the modules underlying the N170 and N250 components, and this correlation was much stronger during face stimulus presentation. Although this correlation does not necessarily prove an effective link between these two processes, it strengthens the possibility that the two modules are directly and/or indirectly connected (Friston, 1994).
An alternative account of the correlation between N170 and N250, other than a direct or indirect relationship between their neural substrates, is that it results from N170 signal propagation through time. N170 and N250 are recorded successively from the occipitotemporal electrode sites, and since it takes time for brain potentials to return to baseline (Itier & Taylor, 2004), it is possible that N170 activity contaminated the following N250 potential, producing a significant correlation between the two components. However, if signal propagation were responsible for the correlation between N170 and N250, we would expect the same correlation between VPP and post-VPP activities. Yet we found no activity similar to N250 following VPP, and despite the similarities between N170 and VPP, the characteristics of N250 and post-VPP activity differ from each other. Moreover, we found a significantly stronger correlation between N170 and N250 activities in response to face stimuli than to other stimulus categories. Because other stimulus categories also generated strong N170 components, especially during the leaf detection task, the weaker correlation between object-related N170 and N250 potentials indicates that signal propagation by itself cannot account for the correlation between the N170 and N250 components.
Our data also provide evidence for the existence of two different face-processing pathways, consisting of the N170–N250 and N170/VPP–P400 components, respectively. The existence of separate face-processing pathways is highly consistent with models of face processing (Bruce & Young, 1986). In line with such models, we found a common node between the two pathways (i.e., N170), which is responsible for structural encoding of stimuli (Bentin et al., 1996; Bentin & Deouell, 2000; Eimer, 2000), and the following components (i.e., N250 and P400) show significant correlations with activity generated in this node. Interestingly, N250 and P400 seem not to be linked to each other, suggesting that these processing pathways act independently, at least during category discrimination tasks.
In conclusion, our data suggest that N250 is a highly face-selective ERP component whose activity seems to index processes necessary for processing faces but not other, non-face objects. Our results also support a relationship between the modules underlying face-related N170 and the following N250 and P400 activities, while showing no evidence of a link between the modules underlying N250 and P400. These independent links are consistent with a model of face recognition (Bruce & Young, 1986) that predicts different functional connections and pathways between the modules responsible for structural encoding (indexed by N170) and other modules responsible for further face processing.
Acknowledgments
Commercial relationships: none. 
Corresponding author: Dr. Hossein Esteky. 
Email: esteky@ipm.ir. 
Address: Institute for Research in Fundamental Sciences (IPM), the School of Cognitive Sciences, Niavaran, P.O. Box 19395-5746, Tehran, Iran. 
References
Allison, T. Puce, A. Spencer, D. D. McCarthy, G. (1999). Electrophysiological studies of human face perception I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9, 415–430. [PubMed] [Article] [CrossRef] [PubMed]
Allison, T. Puce, A. McCarthy, G. (2002). Category-sensitive excitatory and inhibitory processes in human extrastriate cortex. Journal of Neurophysiology, 88, 2864–2868. [PubMed] [Article] [CrossRef] [PubMed]
Bentin, S. Allison, T. Puce, A. Perez, A. McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565. [CrossRef] [PubMed]
Bentin, S. Deouell, L. Y. Soroker, N. (1999). Selective visual streaming in face recognition: Evidence from developmental prosopagnosia. Neuroreport, 10, 823–827. [PubMed] [CrossRef] [PubMed]
Bentin, S. Deouell, L. Y. (2000). Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cognitive Neuropsychology, 17, 35–55. [CrossRef] [PubMed]
Bentin, S. Golland, Y. Flevaris, A. Robertson, L. C. Moscovitch, M. (2006). Processing the trees and the forest during initial stages of face perception: Electrophysiological evidence. Journal of Cognitive Neuroscience, 18, 1406–1421. [PubMed] [CrossRef] [PubMed]
Bindemann, M. Burton, A. M. Jenkins, R. (2005). Capacity limits for face processing. Cognition, 98, 177–197. [PubMed] [CrossRef] [PubMed]
Bindemann, M. Burton, A. M. Langton, S. R. H. Schweinberger, S. R. Doherty, M. J. (2007). The control of attention to faces. Journal of Vision, 7, (10):15, 1–8, http://journalofvision.org/7/10/15/, doi:10.1167/7.10.15. [PubMed] [Article] [CrossRef] [PubMed]
Bötzel, K. Schulze, S. Stodieck, R. G. (1995). Scalp topography and analysis of intracranial sources of face-evoked potentials. Experimental Brain Research, 104, 135–143. [PubMed] [CrossRef] [PubMed]
Breitmeyer, B. G. Ogmen, H. (2000). Recent models and findings in visual backward masking: A comparison, review, and update. Perception & Psychophysics, 62, 1572–1595. [PubMed] [CrossRef] [PubMed]
Bruce, V. Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327. [PubMed] [CrossRef]
Campanella, S. Hanoteau, C. Dépy, D. Rossion, B. Bruyer, R. Crommelinck, M. (2000). Right N170 modulation in a face discrimination task: An account for categorical perception of familiar faces. Psychophysiology, 37, 796–806. [PubMed] [CrossRef] [PubMed]
Curran, T. Tanaka, J. W. Weiskopf, D. M. (2002). An electrophysiological comparison of visual categorization and recognition memory. Cognitive, Affective & Behavioral Neuroscience, 2, 1–18. [PubMed] [Article] [CrossRef] [PubMed]
Desimone, R. (1996). Neural mechanisms for visual memory and their role in attention. Proceedings of the National Academy of Sciences of the United States of America, 93, 13494–13499. [PubMed] [Article] [CrossRef] [PubMed]
Diamond, R. Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117. [PubMed] [CrossRef] [PubMed]
Eimer, M. McCarthy, R. A. (1999). Prosopagnosia and structural encoding of faces: Evidence from event-related potentials. Neuroreport, 10, 255–259. [PubMed] [CrossRef] [PubMed]
Eimer, M. (2000). The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport, 11, 2319–2324. [PubMed] [CrossRef] [PubMed]
Friston, K. J. (1994). Functional and effective connectivity in neuroimaging: A synthesis. Human Brain Mapping, 2, 56–78. [CrossRef]
Haxby, J. V. Hoffman, E. A. Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233. [PubMed] [CrossRef] [PubMed]
Henson, R. N. Goshen-Gottstein, Y. Ganel, T. Otten, L. J. Quayle, A. Rugg, M. D. (2003). Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cerebral Cortex, 13, 793–805. [PubMed] [Article] [CrossRef] [PubMed]
Itier, R. J. Taylor, M. J. (2002). Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: A repetition study using ERPs. Neuroimage, 15, 353–372. [PubMed] [CrossRef] [PubMed]
Itier, R. J. Taylor, M. J. (2004). N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cerebral Cortex, 14, 132–142. [PubMed] [Article] [CrossRef] [PubMed]
Jeffreys, D. A. Tukmachi, E. S. (1992). The vertex‐positive scalp potential evoked by faces and by objects. Experimental Brain Research, 91, 340–350. [PubMed] [PubMed]
Jeffreys, D. A. (1993). The influence of stimulus orientation on the vertex positive scalp potential evoked by faces. Experimental Brain Research, 96, 163–172. [PubMed] [CrossRef] [PubMed]
Jeffreys, D. A. (1996). Evoked potential studies of face and object processing. Visual Cognition, 3, 1–38. [CrossRef]
Jenkins, R. Lavie, N. Driver, J. (2003). Ignoring famous faces: Category-specific dilution of distractor interference. Perception & Psychophysics, 65, 298–309. [PubMed] [Article] [CrossRef] [PubMed]
Joyce, C. Rossion, B. (2005). The face-sensitive N170 and VPP components manifest the same brain processes: The effect of reference electrode site. Clinical Neurophysiology, 116, 2613–2631. [PubMed] [CrossRef] [PubMed]
Langton, S. R. H. Law, A. S. Burton, A. M. Schweinberger, S. R. (2008). Attention capture by faces. Cognition, 107, 330–342. [PubMed] [CrossRef] [PubMed]
Leder, H. Bruce, V. (2000). When inverted faces are recognized: The role of configural information in face recognition. Quarterly Journal of Experimental Psychology, 53, 513–536. [PubMed] [CrossRef] [PubMed]
Martens, U. Schweinberger, S. R. Kiefer, M. Burton, A. M. (2006). Masked and unmasked electrophysiological repetition effects of famous faces. Brain Research, 1109, 146–157. [PubMed] [CrossRef] [PubMed]
Maurer, D. Le Grand, R. Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260. [PubMed] [CrossRef] [PubMed]
Neumann, M. F. Schweinberger, S. R. (2008). N250r and N400 ERP correlates of immediate famous face repetition are independent of perceptual load. Brain Research, 1239, 181–190. [PubMed] [CrossRef] [PubMed]
Paller, K. A. Gonsalves, B. Grabowecky, M. Bozic, V. S. Yamada, S. (2000). Electrophysiological correlates of recollecting faces of known and unknown individuals. Neuroimage, 11, 98–110. [PubMed] [CrossRef] [PubMed]
Reddy, L. Reddy, L. Koch, C. (2006). Face identification in the near‐absence of focal attention. Vision Research, 46, 2336–2343. [PubMed] [CrossRef] [PubMed]
Rossion, B. Gauthier, I. Tarr, M. J. Despland, P. A. Bruyer, R. Linotte, S. (2000). The N170 occipito‐temporal component is delayed and enhanced to inverted faces but not to inverted objects: An electrophysiological account of face-specific processes in the human brain. Neuroreport, 11, 69–74. [PubMed] [CrossRef] [PubMed]
Seeck, M. Grüsser, O. J. (1992). Category-related components in visual evoked potentials: Photographs of faces, persons, flowers, and tools as stimuli. Experimental Brain Research, 92, 338–349. [PubMed] [CrossRef] [PubMed]
Schweinberger, S. R. Pfutze, E. M. Sommer, W. (1995). Repetition priming and associative priming of face recognition: Evidence from event-related potential. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 722–736. [CrossRef]
Schweinberger, S. R. Pickering, E. C. Jentzsch, I. Burton, A. M. Kaufmann, J. M. (2002). Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research, 14, 398–409. [PubMed] [CrossRef] [PubMed]
Schweinberger, S. R. Huddy, V. Burton, A. M. (2004). N250r: A face-selective brain response to stimulus repetitions. Neuroreport, 15, 1501–1505. [PubMed] [CrossRef] [PubMed]
Tanaka, J. W. Sengco, J. A. (1997). Features and their configuration in face recognition. Memory & Cognition, 25, 583–592. [PubMed] [CrossRef] [PubMed]
Tanaka, J. W. (2001). The entry point of face recognition: Evidence for face expertise. Journal of Experimental Psychology: General, 130, 534–543. [PubMed] [CrossRef] [PubMed]
Tanaka, J. W. Curran, T. Porterfield, A. L. Collin, D. (2006). Activation of preexisting and acquired face representations: The N250 event-related potentials as an index of face familiarity. Journal of Cognitive Neuroscience, 18, 1488–1497. [PubMed] [CrossRef] [PubMed]
Theeuwes, J. Van der Stigchel, S. (2006). Faces capture attention: Evidence from inhibition of return. Visual Cognition, 13, 657–665. [CrossRef]
Trenner, M. U. Schweinberger, S. R. Jentzsch, I. Sommer, W. (2004). Face repetition effects in direct and indirect tasks: An event-related brain potentials study. Cognitive Brain Research, 21, 388–400. [PubMed] [CrossRef] [PubMed]
Wiggs, C. L. Martin, A. (1998). Properties and mechanisms of perceptual priming. Current Opinion in Neurobiology, 8, 227–233. [PubMed] [CrossRef] [PubMed]
Zion-Golumbic, E. Bentin, S. (2007). Dissociated neural mechanisms for face detection and configural encoding: Evidence from N170 and induced gamma-band oscillation effects. Cerebral Cortex, 17, 1741–1749. [PubMed] [Article] [CrossRef] [PubMed]
Figure 1
 
ERP activities in left and right occipitotemporal leads in response to face and leaf stimulus categories. Shaded areas demonstrate interval used for N250 measurement.
Figure 2
 
The amount of measured N250 activities during (a) face and (b) leaf detection tasks for each specific SOA condition. In each graph, bars represent brain response to face (dark gray), leaf (light gray), object (white), and mask-only (black) stimuli. Error bars represent one standard error of mean.
Figure 3
 
Exemplar ERP activities in (a) occipitotemporal (P7 and P8), (b) frontal (FP1 and FP2), and (c) central (C3, Cz, and C4) leads in response to stimuli with variable visibility. Brain activity mappings show activity distribution during (top) N170, (middle) VPP, and (bottom) P400 generation. In each row, arrows point to the recording sites. Shaded areas in top, middle, and bottom graphs demonstrate intervals used for measuring N250, post-VPP, and P400 activities, respectively.
Figure 4
 
Bar graphs depict the amount of measured (a) N170, (b) VPP, and (c) P400 activities during (right column) face and (left column) leaf detection tasks for each specific SOA condition. In each graph, bars represent brain response to face (dark gray), leaf (light gray), object (white), and mask-only (black) stimuli. Error bars represent one standard error of mean.
Figure 5
 
Scatter plots demonstrate the amount of correlation between N250 activity and (right) N170 and (left) VPP activities during (a) face, (b) leaf, and (c) other object presentation. In each graph, circle and square dots represent individual subjects' brain activities during one specific SOA condition in face and leaf detection tasks, respectively. Solid and dashed lines also represent the best fitted regression lines for the activities during face and leaf detection tasks, respectively. Printed numbers show the amount of correlation during (top) face and (bottom) leaf detection tasks.
Table 1
 
Subjects' hit rate and correct rejection rate during face and leaf detection tasks.
                                     Face detection task                                    Leaf detection task
SOA                                  10 ms        20 ms       30 ms       500 ms            10 ms        20 ms        30 ms        500 ms
Hit rate, mean (SD)                  33.0 (25.9)  53.7 (23.9) 80.3 (15.9) 87.4 (21.7)       47.9 (25.7)  61.8 (25.5)  71.94 (22.5) 93.1 (7.3)
Correct rejection rate, mean (SD)    85.2 (10.1)  87.7 (9.5)  92.5 (8.7)  95.9 (7.1)        74.6 (12.7)  80.2 (12.9)  87.6 (7.8)   93.7 (8.9)