Research Article  |   June 2009
The initial representation of individual faces in the right occipito-temporal cortex is holistic: Electrophysiological evidence from the composite face illusion
Corentin Jacques, Bruno Rossion
Journal of Vision, June 2009, Vol. 9(6):8. https://doi.org/10.1167/9.6.8
Abstract

Identifying a facial feature (e.g., the eyes) is influenced by the position and identity of other features (e.g., the mouth) of the face, supporting the view that an individual face is represented as a whole in the human brain. To clarify how early in the time-course of face processing this holistic individual representation is accessed, we recorded event-related potentials during an adaptation paradigm of the composite face illusion (CFI). Observers performed a matching task on the top halves of two faces presented sequentially. For each face pair, the top and bottom face halves could be both identical, both different, or only the bottom half differed. The signal was larger over the right occipito-temporal cortex at about 160 ms (N170) when the attended top half differed between the two faces than when identical top halves were repeated. Crucially, a larger N170 was also found when the top halves of the two faces were the same, yet the observers had the illusion that they differed (CFI). This effect was not found when the two face halves were spatially misaligned. These observations indicate that the earliest perceptual representation of an individual face in the human brain is holistic rather than based on independent face parts.

Introduction
The speed at which complex visual stimuli (e.g., objects, faces) are categorized in the human brain is a topic of high interest in cognitive neuroscience (Rossion & Jacques, 2008; Thorpe, Fize, & Marlot, 1996). Of particular interest is the question of when, and along which processing time-course, different features of a visual stimulus are integrated into a global representation of the stimulus. A human face is perhaps the most relevant visual stimulus to address this question, given that it is composed of multiple cues such as external and internal features (eyes, ears, nose, mouth, etc.), the relative distances between these features (e.g., inter-ocular distance), reflectance properties of the skin surface (color and texture), and the three-dimensional global shape of the face. It is generally acknowledged that these facial cues are not processed independently from each other but that the individual face is perceived as a gestalt, or a "holistic" representation (Farah, Wilson, Drain, & Tanaka, 1998; Galton, 1883; Rossion, 2008b; Sergent, 1984; Tanaka & Farah, 1993). Evidence that an individual face is processed holistically comes from the demonstration that the perception of a facial feature is affected by alterations to the identity or the position of one or several other features of the face (e.g., Farah et al., 1998; Sergent, 1984; Tanaka & Farah, 1993; Young, Hellawell, & Hay, 1987). The most compelling illustration of this phenomenon comes from an adaptation of the so-called "composite face effect" (Young et al., 1987) to create a visual illusion (Rossion, 2008b) in which identical top halves of faces are perceived as being slightly different if they are aligned with different bottom halves (Figure 1). This composite face illusion (CFI) is a particularly clear demonstration that the features of a face (here the two halves of a single face) cannot be perceived independently from each other, but are rather integrated into an undecomposed whole representation. 
Figure 1
 
Stimuli and time-line of the experiment. A. Examples of pairs of composite faces (adapting and test) for the 6 conditions used in the experiment. Note that the top parts of the composite faces in the middle row are perceived as being slightly different in the aligned face format (left) despite the fact that they are identical and that only the bottom parts of the two composite faces are different (= composite face illusion). Notably, this illusion does not occur in the misaligned format (right). B. Time-line of the stimulus sequence in each trial.
However, it remains to be determined whether a face stimulus is first/initially represented as a collection of independent features which are later integrated into a holistic representation (i.e., a hierarchical mode of face representation, e.g., Haxby, Hoffman, & Gobbini, 2000; Jiang et al., 2006; Ullman, 2007), or if the initial representation of an individual face in the human brain is already global/holistic (e.g., Sergent, 1986). 
Here we addressed this question by recording scalp event-related potentials (ERP) during an identity adaptation paradigm (Caharel, d'Arripe, Ramon, Jacques, & Rossion, 2009; Ewbank, Smith, Hancock, & Andrews, 2008; Harris & Nakayama, 2008; Jacques, d'Arripe, & Rossion, 2007; see also Kovacs et al., 2006 for a general adaptation paradigm to the face category) applied to the composite face illusion. Specifically, in each trial observers were presented with an adapting face composed of distinct top and bottom parts (i.e. a composite face) for 3 seconds, rapidly followed by a second (test) composite face. The adapting and test composite faces could either (1) have identical top and bottom face halves (same), (2) differ only in their bottom halves (bottom different), or (3) differ in both their top and bottom halves (top + bottom different) (Figure 1). In addition, adapting and test faces were presented either both in an aligned or both in a misaligned format, since it is known that spatial misalignment of the face halves disrupts holistic processing (Young et al., 1987; Figure 1). In line with previous adaptation studies, we expected that the presentation of two different composite faces (i.e. top + bottom different) would lead to a larger amplitude (i.e., a release from adaptation) as early as the face-sensitive N170 component at occipito-temporal recording sites (Jacques et al., 2007). Most importantly, if the initial representation of individual faces is holistic (i.e. global), we expected a significant release from adaptation on the N170 when the fixated top parts of the adapting and test faces were identical but were perceived as different (i.e. bottom different condition in the aligned format). If this effect takes place as early as individual face representations are accessed (i.e., at the N170; Jacques et al., 2007; Jacques & Rossion, 2006), this would support the view that the initial representation of an individual face in the human brain is inherently global/holistic. However, if the early sensitivity to facial identity at the level of the N170 is based on independent individual facial features, then we should not observe a larger release from identity adaptation in this condition (bottom different condition in the aligned format) as early as the N170 as compared to misaligned faces. Such differential effects, if not observed at the N170, may rather take place at later time points during the course of face processing. 
Methods
Subjects
Seventeen paid volunteers (5 men, 1 left-handed, mean age 21.6 ± 1.7 years) participated in the experiment. All participants had normal or corrected-to-normal vision. 
Stimuli
Stimuli used in this experiment were constructed from 30 top parts of faces (17 males and 13 females) and 43 bottom parts of faces (23 males and 20 females). Face tops and bottoms were taken from whole faces that were cropped to remove background, hair and everything below the chin, and horizontally cut into two halves just above the nostrils. All resulting top and bottom parts were converted to grayscale and equated for mean pixel luminance. Composite face stimuli were then created by pairing each face top with 2 different bottom parts (some bottom parts being used for more than one top), resulting in 30 pairs of stimuli, each pair with an identical top part but different bottom parts (see Figure 1). The pairs of composite faces were either formed by top and bottom face parts precisely aligned with each other ("aligned" format) or were formed by the very same top and bottom parts with the bottom slightly offset to the right (30 pixels = 0.6° of visual angle). These stimuli with offset bottom parts are commonly called "misaligned" composite faces. Aligned faces subtended 2.5° (width) × 3.4° (height) of visual angle, while misaligned faces subtended 3.1° × 3.4° (see Figure S1). A small gap of 3 pixels in height (<0.1°) between the top and the bottom parts was used so that participants could easily identify the border between the top and bottom parts in the aligned condition (e.g. Goffaux & Rossion, 2006). 
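For readers who wish to reconstruct a comparable stimulus set, the sketch below illustrates the construction steps described above (grayscale conversion, equation of mean pixel luminance, the 3-pixel gap, and the 30-pixel bottom offset for the misaligned format). It is a minimal illustration in Python using NumPy and Pillow; the file names and the make_composite helper are hypothetical and not part of the original materials.

```python
import numpy as np
from PIL import Image

GAP_PX = 3        # vertical gap between halves (<0.1 deg of visual angle)
OFFSET_PX = 30    # horizontal offset of the bottom half (0.6 deg) for misaligned faces

def equate_mean_luminance(img, target_mean=128.0):
    """Convert to grayscale and shift pixel values toward a common mean luminance (a sketch)."""
    arr = np.asarray(img.convert("L"), dtype=float)
    arr = np.clip(arr - arr.mean() + target_mean, 0, 255)
    return Image.fromarray(arr.astype(np.uint8))

def make_composite(top_path, bottom_path, aligned=True):
    """Paste a face top above a face bottom, aligned or misaligned (hypothetical helper)."""
    top = equate_mean_luminance(Image.open(top_path))
    bottom = equate_mean_luminance(Image.open(bottom_path))
    dx = 0 if aligned else OFFSET_PX
    width = max(top.width, bottom.width + dx)
    height = top.height + GAP_PX + bottom.height
    canvas = Image.new("L", (width, height), color=200)  # light gray background
    canvas.paste(top, (0, 0))
    canvas.paste(bottom, (dx, top.height + GAP_PX))
    return canvas

# Example: one aligned and one misaligned composite built from the same two halves
aligned_face = make_composite("top_01.png", "bottom_01.png", aligned=True)
misaligned_face = make_composite("top_01.png", "bottom_01.png", aligned=False)
```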
Procedure
Stimuli were displayed against a light gray background using E-prime 1.1 (PST) at a 110 cm viewing distance. In each trial, two face stimuli were presented sequentially, either both in the aligned or both in the misaligned format. Both stimuli were presented at the center of the screen and overlapped completely. A trial started with a fixation point displayed at the center of the screen for 200 ms (Figure 1). Two hundred milliseconds after the offset of the fixation point (randomized between 100 and 300 ms), the first face (adapting stimulus) appeared for about 2800 ms (2600 to 3000 ms), with the top part being presented at fixation. The offset of the adapting face was followed by an interval of random duration (150 to 350 ms), and then by the second (test) face, presented for 200 ms. An inter-trial interval of about 1600 ms (1500 to 1700 ms) separated the offset of the test face from the next trial. The identities of the top and bottom parts were manipulated between the adapting and the test face in three conditions. In one third of the trials, both the top and bottom parts of the test face stimulus were identical to those of the adapting face (Figure 1, top row—"same" condition). In a second third of the trials, only the bottom part of the test face stimulus was different from that of the adapting face, the top part being identical (Figure 1, middle row—"bottom different" condition). This condition was critical as it leads to the perception of a new identity in the aligned but not in the misaligned format. For the last third of the trials, both the top and bottom parts of the test face stimulus were different from those of the adapting face (Figure 1, bottom row—"top + bottom different" condition). Importantly, the same pairs of faces were presented in the aligned and misaligned formats, so that any effect cannot be attributed to the particular pairing of adapting and test faces. Participants were instructed to attend only to the top part of the face (appearing at fixation) and to press one of two response keys (keys counterbalanced across subjects) corresponding to whether the adapting and test top parts were the same or different pictures. Previous studies have shown that participants maintain gaze fixation on the top part of the face according to these instructions, and nevertheless show a strong composite face effect in this condition (de Heering, Rossion, Turati, & Simion, 2008). Note that, according to these instructions, both the same and bottom different conditions required a "same" response and only the top + bottom different condition required a "different" response. Participants performed 90 trials per condition, resulting in 540 trials. Consequently, each face top part was repeated 18 times as an adapter and 18 times as a test stimulus, and each face bottom part was repeated either 12 or 18 times as an adapter and either 12 or 18 times as a test stimulus. The order of conditions was randomized within each block. 
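For illustration, the trial structure described above can be summarized in a short sketch that builds the randomized list of 540 trials with the jittered durations used in the experiment. This Python sketch is only an illustration of the design; the helper and its parameter names are assumptions and are not taken from the original E-prime script.

```python
import random

# The six conditions (3 adaptation levels x 2 alignment formats), 90 trials each = 540 trials.
ADAPTATION = ["same", "bottom_different", "top_bottom_different"]
ALIGNMENT = ["aligned", "misaligned"]

def build_trial_list(n_per_condition=90, seed=None):
    """Build a randomized trial list with the jittered durations described above (a sketch)."""
    rng = random.Random(seed)
    trials = []
    for adapt in ADAPTATION:
        for align in ALIGNMENT:
            for _ in range(n_per_condition):
                trials.append({
                    "adaptation": adapt,
                    "alignment": align,
                    "fixation_to_adapter_ms": rng.uniform(100, 300),  # mean ~200 ms
                    "adapter_duration_ms": rng.uniform(2600, 3000),   # ~2800 ms
                    "isi_ms": rng.uniform(150, 350),                  # adapter offset to test onset
                    "test_duration_ms": 200,
                    "iti_ms": rng.uniform(1500, 1700),                # ~1600 ms
                })
    rng.shuffle(trials)
    return trials

trials = build_trial_list(seed=1)
assert len(trials) == 540
```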
EEG recording
Scalp EEG was recorded from 58 tin electrodes mounted in an electrode cap (Quik-cap, Neuroscan Inc.), with a left earlobe reference. Two pairs of bipolar electrodes were used to record vertical and horizontal eye movements. Electrode impedances were kept below 10 kΩ. The analog EEG signal was digitized at a 1000-Hz sampling rate and a digital anti-aliasing filter of 0.27 * sampling rate was applied at recording (at a 1000-Hz sampling rate, the usable bandwidth is 0 to approximately 270 Hz). EEG data were analyzed using EEprobe 3.2 (ANT, Inc.) and Matlab. After filtering of the EEG with a digital 30-Hz low-pass filter, time windows in which the standard deviation of the EEG on any electrode within a sliding 200-ms time window exceeded 35 μV were marked as either EEG artifacts or blink artifacts. Blink artifacts were corrected by subtracting vertical electrooculogram (EOG) propagation factors, based on EOG components derived from principal component analysis. Trials containing EEG artifacts were rejected. Participants' averages were baseline corrected using the 100-ms prestimulus epoch and then re-referenced to a common average reference. 
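As an illustration of the artifact-marking criterion described above (standard deviation of the EEG on any electrode exceeding 35 μV within a sliding 200-ms window), a minimal NumPy sketch is given below. The 50% window overlap and the simulated data are assumptions made for the example, not details reported in the original analysis, which was performed in EEprobe and Matlab.

```python
import numpy as np

def flag_artifact_windows(eeg, sfreq=1000.0, win_ms=200, threshold_uv=35.0):
    """Mark samples falling in any sliding window where the standard deviation of at
    least one channel exceeds the threshold (eeg: channels x samples, in microvolts)."""
    win = int(round(win_ms * sfreq / 1000.0))
    n_ch, n_samp = eeg.shape
    bad = np.zeros(n_samp, dtype=bool)
    for start in range(0, n_samp - win + 1, win // 2):  # 50%-overlapping windows (assumption)
        segment = eeg[:, start:start + win]
        if segment.std(axis=1).max() > threshold_uv:
            bad[start:start + win] = True
    return bad

# Example with simulated data: 58 channels, 10 s of EEG sampled at 1000 Hz
rng = np.random.default_rng(0)
eeg = rng.normal(scale=10.0, size=(58, 10_000))          # ~10 uV background noise
eeg[12, 4000:4200] += rng.normal(scale=120.0, size=200)  # inject a large artifact on one channel
bad_mask = flag_artifact_windows(eeg)
print(f"{bad_mask.mean():.1%} of samples flagged as artifactual")
```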
ERP analyses
Two separate sets of analyses were performed on the ERP signal recorded at the scalp in response to the second face of the trial sequence (test face). First, we performed conventional analyses on specific visual potentials, as classically done in studies of face perception (e.g. Bentin, McCarthy, Perez, Puce, & Allison, 1996; Itier & Taylor, 2002; Rossion et al., 1999): the P1 (maximal at approximately 105 ms) and N170 (maximal at approximately 165 ms) components, recorded at posterior sites. Amplitude values of the P1 and the N170 were measured at five pairs of occipito-temporal electrodes in the left and right hemisphere where both components were the most prominent (P7/8, P5/6, PO7/8, PO5/6, O1/2). Amplitudes were quantified for each condition as the mean voltage measured within 30-ms windows centered on the grand average peak latencies of the components' maximum. The amplitude values of each component were then submitted to separate repeated measures analyses of variance (ANOVA) with the factors alignment (aligned, misaligned), adaptation (same, bottom different, top + bottom different), hemisphere (left, right) and electrode (five levels). In the ERP analyses for the bottom-different condition, we included data from both correct and incorrect trials, rather than from incorrect trials only, for two reasons. First, incorrect trials were too few (mean = 16.5 trials; SD = 15.4 trials) to obtain a reliable ERP waveform for each subject. Second, the composite illusion does not only result in incorrect responses; it also generates increased response times even when subjects provide a correct response (e.g., Rossion & Boremanse, 2008; Young et al., 1987; see Figure 2). It is therefore almost impossible to determine from behavioral data whether the illusion took place on any given trial, and all trials may be relevant in the experiment. 
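The amplitude quantification described above (mean voltage within a 30-ms window centered on the grand-average peak latency of each component) can be expressed in a few lines. The sketch below, in Python/NumPy, is an illustration under the approximate peak latencies of 105 ms (P1) and 165 ms (N170) mentioned above; it is not the original Matlab code.

```python
import numpy as np

def mean_amplitude(erp, times_ms, center_ms, half_width_ms=15.0):
    """Mean voltage within a 30-ms window centered on a grand-average peak latency
    (a sketch; erp is channels x time-points, times_ms is the matching time axis)."""
    mask = (times_ms >= center_ms - half_width_ms) & (times_ms <= center_ms + half_width_ms)
    return erp[:, mask].mean(axis=1)

# Example: quantify P1 (~105 ms) and N170 (~165 ms) amplitudes for one condition average
times_ms = np.arange(-100, 500)                                   # 1 kHz sampling, -100 to 499 ms
erp = np.random.default_rng(1).normal(size=(10, times_ms.size))   # 10 occipito-temporal channels
p1_amp = mean_amplitude(erp, times_ms, center_ms=105.0)
n170_amp = mean_amplitude(erp, times_ms, center_ms=165.0)
```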
Figure 2
 
Behavioral results for the matching task of composite faces. A. Error rates are shown on the left and response times on the right of the figure. Error bars are standard errors. B. Distribution of correct (full lines) and incorrect (dashed lines) response times in the composite task, separately for same and bottom-different conditions in the aligned and misaligned face format. The number of responses in successive 20 ms time-bins is plotted as a function of time from stimulus (test face) onset. Numbers in parentheses represent the mean and median response times over all correct trials, respectively. For aligned trials, the composite face illusion affected even the very early part of the distribution.
A second set of analyses was carried out to characterize more precisely the time-course of holistic face processing in the human brain during this experiment. Specifically, we analyzed ERP data at each electrode as a function of time in a series of pair-wise comparisons between the different conditions. The first two comparisons were performed between the ERP elicited in the same condition and the ERP elicited in the top + bottom different condition, both for aligned and misaligned faces. Given that these comparisons were carried out between the condition where both the top and bottom of the face are repeated (same) and the condition where both the top and bottom of the face are not repeated (top + bottom different), we considered these comparisons to reflect an effect of adaptation to a whole individual face. 
The next two comparisons were performed between the same condition and the bottom different condition, again both for aligned and misaligned faces. These comparisons were carried out between the condition where both the top and the bottom of the face are repeated (same) and the condition in which the identity of the top of the test face was perceived as different in the aligned condition only, as a result of the composite illusion (i.e. adaptation to identity in the composite illusion). To statistically identify the onset latency of the differential ERP responses between the conditions compared, we performed a permutation test (see Blair & Karniski, 1993; Nichols & Holmes, 2002) on each scalp electrode and each time sample. This method was used previously to assess the time course of ERP face identity adaptation effects (Jacques et al., 2007). In a given permutation sample, the ERP data (consisting of the whole electrode * time-point matrix) for the two conditions compared are randomly permuted within each subject (i.e. paired comparisons) to obtain two new bins of size N. Because the permutation shuffles the assignment of the conditions, the difference between the means of the two new bins reflects the difference between conditions under the null hypothesis. We performed 5000 of all 2^17 possible permutations to generate a distribution of ERP differences under the null hypothesis. Comparing the observed ERP difference between the two conditions with the permutation distribution allowed us to estimate the probability that this observed ERP difference was due to chance (i.e. a p-value). The results of this analysis are displayed in time by electrode statistical plots in which significant differences between the conditions compared are color-coded as a function of the amplitude of the ERP difference (see Figure 6). To minimize the probability of type I errors (false positives) due to the large number of comparisons performed, only differences significant at p < 0.01 (two-tailed) that lasted for at least 20 consecutive time-samples and included a cluster of at least two neighboring electrodes were considered. 
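The logic of this point-by-point paired permutation test can be illustrated with the NumPy sketch below. Shuffling the condition labels within each subject is implemented as a random sign flip of each subject's difference waveform; the duration criterion (20 consecutive samples) is included, whereas the neighboring-electrode criterion is omitted because it requires a channel adjacency definition. This is an illustrative re-implementation under those assumptions, not the original analysis code.

```python
import numpy as np

def paired_permutation_pvals(diff, n_perm=5000, seed=0):
    """Point-by-point paired permutation test (a sketch of the approach described above).
    `diff` is subjects x electrodes x time-points: the per-subject ERP difference between
    the two conditions compared. Permuting condition labels within each subject amounts
    to randomly flipping the sign of that subject's difference. Returns two-tailed
    p-values of shape electrodes x time-points."""
    rng = np.random.default_rng(seed)
    n_sub = diff.shape[0]
    observed = diff.mean(axis=0)                      # electrodes x time-points
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1, 1))
        perm_mean = (diff * signs).mean(axis=0)
        exceed += (np.abs(perm_mean) >= np.abs(observed))
    return (exceed + 1) / (n_perm + 1)

def apply_duration_threshold(sig_mask, min_samples=20):
    """Keep only significant runs lasting at least `min_samples` consecutive time-points."""
    out = np.zeros_like(sig_mask)
    for e in range(sig_mask.shape[0]):
        run_start = None
        for t in range(sig_mask.shape[1] + 1):
            on = t < sig_mask.shape[1] and sig_mask[e, t]
            if on and run_start is None:
                run_start = t
            elif not on and run_start is not None:
                if t - run_start >= min_samples:
                    out[e, run_start:t] = True
                run_start = None
    return out

# Example: 17 subjects, 58 electrodes, 600 time-points of simulated difference waveforms
diff = np.random.default_rng(2).normal(size=(17, 58, 600))
pvals = paired_permutation_pvals(diff, n_perm=1000)
sig = apply_duration_threshold(pvals < 0.01, min_samples=20)
```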
Results
Behavioral results
Error rates and response times at the face top part matching task ( Figure 2) were submitted to repeated measures ANOVAs with the factors adaptation (3 levels) and alignment (2 levels). 
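For illustration, the sketch below shows how such a 3 (adaptation) × 2 (alignment) repeated-measures ANOVA could be run in Python with statsmodels on simulated long-format data (one mean per subject and condition cell). The simulated values are arbitrary, and, unlike the analyses reported below, this sketch does not apply the sphericity correction (e.g., Greenhouse-Geisser) that produces the fractional degrees of freedom in the reported F tests.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated long-format behavioral data: one mean RT per subject and condition cell
rng = np.random.default_rng(3)
rows = []
for subj in range(1, 18):                                     # 17 participants
    for adapt in ["same", "bottom_different", "top_bottom_different"]:
        for align in ["aligned", "misaligned"]:
            rt = 550 + rng.normal(scale=40)
            if adapt == "bottom_different" and align == "aligned":
                rt += 80                                       # simulated composite-face slowing
            rows.append({"subject": subj, "adaptation": adapt,
                         "alignment": align, "rt": rt})
df = pd.DataFrame(rows)

# 3 (adaptation) x 2 (alignment) repeated-measures ANOVA on response times
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["adaptation", "alignment"]).fit()
print(res.anova_table)
```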
The ANOVA performed on error rates revealed a significant effect of adaptation (F(1.1, 18.2) = 9.1, p < 0.01), a significant effect of alignment (F(1, 16) = 12.6, p < 0.005) and a significant interaction between the two factors (F(1.1, 17.8) = 22.1, p < 0.0005). As expected, participants made many more errors with aligned compared to misaligned faces in the bottom different condition (p < 0.0001). Moreover, participants also made more errors with misaligned compared to aligned faces in the top + bottom different condition (p < 0.05). Last, participants made more errors in the top + bottom different condition compared to the same condition (p < 0.05), probably due to the oddity of the correct response in this condition ("different"). 
The ANOVA performed on correct response times also revealed a significant effect of adaptation (F(1.4, 22.7) = 14.6, p < 0.0005), a significant effect of alignment (F(1, 16) = 32.3, p < 0.0001) and a significant interaction between the two factors (F(1.2, 18.9) = 23.6, p < 0.0001). The interaction revealed that participants were much slower for aligned compared to misaligned faces in the bottom different condition (p < 0.0001). They were also slightly slower for aligned compared to misaligned faces in the same condition (p < 0.05), and the opposite was observed in the top + bottom different condition (p < 0.05). In addition, they were generally slower in the top + bottom different condition (p < 0.005). 
In summary, we observed a strong composite face effect (i.e. increased errors and response times for aligned compared to misaligned in the bottom different condition) in our matching task, replicating the results of numerous previous behavioral studies (e.g., Goffaux & Rossion, 2006; Le Grand, Mondloch, Maurer, & Brent, 2004; Rossion & Boremanse, 2008). 
Electrophysiological results
ERP components analyses
Grand average ERP waveforms elicited by the test faces in the different conditions are depicted in Figure 3 for two occipito-temporal electrodes (PO7/PO8). 
Figure 3
 
Grand average ERP waveforms elicited by the test face and histograms of the N170 amplitude. A. Grand average ERP waveforms elicited by the test face in the aligned (top row) and misaligned (bottom row) face format at occipito-temporal electrodes PO7 and PO8 (left and right hemisphere respectively). The ERP to the same, bottom different and top + bottom different conditions are displayed for each format. B. Histograms of the amplitude of the N170 in the different conditions averaged over 5 electrodes in each hemisphere (see methods). C. Histograms of the N170 amplitude difference (averaged over the 5 electrodes in each hemisphere) between bottom different and same conditions and between top + bottom different and same conditions as a function of format and hemisphere. Error bars are standard errors of the mean computed after normalizing the data to remove subject variability (Loftus & Masson, 1994).
At the level of the P1 (see Figure 4), a significant effect of alignment (F(1, 16) = 13.1, p < 0.005), qualified by a significant interaction between alignment and hemisphere (F(1, 16) = 16.6, p < 0.001), revealed that the P1 was larger in the misaligned than in the aligned condition, but only in the left hemisphere (p < 0.001), not in the right hemisphere (p = 0.43). There was also a significant effect of adaptation (F(1.8, 29.2) = 8.8, p < 0.005), indicating a larger P1 in the top + bottom different condition relative to the same (p < 0.005) and bottom different (p < 0.005) conditions. The P1 amplitude for these latter two conditions did not differ (p = 0.57). The effect of adaptation was slightly larger on the most posterior electrodes (PO7/8, PO5/6, O1/2), as revealed by a significant adaptation × electrode interaction (F(4.5, 71.3) = 3.3, p < 0.05). However, the adaptation effect was significant on all 5 pairs of electrodes analyzed. Importantly, the adaptation × alignment interaction was not significant (F(1.6, 25) = 0.8, p = 0.43), indicating that the adaptation effect did not differ between aligned and misaligned faces at the P1 level. 
Figure 4
 
Histograms of the amplitude of the P1 in the different conditions averaged over 5 electrodes in each hemisphere (see Methods).
Misaligned test faces elicited a larger N170 than aligned faces (F(1, 16) = 24.5, p < 0.005) (see Letourneau & Mitchell, 2008). The main effect of adaptation was not significant (F(1.6, 26.2) = 2.5, p = 0.1) but, importantly, a significant interaction between alignment and adaptation (F(1.5, 24.1) = 4.1, p < 0.05) revealed that the effect of adaptation was significant in the aligned format (p < 0.01) and not in the misaligned format (p = 0.52) (Figure 3). An additional ANOVA performed only on the N170 recorded in the aligned format revealed a significant interaction between adaptation and hemisphere (F(1.7, 26.9) = 4.2, p < 0.05). In the right hemisphere, the N170 was larger for bottom different compared to same (p < 0.01), and for top + bottom different compared to same (p < 0.005). The N170 elicited by top + bottom different was only marginally larger than for bottom different (p = 0.07). In the left hemisphere, the only marginally significant effect was a larger N170 for top + bottom different compared to same (p = 0.09), the other two comparisons being non-significant (p's > 0.11). In addition to the analyses performed on the N170 elicited by correct and incorrect response trials (see Methods), we performed an ANOVA on the N170 elicited by correct trials only, comparing the same to the bottom-different condition in the aligned format. Similar to when the analysis was performed on correct and incorrect trials, this ANOVA revealed a significantly larger N170 for the bottom-different compared to the same condition (F(1, 16) = 10.3, p < 0.01). The adaptation × hemisphere interaction was no longer significant (F(1, 16) = 2.2, p = 0.16), even though the adaptation effect was still much more reliable in the right (p = 0.003) than in the left (p = 0.04) hemisphere. 
To summarize, the N170 was larger for bottom-different and top + bottom different compared to the same condition. This effect was only observed in the aligned and not in the misaligned condition, in line with the composite effect observed on behavioral data, and was found only over right hemisphere electrodes. In other words, the differential N170 amplitude between bottom-different and same conditions is consistent with the perceptual illusion (i.e. top parts of adapting and test faces are perceived as different even though only the bottom parts are different) that occurs only in the aligned format. 
The time course of individual face holistic encoding
Point-by-point permutation tests were performed to compare same to top + bottom different conditions and same to bottom-different conditions in aligned and misaligned format. The differences between conditions compared are plotted as subtraction waveforms in Figure 5 (left part) for 58 electrodes, with 3 occipito-temporal electrodes in each hemisphere being highlighted. 
Figure 5
 
Subtraction waveforms comparing same to top + bottom different conditions (left) and comparing same to bottom different conditions (right) in the aligned (top row) and misaligned (bottom row) format for 58 electrodes. Highlighted electrodes are three pairs of occipito-temporal (OT) electrodes in the left (dotted black lines) and right (full black lines) hemispheres. The location of these three electrodes is shown on the topographic maps (view from above) in each plot. Each map shows the distribution of ERP difference at 200 ms after stimulus onset. The amplitude scale for the topographical maps is [−2, 2] for the left column and [−1, 1] for the right column.
Time by electrode plots of the significant identity adaptation effects (i.e. same vs. top + bottom different) are shown in the left column of Figure 6. Consistent identity adaptation effects were observed at around 160 ms in the aligned format and slightly later (at 185 ms) and smaller in the misaligned format. Interestingly, these differences were both restricted to occipito-temporal electrodes in the right hemisphere, with a concomitant difference of opposite polarity at anterior centro-temporal sites in the left hemisphere (Figures 5 and 6). This early adaptation effect lasted until about 280 ms both for the aligned and misaligned conditions. 
Figure 6
 
Time by electrode statistical plots of the significant ERP differences between conditions. Left column: statistical plots of the significant differences (p < 0.01; two-tailed; 5000 permutations) between same and top + bottom different conditions in the aligned (top row) and misaligned (bottom row) format. Right column: significant differences between same and bottom different in the aligned (top row) and misaligned (bottom row) format. Only significant differences are color-coded as a function of the amplitude of the difference between the ERP waveforms compared. The 58 electrodes are represented on the y-axis and grouped as a function of their location in frontal (F), central (C) and posterior (P) scalp regions, as well as left hemisphere (L), midline (M) and right hemisphere (R). Topographic maps (view from above the head) of the significant differences at five different time points are displayed next to each plot. Each map is an average of 20 ms of ERP signal and the black arrows on the lower x-axis of each plot indicate the temporal location of each topographic map. Note that the amplitude scale for the [160–180], [180–200], [200–220] and [260–280] ms maps is [−1.3, 1.3] μV and the scale for the [400–420] ms map is [−2.5, 2.5] μV.
Time by electrode plots of the significant identity adaptation effect in the composite illusion (i.e. same vs. bottom different) are shown in the right column of Figure 6. In the aligned format, a significant difference (p < 0.01, two-tailed) started at 165 ms on 3 right hemisphere low occipito-temporal electrodes (O2, PO8, P8; see Figure 6). This difference then spread to more electrodes at around 200 ms after the onset of the test face, but was still restricted to the right hemisphere. This early difference was also accompanied by an effect of opposite polarity at anterior temporal sites in the left hemisphere, although of smaller magnitude than that observed in the identity adaptation effect. The spatio-temporal properties of the adaptation effect under the composite face illusion were thus strikingly similar (although smaller in amplitude) to those observed in the identity adaptation measured when comparing same to top + bottom different (compare the left and right parts of the top row of Figures 5 and 6). In stark contrast to the differences observed in the aligned format, there was almost no difference when the exact same conditions were compared (same vs. bottom different) in the misaligned format. More precisely, the only consistent effect observed was a small difference at around 200 ms over parietal and centro-parietal electrodes. Thus, contrary to the aligned format, which generated qualitatively comparable effects for identity adaptation and the composite illusion, the near-absence of an effect in the misaligned format when the bottom differs contrasts with the strong identity adaptation effect when both the top and bottom differ in misaligned faces (compare the left and right parts of the bottom row of Figures 5 and 6). 
In addition to the effect observed starting at 165 ms in the aligned format in the composite illusion, an earlier difference occurred around the onset of the N170 component in the 120–140 ms time-window over a few parieto-occipital electrodes, mostly in the left hemisphere (see right upper plot in Figure 6). These early differences were due to a slight latency difference between the same and bottom different conditions at the level of the P1 peak, creating artificial amplitude differences on the N170 slope (slightly larger amplitude for the same condition). An additional point-by-point analysis indicated that there was no longer a significant difference between these conditions in the 120–140 ms time-window after correcting for the P1 latency difference (see Appendix A). In contrast, the difference during the N170 time-window remained conspicuous. 
When considering later ERP responses (>300 ms post-stimulus), large differences were found between 350 ms and 450 ms, distributed over bilateral occipito-temporal regions with a polarity reversal over fronto-central electrodes. These differences were observed in all comparisons except in the misaligned format when comparing the same to the bottom different condition (Figure 6). In this last comparison, the only significant difference was observed from 440 to 520 ms over centro-parietal electrodes. A last strong difference started slightly before 500 ms only when comparing same to top + bottom different conditions. This effect is due to the presence of a large ERP component, maximal over parietal regions, in the ERP response to the top + bottom different condition (Figure S2). This component is most probably a P300 (see Polich, 2007 for a review), generated because these conditions (aligned and misaligned) required a "different" response, which had a smaller probability than the "same" response (1/3 vs. 2/3 of the trials, respectively) in the present study. 
Discussion
Replicating recent observations (Caharel et al., 2009; Ewbank et al., 2008; Jacques et al., 2007), we found that when an individual face is different from a previously presented adapter face, there is a release from adaptation starting at around 160 ms post-stimulus onset, in the right occipito-temporal cortex. That is, the N170 was larger in amplitude when pairs of different compared to repeated faces were presented, an effect that is also observed in some face identity repetition studies (e.g., Campanella et al., 2000; Heisz, Watter, & Shedden, 2006; Itier & Taylor, 2002; Jemel, Pisani, Calabria, Crommelinck, & Bruyer, 2003; but see e.g. Bentin & McCarthy, 1994; Schweinberger, Huddy, & Burton, 2004), but appears to be most consistent and large in face identity adaptation paradigms with a long presentation duration of the adapter face and a short inter-stimulus-interval, as used here (Caharel et al., 2009; Jacques et al., 2007). This observation indicates that different individual faces can be discriminated reliably in the human visual cortex as early as during the time-window of the so-called N170 ERP component (Bentin et al., 1996; Jeffreys, 1996; Rossion & Jacques, 2008 for a review). Unlike earlier and spurious differences that may arise from the repetition of physically identical images (i.e. at the level of the visual P1 component), this sensitivity to individual faces at 160 ms cannot be attributed to mere image-based differences, since it is abolished by flipping the exact same stimuli upside-down (Jacques et al., 2007), and is also found even when there is a substantial (30°) change of viewpoint between the adapter and the test face (Caharel et al., 2009). Here, interestingly, we found that the release from adaptation started slightly later (at about 185 ms) when the two halves of the faces were spatially misaligned, suggesting that disrupting the whole facial stimulus delays the coding of individual faces (as upside-down inversion does, see Jacques et al., 2007). Together with the finding that inverting faces abolishes the release from adaptation effect on the N170 (Jacques et al., 2007), this absence of a release from adaptation on the N170 for the top + bottom different misaligned condition suggests that the effects observed on the N170 in the aligned condition (i.e. same vs. bottom-different and same vs. top + bottom different) are not simply due to the magnitude of the physical change between adapting and test faces. 
The novel and most important observation of the present study is that when observers erroneously perceived two physically identical top parts of faces as being slightly different, increasing their errors and response times (the hallmark of the behavioral composite face illusion), there was also a release from adaptation in the electrophysiological response starting at about 165 ms, in the right hemisphere. This observation provides compelling evidence that as soon as the percept of the test face is sufficiently detailed to allow it to be discriminated from another individual face (the adapter), the parts that make up the individual face stimulus are integrated into a global (i.e., holistic) representation. That is, the initial representation of an individual face in the human brain appears to be inherently holistic. Importantly, when the two halves of the face were spatially misaligned, and the top parts of the faces were thus correctly perceived as being identical, there was no release from adaptation, even though the bottom parts differed between the adapting and the test stimuli. In short, here we clarified the time-course of the composite face illusion in the human brain, a well-known phenomenon in the behavioral face processing literature which is taken as the strongest evidence that the facial features of an individual face are integrated into a whole face representation (Maurer, Le Grand, & Mondloch, 2002; Rossion, 2008b; Young et al., 1987). 
It is important to make a distinction between our findings and earlier reports of a sensitivity to the whole face stimulus at the level of the N170 (e.g., George, Jemel, Fiori, Chaby, & Renault, 2005; Latinus & Taylor, 2005; Letourneau & Mitchell, 2008). The behavioral literature indicates that faces are processed holistically both at the basic category level (i.e. categorizing the visual stimulus as "a face", or detecting a face in a visual scene) and at the individual level (i.e. categorizing the stimulus as an individual face). Evidence that basic-level categorization of a visual stimulus as a face relies on holistic processing comes from the finding that subjects are slower to detect a face in a visual scene when the first-order organization of the face is disrupted by upside-down inversion (Rousselet, Mace, & Fabre-Thorpe, 2003). Further evidence comes from the observation that, in certain circumstances, basic-level categorization of a face depends heavily on a global representation, for instance when the observer perceives a face in two-tone "Mooney" face stimuli (Mooney, 1957; Moore & Cavanagh, 1998), or in paintings by the artist Arcimboldo (Hulten, 1987), which are composed of non-facial features. ERP studies have provided evidence that holistic processing of faces at the basic level takes place during the N170 time-window, since the amplitude of the N170 is larger when observers correctly perceive Mooney face stimuli (George et al., 2005; Latinus & Taylor, 2005) or faces in Arcimboldo paintings (Caharel et al., unpublished data; see Rossion & Jacques, 2008). Also, as observed here and in a recent study (Letourneau & Mitchell, 2008), breaking a single face stimulus into two separate parts affects the N170 amplitude, suggesting that the global first-order organization of the stimulus is coded at the level of the N170. However, this effect leads to a somewhat paradoxical increase of the N170 amplitude (Letourneau & Mitchell, 2008; the present study), similar to the effect of face inversion (e.g. Itier & Taylor, 2002; Rossion et al., 1999) or of the presentation of isolated eyes (Bentin et al., 1996; Itier, Alain, Sedore, & McIntosh, 2007). The meaning of this larger ERP amplitude to simple face stimulus transformations (usually associated with a delayed latency) is currently debated (see e.g. Bentin et al., 1996; Itier et al., 2007; Jacques et al., 2007; Rossion et al., 1999) and remains unclear. 
Here, in contrast to these latter studies investigating basic-level face categorization, we used the composite face illusion in conjunction with adaptation to clarify when an individual holistic face representation is activated in the visual cortex. At the behavioral level, holistic processing of individual faces is revealed by the finding that the perception of the identity of a feature is strongly influenced by the identity or the position of the other features of a face, such as in the “whole-part advantage” paradigm (Tanaka & Farah, 1993) or in the CFI (Goffaux & Rossion, 2006; Young et al., 1987; the present experiment). The present results thus indicate that populations of neurons in the right occipito-temporal cortex that participate in generating the N170 component code the two face halves as an undissociated representation of the whole individual face. 
The observation that the CFI takes place at around 160 ms, in the N170 time-window, poses serious constraints on the time-course and unfolding of face processing in the human brain. To understand their significance, these observations should be put in the general context of the time-course of face processing in the human brain. The earliest visual face-sensitive responses can be observed as early as 100 ms on the human scalp (P1 component, or M1 in MEG studies, e.g. Halgren, Raij, Marinkovic, Jousmäki, & Hari, 2000; Herrmann, Ehlis, Ellgring, & Fallgatter, 2005). However, these early P1/M1 effects appear to be related to low-level visual differences between faces and objects (e.g. differences in spatial frequency spectra, color distribution, global or local contrast) rather than to the activation of face representations in the human brain (Rossion & Jacques, 2008; Rousselet, Husk, Bennett, & Sekuler, 2008; Tanskanen, Nasanen, Montez, Paallysaho, & Hari, 2005). Solid evidence for the activation of high-level face representations rather points to an onset of about 120–130 ms (N170 time-window; Rossion & Jacques, 2008; Rousselet et al., 2008), a timing that is compatible with the average onset latency of face-selective cells in the monkey inferior-temporal cortex (at around 80–100 ms; Kiani, Esteky, & Tanaka, 2005), taking into account that these neurons should fire slightly later in the bigger human brain (see Foxe & Schroeder, 2005). This activation of facial representations during the N170 is based on the presence of clearly identifiable facial features and/or a first-order facial organization (Rossion & Jacques, 2008). As indicated above, the face representation starts to be sufficiently refined at about 160 ms to allow the discrimination between different individuals (Caharel et al., 2009; Ewbank et al., 2008; Jacques et al., 2007; Jacques & Rossion, 2006). 
Thus, given that the N170 represents an early stage of perceptual face encoding (Bentin et al., 1996; Eimer, 2000; Rossion & Jacques, 2008; Rousselet et al., 2008), our finding confirms that holistic processing of individual faces occurs at a perceptual level, in contrast to recent suggestions that behavioral effects in the composite face paradigm have a decisional rather than a perceptual locus (Richler, Gauthier, Wenger, & Palmeri, 2008). While we do not question the fact that the composite visual illusion leads to decisional response biases (as also found here, see Figure 2), our observations, made with an ERP method that allows a direct measurement of the time-course of face processing, nevertheless point to the perceptual origin of the effect. Most importantly, these observations suggest that as soon as the perceptual system is sensitive to the individuality of the face (∼160 ms), the individual features are processed in a holistic representation. That is, in the bottom-different aligned condition, the top part of the face is physically identical between the adapter and the test face, but it is perceived as different as soon as the system is sensitive to individual faces. To put it differently, the holistic representation of an individual face is not accessed after an analysis of the local features of the face that would then be integrated into such a global representation. Hence, these observations are difficult to reconcile with a hierarchical process in which facial parts or fragments would be extracted first and then combined into a global representation of the individual face (e.g. Harris & Nakayama, 2008; Haxby et al., 2000; Jiang et al., 2006; Ullman, 2007). Rather, our observations suggest that, in the time window of the N170, the individual test face is processed as a whole and compared to an internal representation built previously from the adapter face, also stored as a holistic representation (a global template). Such a holistic representation contains information about the local shape of the features as well as their positions with respect to each other and to the global contour over the entire face (e.g., Barton, Zhao, & Keenan, 2003; Sergent, 1984; Young et al., 1987). Thus, changing the bottom half of the face between the adapter and the test modifies the perception of the whole face (Young et al., 1987; see Figure 1). Consequently, it is the whole test face, including the top half, which does not match the encoded template, leading to a release from adaptation in this condition. We also note that this effect, which emerges significantly at the level of the N170 and is prolonged over a longer time-window, is not as large as when the two halves of the face differ physically, since the whole faces differ even more in that condition. 
Importantly, our results and this interpretation do not mean that local internal facial features such as eyes, eyebrows or nose, or larger fragments (Harel, Ullman, Epshtein, & Bentin, 2007), would not be part of the face representation activated at the time of the N170 visual evoked potential. There is clear evidence that facial features modulate the electrophysiological signal within the time window of the N170 (e.g. Bentin et al., 1996; Eimer, 2000; Harris & Nakayama, 2008; Itier et al., 2007; Schyns, Jentzsch, Johnson, Schweinberger, & Gosselin, 2003; Zion-Golumbic & Bentin, 2007). Moreover, local facial features, and not only their global organization, are an important part of what defines an individual face, even in a holistic representation (e.g., Wallis, Siebeck, Swann, Blanz, & Bulthoff, 2008). However, what our results suggest is that the perception of a feature (here in the top half) in an individual face depends, from the outset, on the perception of the other feature(s) (here in the bottom half) that make the whole individual face. In other words, while individual local features are undoubtedly highly diagnostic of face identity, they do not appear to be represented initially as separate entities from the whole face. This proposal is compatible with a strong view of holistic face representation as advocated by Tanaka, Farah and colleagues (Farah et al., 1998; Tanaka & Farah, 1993) according to which faces are represented only holistically, or at least that the holistic representation precedes and overrules the representation of facial parts. 
This proposal of an initial holistic representation of the individual face is also in tune with the view that face processing evolves from a global coarse representation to a representation that is progressively refined with finer resolution information, i.e., a coarse-to-fine mode of face perception (Sergent, 1986; Sugase, Yamane, Ueno, & Kawano, 1999). This coarse-to-fine mode of processing face stimuli is compatible with more general models of visual processing in which the perception or the awareness of the global shape/organization of an object or a visual scene precedes the analysis of local visual information (e.g. Flavell & Draguns, 1957; Hochstein & Ahissar, 2002; Navon, 1977). One common ground between these models is that the analysis of the global information may provide a first approximation of what the object or the scene is, and guide a more detailed visual scrutiny. For instance, in the "reversed hierarchy theory" (Hochstein & Ahissar, 2002), conscious perception of the global shape first occurs in high-level visual areas containing neurons with larger receptive fields. Finer-grained representations could then be obtained through reentrant connections with lower-level visual areas containing neurons with smaller receptive fields and capable of representing more local information (see also Lamme & Roelfsema, 2000; Mumford, 1992). Such a coarse-to-fine mode of processing, with an initial activation in high-level visual areas sensitive to faces and reentrant connections with lower visual areas to refine the representation, would also be suitable for the processing of an individual face in the human brain (Rossion, 2008a). More precisely, there is evidence from fMRI adaptation studies that, among the face-sensitive visual areas of the occipito-temporal cortex (see Haxby et al., 2000; Sergent & Signoret, 1992), the higher-level 'fusiform face area' ('FFA'; Kanwisher, McDermott, & Chun, 1997) in the right hemisphere encodes faces holistically (Harris & Aguirre, 2008; Rossion et al., 2000; Schiltz & Rossion, 2006). Most interestingly, a holistic representation of individual faces in the right 'FFA' was recently found using the composite face illusion in fMRI, both in a block design (Schiltz & Rossion, 2006) and in an event-related paradigm similar to the present study (Schiltz et al., submitted). Along these lines, it should also be noted that in the present study the early release from adaptation due to the integration of features into a holistic individual face representation was found only over the right occipito-temporal scalp region. These observations are compatible with the well-known dominance of the right hemisphere in processing faces in general (e.g. Sergent & Signoret, 1992), and holistic face representations in particular (e.g., Hillger & Koenig, 1991). Besides the 'FFA', the face-sensitive area in the inferior occipital gyrus (the 'occipital face area'; Gauthier et al., 2000) also encodes faces holistically, as revealed by the CFI (Schiltz & Rossion, 2006), although less consistently than the 'FFA', and might therefore also contribute to the effect observed in the present study. 
In conclusion, the present study provides evidence that as soon as an individual face is encoded in the right hemisphere, at about 160 ms following stimulus onset, the representation of that face is holistic rather than based on independent features. 
Supplementary Materials
Supplementary Figure 1. Size of the composite face stimuli in aligned and misaligned format. 
Supplementary Figure 2. Grand averaged ERP waveforms at parietal electrode PZ showing the P300 for the top + bottom different conditions both in aligned (solid lines) and misaligned (dashed lines) face format. Below: topographic maps of the significant difference (p < 0.01) between same and top + bottom different conditions averaged over 580–600 ms post-stimulus onset (see gray vertical bar on ERP plot), showing the large P300 over centro-parietal electrodes. 
Appendix A
To ensure that the adaptation effect observed in the composite illusion (same vs. bottom different in the aligned format) during the N170 time-window was not due to the latency difference at the level of the P1 peak creating artificial differences in the 120–140 ms window, we performed the point-by-point analysis on the ERP data corrected for this latency difference. Specifically, we first measured the grand-average P1 latency in each condition based on the global field power (i.e. the standard deviation across all electrodes at each time-point; Lehmann & Skrandies, 1980) and then re-aligned the ERP waveforms of each subject based on the difference between the P1 latency averaged across conditions and the P1 latency for a given condition. Of the four comparisons described above, only the comparison between the same and bottom different conditions in the aligned format was affected by this latency correction. That is, the P1 measured on the global field power was 1 ms earlier in the bottom different than in the same condition. We then performed the point-by-point analysis on the latency-corrected ERP waveforms for the same vs. bottom different comparison only. In this comparison, after correcting for the P1 latency there was no longer a significant difference between conditions in the 120–140 ms time-window (Figure A1). In contrast, the difference during the N170 time-window remained conspicuous. 
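The latency estimation and correction described above can be illustrated with the following Python/NumPy sketch, in which the P1 latency is taken as the peak of the global field power within an assumed 80–140 ms search window and a condition's waveforms are shifted by the latency difference. This is a simplified illustration under those assumptions, not the original analysis code.

```python
import numpy as np

def global_field_power(erp):
    """Global field power: standard deviation across electrodes at each time-point."""
    return erp.std(axis=0)                                   # erp is electrodes x time-points

def p1_latency_from_gfp(erp, times_ms, window=(80.0, 140.0)):
    """P1 latency estimated as the GFP peak within a search window (window is an assumption)."""
    gfp = global_field_power(erp)
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return times_ms[mask][np.argmax(gfp[mask])]

def shift_erp(erp, shift_samples):
    """Shift an ERP in time by a whole number of samples (positive = later)."""
    return np.roll(erp, shift_samples, axis=1)

# Example: re-align one condition's ERP so its P1 latency matches the across-condition mean
times_ms = np.arange(-100, 500)                              # 1 kHz sampling, -100 to 499 ms
erp_cond = np.random.default_rng(4).normal(size=(58, times_ms.size))
lat_cond = p1_latency_from_gfp(erp_cond, times_ms)
lat_mean_across_conditions = 105.0                           # hypothetical value
erp_aligned = shift_erp(erp_cond, int(lat_mean_across_conditions - lat_cond))
```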
Figure A1
 
Statistical plots of the comparison between same and bottom different conditions in the aligned format, performed on ERP data corrected for a P1 latency difference (right) or not corrected for this difference (left). Note that the differences observed in the 120–140 ms time-window mostly over left posterior electrodes in the uncorrected data are no longer present in the corrected comparison. Only significant differences ( p < 0.01; two-tailed; 5000 permutations) are color-coded as a function of the amplitude of the difference between the ERP waveforms compared.
Acknowledgments
Corentin Jacques and Bruno Rossion are supported by the Belgian National Fund for Scientific Research (Fonds de la Recherche Scientifique—FNRS). This work was supported by an ARC (Actions de Recherche Concertées) grant 07/12-007, Communauté Française de Belgique. 
Commercial relationships: none. 
Corresponding author: Corentin Jacques. 
Email: corentin.g.jacques@uclouvain.be. 
Address: Université Catholique de Louvain (UCL), Faculté de Psychologie et des Sciences de l'Education (PSP), 10, Place du Cardinal Mercier, 1348 Louvain-la-Neuve, Belgium. 
References
Barton, J. J. Zhao, J. Keenan, J. P. (2003). Perception of global facial geometry in the inversion effect and prosopagnosia. Neuropsychologia, 41, 1703–1711. [PubMed] [CrossRef] [PubMed]
Bentin, S. McCarthy, G. (1994). The effects of immediate stimulus repetition on reaction time and event-related potentials in tasks of different complexity. Journal of Experimental Psychology: Learning Memory and Cognition, 20, 130–149. [CrossRef]
Bentin, S. McCarthy, G. Perez, E. Puce, A. Allison, T. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565. [CrossRef] [PubMed]
Blair, R. C. Karniski, W. (1993). An alternative method for significance testing of waveform difference potentials. Psychophysiology, 30, 518–524. [PubMed] [CrossRef] [PubMed]
Caharel, S. d'Arripe, O. Ramon, M. Jacques, C. Rossion, B. (2009). Early adaptation to repeated unfamiliar faces across viewpoint changes in the right hemisphere: Evidence from the N170 ERP component. Neuropsychologia, 47, 639–643. [PubMed] [CrossRef] [PubMed]
Campanella, S. Hanoteau, C. Depy, D. Rossion, B. Bruyer, R. Crommelinck, M. (2000). Right N170 modulation in a face discrimination task: An account for categorical perception of familiar faces. Psychophysiology, 37, 796–806. [PubMed] [CrossRef] [PubMed]
de Heering, A. Rossion, B. Turati, C. Simion, F. (2008). Holistic face processing can be independent of gaze behavior: Evidence from the face composite effect. Journal of Neuropsychology, 2, 183–195. [CrossRef] [PubMed]
Eimer, M. (2000). The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport, 11, 2319–2324. [PubMed] [CrossRef] [PubMed]
Ewbank, M. P. Smith, W. A. P. Hancock, E. R. Andrews, T. J. (2008). The M170 reflects a viewpoint-dependent representation for both familiar and unfamiliar faces. Cerebral Cortex, 18, 364–370. [PubMed] [CrossRef] [PubMed]
Farah, M. J. Wilson, K. D. Drain, M. Tanaka, J. N. (1998). What is “special” about face perception? Psychological Review, 105, 482–498. [PubMed] [CrossRef] [PubMed]
Flavell, J. H. Draguns, J. (1957). A microgenetic approach to perception and thought. Psychological Bulletin, 54, 197–217. [PubMed] [CrossRef] [PubMed]
Foxe, J. J. Schroeder, C. E. (2005). The case for feedforward multisensory convergence during early cortical processing. Neuroreport, 16, 419–423. [PubMed] [CrossRef] [PubMed]
Galton, F. (1883). Inquiries into human faculty and its development. London: Macmillan.
Gauthier, I. Tarr, M. J. Moylan, J. Skudlarski, P. Gore, J. C. Anderson, A. W. (2000). The fusiform “face area” is part of a network that processes faces at the individual level. Journal of Cognitive Neuroscience, 12, 495–504. [PubMed] [CrossRef] [PubMed]
George, N. Jemel, B. Fiori, N. Chaby, L. Renault, B. (2005). Electrophysiological correlates of facial decision: Insights from upright and upside-down Mooney-face perception. Brain Research: Cognitive Brain Research, 24, 663–673. [PubMed] [CrossRef] [PubMed]
Goffaux, V. Rossion, B. (2006). Faces are “spatial”—Holistic face perception is supported by low spatial frequencies. Journal of Experimental Psychology: Human Perception and Performance, 32, 1023–1039. [PubMed] [CrossRef] [PubMed]
Halgren, E. Raij, T. Marinkovic, K. Jousmäki, V. Hari, R. (2000). Cognitive response profile of the human fusiform face area as determined by MEG. Cerebral Cortex, 10, 69–81. [PubMed] [Article] [CrossRef] [PubMed]
Harel, A. Ullman, S. Epshtein, B. Bentin, S. (2007). Mutual information of image fragments predicts categorization in humans: Electrophysiological and behavioral evidence. Vision Research, 47, 2010–2020. [PubMed] [CrossRef] [PubMed]
Harris, A. Aguirre, G. K. (2008). The representation of parts and wholes in face-selective cortex. Journal of Cognitive Neuroscience, 20, 863–878. [PubMed] [CrossRef] [PubMed]
Harris, A. Nakayama, K. (2008). Rapid adaptation of the M170 response: Importance of face parts. Cerebral Cortex, 18, 467–476. [PubMed] [CrossRef] [PubMed]
Haxby, J. V. Hoffman, E. A. Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233. [PubMed] [CrossRef] [PubMed]
Heisz, J. J. Watter, S. Shedden, J. M. (2006). Progressive N170 habituation to unattended repeated faces. Vision Research, 46, 47–56. [PubMed] [CrossRef] [PubMed]
Herrmann, M. J. Ehlis, A. C. Ellgring, H. Fallgatter, A. J. (2005). Early stages (P100) of face perception in humans as measured with event-related potentials (ERPs). Journal of Neural Transmission, 112, 1073–1081. [PubMed] [CrossRef] [PubMed]
Hillger, L. A. Koenig, O. (1991). Separable mechanisms in face processing: Evidence from hemispheric specialization. Journal of Cognitive Neuroscience, 3, 42–58. [CrossRef] [PubMed]
Hochstein, S. Ahissar, M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36, 791–804. [PubMed] [CrossRef] [PubMed]
Hulten, P. (1987). The Arcimboldo effect: Transformations of the face from the 16th to the 20th century. New York: Abbeville Press.
Itier, R. J. Alain, C. Sedore, K. McIntosh, A. R. (2007). Early face processing specificity: It's in the eyes! Journal of Cognitive Neuroscience, 19, 1815–1826. [PubMed] [CrossRef] [PubMed]
Itier, R. J. Taylor, M. J. (2002). Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: A repetition study using ERPs. Neuroimage, 15, 353–372. [PubMed] [CrossRef] [PubMed]
Jacques, C. d'Arripe, O. Rossion, B. (2007). The time course of the inversion effect during individual face discrimination. Journal of Vision, 7(8):3, 1–9, http://journalofvision.org/7/8/3/, doi:10.1167/7.8.3. [PubMed] [Article] [CrossRef] [PubMed]
Jacques, C. Rossion, B. (2006). The speed of individual face categorization. Psychological Science, 17, 485–492. [PubMed] [CrossRef] [PubMed]
Jeffreys, D. A. (1996). Evoked potential studies of face and object processing. Visual Cognition, 3, 1–38. [CrossRef]
Jemel, B. Pisani, M. Calabria, M. Crommelinck, M. Bruyer, R. (2003). Is the N170 for faces cognitively penetrable? Evidence from repetition priming of Mooney faces of familiar and unfamiliar persons. Cognitive Brain Research, 17, 431–446. [PubMed] [CrossRef] [PubMed]
Jiang, X. Rosen, E. Zeffiro, T. VanMeter, J. Blanz, V. Riesenhuber, M. (2006). Evaluation of a shape-based model of human face discrimination using fMRI and behavioral techniques. Neuron, 50, 159–172. [PubMed] [CrossRef] [PubMed]
Kanwisher, N. McDermott, J. Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311. [PubMed] [Article] [PubMed]
Kiani, R. Esteky, H. Tanaka, K. (2005). Differences in onset latency of macaque inferotemporal neural responses to primate and non-primate faces. Journal of Neurophysiology, 94, 1587–1596. [PubMed] [CrossRef] [PubMed]
Kovacs, G. Zimmer, M. Banko, E. Harza, I. Antal, A. Vidnyanszky, Z. (2006). Electrophysiological correlates of visual adaptation to faces and body parts in humans. Cerebral Cortex, 16, 742–753. [PubMed] [CrossRef] [PubMed]
Lamme, V. A. F. Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23, 571–579. [PubMed] [CrossRef] [PubMed]
Latinus, M. Taylor, M. J. (2005). Holistic processing of faces: Learning effects with Mooney faces. Journal of Cognitive Neuroscience, 17, 1316–1327. [PubMed] [CrossRef] [PubMed]
Le Grand, R. Mondloch, C. J. Maurer, D. Brent, H. P. (2004). Impairment in holistic face processing following early visual deprivation. Psychological Science, 15, 762–768. [PubMed] [CrossRef] [PubMed]
Lehmann, D. Skrandies, W. (1980). Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroencephalography and Clinical Neurophysiology, 48, 609–621. [PubMed] [CrossRef] [PubMed]
Letourneau, S. M. Mitchell, T. V. (2008). Behavioral and ERP measures of holistic face processing in a composite task. Brain and Cognition, 67, 234–245. [PubMed] [CrossRef] [PubMed]
Loftus, G. R. Masson, M. E. J. (1994). Using confidence intervals in within-subject designs. Psychonomic Bulletin & Review, 1, 476–490.
Maurer, D. Le Grand, R. Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260. [PubMed] [CrossRef] [PubMed]
Mooney, C. M. (1957). Age in the development of closure ability in children. Canadian Journal of Psychology, 11, 219–226. [PubMed] [CrossRef] [PubMed]
Moore, C. Cavanagh, P. (1998). Recovery of 3D volume from 2-tone images of novel objects. Cognition, 67, 45–71. [PubMed] [CrossRef] [PubMed]
Mumford, D. (1992). On the computational architecture of the neocortex: II. The role of cortico-cortical loops. Biological Cybernetics, 66, 241–251. [PubMed] [CrossRef] [PubMed]
Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353–383. [CrossRef]
Nichols, T. E. Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: A primer with examples. Human Brain Mapping, 15, 1–25. [PubMed] [CrossRef] [PubMed]
Polich, J. (2007). Updating P300: An integrative theory of P3a and P3b. Clinical Neurophysiology, 118, 2128–2148. [PubMed] [CrossRef] [PubMed]
Richler, J. J. Gauthier, I. Wenger, M. J. Palmeri, T. J. (2008). Holistic processing of faces: Perceptual and decisional components. Journal of Experimental Psychology: Learning Memory and Cognition, 34, 328–342. [PubMed] [CrossRef]
Rossion, B. (2008a). Constraining the cortical face network by neuroimaging studies of acquired prosopagnosia. Neuroimage, 40, 423–426. [PubMed] [CrossRef]
Rossion, B. (2008b). Picture-plane inversion leads to qualitative changes of face perception. Acta Psychologica, 128, 274–289. [PubMed] [CrossRef]
Rossion, B. Boremanse, A. (2008). Nonlinear relationship between holistic processing of individual faces and picture-plane rotation: Evidence from the face composite illusion. Journal of Vision, 8(4):3, 1–13, http://journalofvision.org/8/4/3/, doi:10.1167/8.4.3. [PubMed] [Article] [CrossRef] [PubMed]
Rossion, B. Delvenne, J. F. Debatisse, D. Goffaux, V. Bruyer, R. Crommelinck, M. (1999). Spatio-temporal localization of the face inversion effect: An event-related potentials study. Biological Psychology, 50, 173–189. [PubMed] [CrossRef] [PubMed]
Rossion, B. Dricot, L. Devolder, A. Bodart, J. M. Crommelinck, M. De Gelder, B. (2000). Hemispheric asymmetries for whole-based and part-based face processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 12, 793–802. [PubMed] [CrossRef] [PubMed]
Rossion, B. Jacques, C. (2008). Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170. Neuroimage, 39, 1959–1979. [PubMed] [CrossRef] [PubMed]
Rousselet, G. A. Husk, J. S. Bennett, P. J. Sekuler, A. B. (2008). Time course and robustness of ERP object and face differences. Journal of Vision, 8(12):3, 1–18, http://journalofvision.org/8/12/3/, doi:10.1167/8.12.3. [PubMed] [Article] [CrossRef] [PubMed]
Rousselet, G. A. Mace, M. J. M. Fabre-Thorpe, M. (2003). Is it an animal? Is it a human face? Fast processing in upright and inverted natural scenes. Journal of Vision, 3(6):5, 440–455, http://journalofvision.org/3/6/5/, doi:10.1167/3.6.5. [PubMed] [Article] [CrossRef]
Schiltz, C. Rossion, B. (2006). Faces are represented holistically in the human occipito-temporal cortex. Neuroimage, 32, 1385–1394. [PubMed] [CrossRef] [PubMed]
Schweinberger, S. R. Huddy, V. Burton, A. M. (2004). N250r: A face-selective brain response to stimulus repetitions. Neuroreport, 15, 1501–1505. [PubMed] [CrossRef] [PubMed]
Schyns, P. G. Jentzsch, I. Johnson, M. Schweinberger, S. R. Gosselin, F. (2003). A principled method for determining the functionality of brain responses. Neuroreport, 14, 1665–1669. [PubMed] [CrossRef] [PubMed]
Sergent, J. (1984). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75, 221–242. [PubMed] [CrossRef] [PubMed]
Sergent, J. (1986). Microgenesis of face perception. In H. D. Ellis, M. A. Jeeves, F. Newcombe, & A. Young (Eds.), Aspects of face processing (pp. 17–33). Dordrecht: Martinus Nijhoff.
Sergent, J. Signoret, J. L. (1992). Functional and anatomical decomposition of face processing: Evidence from prosopagnosia and PET study of normal subjects. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 335, 55–62. [PubMed] [CrossRef]
Sugase, Y. Yamane, S. Ueno, S. Kawano, K. (1999). Global and fine information coded by single neurons in the temporal visual cortex. Nature, 400, 869–873. [PubMed] [CrossRef] [PubMed]
Tanaka, J. W. Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 46, 225–245. [PubMed] [CrossRef]
Tanskanen, T. Nasanen, R. Montez, T. Paallysaho, J. Hari, R. (2005). Face recognition and cortical responses show similar sensitivity to noise spatial frequency. Cerebral Cortex, 15, 526–534. [PubMed] [CrossRef] [PubMed]
Thorpe, S. Fize, D. Marlot, C. (1996). Speed of processing in the human visual system. Nature, 381, 520–522. [PubMed] [CrossRef] [PubMed]
Ullman, S. (2007). Object recognition and segmentation by a fragment-based hierarchy. Trends in Cognitive Sciences, 11, 58–64. [PubMed] [CrossRef] [PubMed]
Wallis, G. Siebeck, U. E. Swann, K. Blanz, V. Bulthoff, H. H. (2008). The prototype effect revisited: Evidence for an abstract feature model of face recognition. Journal of Vision, 8(3):20, 1–15, http://journalofvision.org/8/3/20/, doi:10.1167/8.3.20. [PubMed] [Article] [CrossRef] [PubMed]
Young, A. W. Hellawell, D. Hay, D. C. (1987). Configurational information in face perception. Perception, 16, 747–759. [PubMed] [CrossRef] [PubMed]
Zion-Golumbic, E. Bentin, S. (2007). Dissociated neural mechanisms for face detection and configural encoding: Evidence from N170 and induced gamma-band oscillation effects. Cerebral Cortex, 17, 1741–1749. [PubMed] [CrossRef] [PubMed]
Figure 1. Stimuli and time-line of the experiment. A. Examples of pairs of composite faces (adapting and test) for the 6 conditions used in the experiment. Note that the top parts of the composite faces in the middle row are perceived as being slightly different in the aligned face format (left), despite the fact that they are identical and that only the bottom parts of the two composite faces differ (= composite face illusion). Notably, this illusion does not occur in the misaligned format (right). B. Time-line of the stimulus sequence in each trial.
Figure 2. Behavioral results for the matching task of composite faces. A. Error rates are shown on the left and response times on the right of the figure. Error bars are standard errors. B. Distribution of correct (full lines) and incorrect (dashed lines) response times in the composite task, separately for the same and bottom-different conditions in the aligned and misaligned face formats. The number of responses in successive 20 ms time-bins is plotted as a function of time from stimulus (test face) onset. Numbers in parentheses represent the mean and median response times, respectively, over all correct trials. For aligned trials, the composite face illusion affected even the very early part of the distribution.
Figure 3. Grand average ERP waveforms elicited by the test face and histograms of the N170 amplitude. A. Grand average ERP waveforms elicited by the test face in the aligned (top row) and misaligned (bottom row) face format at occipito-temporal electrodes PO7 and PO8 (left and right hemispheres, respectively). The ERPs to the same, bottom different and top + bottom different conditions are displayed for each format. B. Histograms of the amplitude of the N170 in the different conditions averaged over 5 electrodes in each hemisphere (see Methods). C. Histograms of the N170 amplitude difference (averaged over the 5 electrodes in each hemisphere) between the bottom different and same conditions and between the top + bottom different and same conditions, as a function of format and hemisphere. Error bars are standard errors of the mean computed after normalizing the data to remove subject variability (Loftus & Masson, 1994).
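The within-subject error bars mentioned in this caption follow the normalization proposed by Loftus and Masson (1994), in which each subject's scores are re-centered on the grand mean before the standard error is computed for each condition. A minimal sketch of that computation, assuming a hypothetical (subjects × conditions) array of N170 amplitudes:

```python
import numpy as np

def within_subject_sem(scores):
    """Standard errors of the mean after removing between-subject variability
    (Loftus & Masson, 1994). scores: array of shape (n_subjects, n_conditions)."""
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1, keepdims=True)
    normalized = scores - subject_means + grand_mean  # center every subject on the grand mean
    n_subjects = scores.shape[0]
    return normalized.std(axis=0, ddof=1) / np.sqrt(n_subjects)

# Illustrative usage: one error bar per condition for the amplitudes plotted in panels B and C.
# sems = within_subject_sem(n170_amplitudes)
```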
Figure 4. Histograms of the amplitude of the P1 in the different conditions averaged over 5 electrodes in each hemisphere (see Methods).
Figure 5. Subtraction waveforms comparing the same and top + bottom different conditions (left) and the same and bottom different conditions (right) in the aligned (top row) and misaligned (bottom row) format for 58 electrodes. Highlighted electrodes are three pairs of occipito-temporal (OT) electrodes in the left (dotted black lines) and right (solid black lines) hemispheres. The location of these three electrodes is shown on the topographic maps (view from above) in each plot. Each map shows the distribution of the ERP difference at 200 ms after stimulus onset. The amplitude scale for the topographic maps is [−2, 2] μV for the left column and [−1, 1] μV for the right column.
Figure 6. Time-by-electrode statistical plots of the significant ERP differences between conditions. Left column: statistical plots of the significant differences (p < 0.01; two-tailed; 5000 permutations) between the same and top + bottom different conditions in the aligned (top row) and misaligned (bottom row) format. Right column: significant differences between the same and bottom different conditions in the aligned (top row) and misaligned (bottom row) format. Only significant differences are color-coded as a function of the amplitude of the difference between the ERP waveforms compared. The 58 electrodes are represented on the y-axis and grouped as a function of their location in frontal (F), central (C) and posterior (P) scalp regions, as well as left hemisphere (L), midline (M) and right hemisphere (R). Topographic maps (view from above the head) of the significant differences at five different time points are displayed next to each plot. Each map is an average of 20 ms of ERP signal, and the black arrows on the lower x-axis of each plot indicate the temporal location of each topographic map. Note that the amplitude scale for the [160–180], [180–200], [200–220] and [260–280] ms maps is [−1.3, 1.3] μV and the scale for the [400–420] ms map is [−2.5, 2.5] μV.