Research Article  |   April 2008
Nonlinear relationship between holistic processing of individual faces and picture-plane rotation: Evidence from the face composite illusion
Bruno Rossion, Adriano Boremanse
Journal of Vision, April 2008, Vol. 8(4), 3. doi: https://doi.org/10.1167/8.4.3
Abstract

It is well known that the integration of facial features into a holistic representation is dramatically disrupted by picture-plane inversion. To investigate the nature of this observation, we tested for the first time the so-called face composite effect at various angles of rotation (0° to 180°, 7 angles). During an individual face matching task, subjects perceived two identical top halves of the same face as being slightly different (increased error rates and RTs) when they were aligned with different bottom parts. This face composite illusion was equally strong for stimuli presented between 0° and 60° of rotation, then fell off dramatically at 90° and remained stable until complete inversion of the stimulus. The non-linear relationship between orientation and holistic processing supports the view that inversion affects face processing qualitatively. Most importantly, it rules out the hypothesis that misoriented faces are perceptually realigned by means of linear rotation mechanisms independent of internal representations derived from experience. Altogether, these observations suggest that a substantial part of the face inversion effect is accounted for by the inability to apply an experience-derived holistic representation to an incoming visual face stimulus that is rotated to the horizontal orientation or beyond.

Introduction
Compared to other classes of objects, recognition of faces is disproportionately affected by inversion in the picture plane (a vertical flip of the stimulus or a 180° rotation). This phenomenon, known as the "face inversion effect" (FIE) (Yin, 1969), is considered one of the strongest pieces of evidence that faces are in some way "special," i.e., that their recognition involves processes that are not, or are less, solicited for the recognition of other classes of visual stimuli. 
An early and still ongoing debate in the face literature is whether inversion affects face processing qualitatively or quantitatively.
A majority of authors follow early proposals (Carey & Diamond, 1977; Diamond & Carey, 1986; Goldstein & Chance, 1980; Yin, 1969) that inversion affects face processing qualitatively. That is, for the same amount of transformation (inversion), certain face processes are affected while others remain unaffected, or less affected (i.e., a non-linear effect). Proponents of this view have suggested that inversion mainly affects the integration of features into a so-called holistic representation (one in which face parts are integrated and interdependent; see e.g., Farah, Tanaka, & Drain, 1995; Sergent, 1984; Tanaka & Farah, 1993). Other authors have suggested that inversion mainly affects the processing of metric distances between features ("configural information," e.g., mouth–nose distance, inter-ocular distance) (Diamond & Carey, 1986; Leder & Bruce, 1998; Rhodes, 1988). Irrespective of this conceptual distinction, many studies have indeed shown that inversion affects the processing of the spatial relationships between features (both their integration into a holistic representation and their metric distances) more than the processing of the local features themselves (for empirical evidence, see e.g., Bartlett & Searcy, 1993; Barton, Keenan, & Bass, 2001; Collishaw & Hole, 2000; Farah et al., 1995; Freire, Lee, & Symons, 2000; Leder & Bruce, 1998, 2000; Leder, Candrian, Huber, & Bruce, 2001; Le Grand, Mondloch, Maurer, & Brent, 2001; Rhodes, Brake, & Atkinson, 1993; Sergent, 1984; Tanaka & Farah, 1993; Thompson, 1980; Young, Hellawell, & Hay, 1987), supporting the qualitative view of the FIE. 
In contrast to this qualitative view, it has been proposed that inversion affects face processing quantitatively, i.e., that it affects all kinds of face processes to the same extent. For instance, recent studies reported equal costs of inversion for the discrimination of the local features of a face and for the discrimination of the metric distances between features (Riesenhuber, Jarudi, Gilad, & Sinha, 2004; Yovel & Kanwisher, 2004). However, these peculiar results are not due to the equalization of performance for upright faces or to the randomization of conditions, but may rather be due to a lack of independent manipulation of features and their distances (Riesenhuber et al., 2004) and to the use of few repeated stimuli (Yovel & Kanwisher, 2004) (see Goffaux & Rossion, 2007). In the same vein, a recent response classification experiment with faces embedded in noise showed that the very same features, mostly the eyes and eyebrows, are used to discriminate upright and inverted faces (Sekuler, Gaspar, Gold, & Bennett, 2004). This is in line with eye movement recordings showing that the same features, mainly the eyes, are fixated for upright and inverted faces (Williams & Henderson, 2007). However, these studies do not truly allow one to measure the integration of features, and they may force subjects to rely only on local diagnostic features of a limited set of faces by revealing only the local high-contrast areas (eyes and eyebrows) that survive a large amount of noise (Sekuler et al., 2004). Hence, they do not provide solid evidence against the qualitative view of face inversion. 
An older and still unresolved argument against the qualitative view of face inversion was advanced originally by Valentine (1988), who reasoned that if inversion affected face processing qualitatively, there should be a sudden shift in the function relating face processing performance to the angle of rotation of the face, from 0° to 180°. That is, the variable measured during a face processing task (e.g., accuracy, correct response time) would show a non-linear relationship with the angle of rotation, the departure from linearity occurring at the angle at which the process of interest (i.e., feature integration) could no longer be applied to the face. Valentine and Bruce (1988) tested this hypothesis in a number of experiments using faces presented at 45° increments (45°, 90°, 135°, and 180°). They found a strictly linear relationship between response time and the degree of disorientation, which they interpreted as supporting the quantitative view of the FIE. Based on these observations, these authors proposed an account of the face inversion effect in terms of mental rotation: The visual input would have to be normalized (i.e., mentally rotated in the case of orientation manipulation) to produce perceptual outputs that could then be matched to representations stored in memory (Valentine & Bruce, 1988, see also Rock, 1973). Given that all visual objects suffer from picture-plane and depth rotation costs during their recognition (e.g., Jolicoeur, Regehr, Smith, & Smith, 1985, 1990; Lawson, 1999; Tarr & Pinker, 1989), such normalization processes were postulated as general mechanisms for object recognition (e.g., Jolicoeur, 1990; Tarr, 1995). Hence, according to Valentine (1988), the face inversion effect would not provide any evidence for a unique process in face recognition. In this framework, larger costs of inversion for faces than for objects may arise because faces are particularly homogeneous and complex stimuli made of multiple features, whose relationships may be significantly distorted when they are mentally rotated (Collishaw & Hole, 2002; Rock, 1973; Schwaninger & Mast, 2005; Valentine, 1988). 
However, the results from other studies manipulating face rotation across multiple angles have been rather mixed. Some studies indeed reported a strictly linear relationship between the measured variable of interest and the angle of rotation (Bruyer, Galvez, & Prairial, 1993; Collishaw & Hole, 2002; Schwaninger & Mast, 2005; Sjoberg & Windes, 1992; Valentine & Bruce, 1988), but others reported departures from linearity around the horizontal plane (90°; Jacques & Rossion, 2007; Lewis, 2001; McKone, 2004; McKone, Martini, & Nakayama, 2001; Murray, Yong, & Rhodes, 2000; Stürzel & Spillmann, 2002). One reason for this disparity among studies may be that some experiments used subjective judgment tasks (McKone, 2004; Murray et al., 2000; Sjoberg & Windes, 1992; Stürzel & Spillmann, 2002) or tasks that did not tap into facial identity processing (e.g., Bruyer et al., 1993), rather than asking participants to discriminate individual faces, the task that is most sensitive to the inversion effect (Jacques & Rossion, 2007; Rossion & Gauthier, 2002). Moreover, most studies report only performance or judgment measures without considering response times (e.g., Lewis, 2001; McKone, 2004; McKone et al., 2001; Murray et al., 2000; Stürzel & Spillmann, 2002), or only response times (Valentine & Bruce, 1988), preventing an assessment of possible trade-offs between the two variables. Finally and most importantly, a major reason for the discrepancies observed among these studies, noted by several authors (Collishaw & Hole, 2002; McKone et al., 2001), may be the lack of isolation of the process of interest. That is, in most of the studies cited above, the function relating orientation and performance may just be an average slope representing several processes at play (e.g., extraction of shape and surface information, featural and holistic processing, etc.). If one aims at testing whether a given process (i.e., feature integration) is affected gradually or suddenly with increasing angles of rotation of a stimulus, this process of interest must be tested in isolation. 
The general goal of the present study was to clarify this issue, not only to provide further support to the qualitative view of the FIE but also to shed light on the nature of holistic face processing. Specifically, we presented face photographs at multiple angles of rotation during an individual face discrimination task, measuring both performance and correct RTs, to characterize how inversion affects the holistic processing of an individual face stimulus. The term "holistic" here does not refer to a particular kind of information that can be measured and manipulated on the face stimulus (e.g., a metric distance between features) but rather to a process and/or a representation of this stimulus (Sergent, 1984; Tanaka & Farah, 1993). A simple and widely accepted definition of holistic face processing at this stage would be the following: Rather than being processed independently from each other, facial features are integrated into an individual perceptual representation of the whole face. Consequently, these features influence each other during the processing of the face stimulus, even when only a subset of them is presented. A number of classical behavioral experiments have provided evidence for this interdependence of features when processing individual faces, i.e., for holistic face processing (e.g., Sergent, 1984; Tanaka & Farah, 1993; Tanaka & Sengco, 1997; Thompson, 1980), but the most compelling evidence certainly comes from the so-called face composite effect (Young et al., 1987). This effect was first described (Young et al., 1987) as a slowing down in naming the top half of a familiar face (cut below the eyes) when it is aligned with the bottom part of another face, relative to when the same top and bottom parts are offset laterally (i.e., misaligned). More recently, it has been shown with unfamiliar faces in individual discrimination tasks: Two identical top halves of a face are treated as being different if they are aligned with different bottom parts, even when the bottom parts are irrelevant and not attended to (e.g., de Heering, Rossion, Turati, & Simion, 2008; Endo et al., 1989; Goffaux & Rossion, 2006; Hole, 1994; Hole, George, & Dunsmore, 1999; Le Grand, Mondloch, Maurer, & Brent, 2004; Michel, Rossion, Han, Chung, & Caldara, 2006; Robbins & McKone, 2007). It is a visual illusion (see Figure 1) and a particularly nice demonstration that facial features (here the two halves of the face) cannot be processed in isolation, i.e., that they interact with each other during the processing of the face. 
Figure 1. The face composite illusion. All top halves (above the white line) are strictly identical, but when they are aligned with distinct bottom parts (top row), they appear slightly different. This illusion reflects the perception of the face stimulus as an integrated whole. When the two halves of the face are misaligned (bottom row), the illusion vanishes.
When faces are inverted, the composite effect either disappears or is attenuated (e.g., Goffaux & Rossion, 2006; Young et al., 1987; see Figure 2). However, it has not yet been tested whether holistic processing is disrupted gradually by picture-plane rotation of the face stimulus or whether there is a sudden shift in the function relating the effect to the angle of rotation of the face. 
Figure 2. (A) Experimental design. Subjects had to concentrate on the top part of the two faces presented sequentially and decide as accurately and as fast as possible whether they were identical or not. Top and bottom parts were either aligned for the two faces of a pair or misaligned. (B) Illustration of the illusion at different angles of rotation. Only angles ranging from 180° to 0° were used in the experiment (see Procedure section).
Besides clarifying the debate about the qualitative/quantitative view of the FIE, characterizing this function may have important theoretical implications. First, describing the pattern of disruption of holistic processing with angles of stimulus rotation for normal viewers would provide a way to investigate the relationship between holistic processing and the processing of the metric distances between facial features (e.g., interocular distance), often referred to as "configural information" (Rhodes, Brennan, & Carey, 1987; for a conceptual distinction between the two notions, see Maurer, Le Grand, & Mondloch, 2002; Rossion & Gauthier, 2002). Indeed, a recent study found a massive drop in performance and an increase in RTs between 60° and 90° in an individual discrimination task when metric distances between features were manipulated (Schwaninger & Mast, 2005). Observing a similar or distinct pattern in a paradigm that taps into holistic processing of individual faces, with the same variables, would be important for our understanding of the relationship between these two notions, which are often confounded in the literature. Second, and perhaps most importantly, observing a non-linear function between orientation and the amount of holistic face processing would rule out the view that normalization processes such as mental rotation account for the face inversion effect (Collishaw & Hole, 2002; Schwaninger & Mast, 2005; Valentine & Bruce, 1988) and would be informative about the nature of holistic face processing. Indeed, if holistic face processing is specifically disrupted at an angle that is relatively close to the upright orientation, this would indicate that when faces are presented at orientations that are not experienced in our daily life, they are not perceived holistically. This would strongly suggest a role of visual experience (i.e., internal representations) in holistic processing of individual faces, a function that takes place in high-level visual areas (Schiltz & Rossion, 2006). 
In the experiment reported here, participants were presented with composite face stimuli during a delayed individual matching task on the top parts of faces. Top and bottom segments of the face could be aligned or misaligned, and the stimuli were presented at 7 angles of rotation, from 0° to 180°. We expected to observe a large face composite effect at 0°, with subjects being slower and/or making more mistakes in the aligned than in the misaligned condition, and a reduced effect at 180° (Goffaux & Rossion, 2006). Most importantly, in line with the qualitative view of the FIE, we expected a non-linear relationship between the angle of rotation and the size of the face composite effect. 
Methods
Subjects
Eighteen participants (17 females, mean age = 20.4 years, range: 18–24 years) took part in the experiment in exchange for credits. They all had normal or corrected-to-normal vision. 
Stimuli
We used 30 grayscale full-front pictures of unfamiliar faces (neutral expression, 17 females, 13 males, no facial hair, no glasses, and no external features; see Figures 1 and 2). Pictures were approximately 5.5–6.5 cm wide and 8.5–9.5 cm high (at a distance of 57 cm, the same values in degrees of visual angle). Using Adobe Photoshop, we separated the top and bottom parts of the original faces by inserting a small gap (1.8 mm) at the tip of the nose (Figures 1 and 2). Top and bottom halves belonged to the same original face (Top01Bottom01, T02B02, etc.). A misaligned version of each face was created by shifting the bottom half to the right, so that the middle of the nose was vertically aligned with the extreme right side of the bottom part (see Figure 1). These original faces, aligned and misaligned, were always used as the first stimulus presented in the delayed matching task (see below). The top part of each face was then paired with the bottom part of another face to create 30 combinations (Top01Bottom02, T02B03, etc.). Trials requiring a "same" response on the top half always had a different bottom half (e.g., T01B01 followed by T01B02). We selected 27 of the 30 possible combinations. Twenty-seven aligned and 27 misaligned trials were used for each orientation, making 378 trials in total (27 × 2 × 7). In addition, 11 trials per orientation required a "different" response. They were made by combining 11 pairs of stimuli (e.g., T01B01 followed by T02B02). Thus, there were 154 trials (11 × 2 × 7) requiring a "different" response and 532 trials in total. As in most studies (Goffaux & Rossion, 2006; Le Grand et al., 2004; Michel et al., 2006), we used the top part as the target because the composite effect is stronger for the top part than for the bottom part (Young et al., 1987). The face composite effect is revealed by the difference in performance between aligned and misaligned faces for "same" trials, not "different" trials (Goffaux & Rossion, 2006; Le Grand et al., 2004; Michel et al., 2006; Robbins & McKone, 2007). That is, identical top parts are perceived erroneously as being "different" when associated with different bottom parts, but different top parts are not perceived as more similar when associated with identical bottom parts. Accordingly, to collect a large amount of data directly relevant to the face composite effect while limiting the total duration of the experiment, only 30% of trials required a "different" response and 70% required a "same" response, as done previously (Michel et al., 2006). 
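As a quick check of the design arithmetic described above, the following sketch (Python; variable names are our own and purely illustrative) reproduces the trial counts from the numbers given in the text.

```python
# Trial counts implied by the design described above (numbers taken from the text).
angles = [0, 30, 60, 90, 120, 150, 180]    # 7 picture-plane orientations
alignments = 2                             # aligned vs. misaligned

same_pairs_per_cell = 27                   # "same" top-half combinations retained
diff_pairs_per_cell = 11                   # pairs requiring a "different" response

same_trials = same_pairs_per_cell * alignments * len(angles)   # 27 * 2 * 7 = 378
diff_trials = diff_pairs_per_cell * alignments * len(angles)   # 11 * 2 * 7 = 154
total_trials = same_trials + diff_trials                       # 532

print(same_trials, diff_trials, total_trials)                  # 378 154 532
print(round(diff_trials / total_trials, 2))                    # ~0.29, i.e., ~30% "different" trials
```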
Procedure
All subjects were tested individually and under the same conditions. Each trial began with the presentation of a fixation cross at the centre of the computer screen for 300 ms, followed by a 200-ms blank screen and a target face for 400 ms. After a 300-ms blank screen, a second stimulus was presented until the subject gave a response (Figure 2). The first stimulus was a composite face, with the two parts aligned or misaligned. The second stimulus was aligned or misaligned, according to the first stimulus (aligned to aligned, misaligned to misaligned). The identity of the second stimulus could either be totally different from the first composite face (different top and bottom parts) or differ by its bottom part only (same top part). Participants were instructed to ignore the lower parts and to decide whether the top part of the second stimulus was the same as ("same" trials) or different from ("different" trials) the top part of the target. After six practice trials, subjects went through five blocks of 133 trials. Subjects had been instructed beforehand to respond as quickly as possible while avoiding unnecessary mistakes. In each trial, the faces were presented at one of 7 possible orientations, from 0° to 180°. The stimuli were rotated counterclockwise (0°, 330°, 300°, 270°, 240°, 210°, and 180°), so that when rotated the eyes were presented in the left visual field, which is most sensitive for face processing (e.g., Hillger & Koenig, 1991). Thus, any decrease of performance with alignment between 0° and 180° could not be attributed to particularly diagnostic information (eyes/eyebrows) being presented in the right visual field. The second stimulus had the same alignment and the same orientation as the first one but was slightly shifted in position to prevent subjects from using local positions on the screen to discriminate the stimuli. Stimuli were presented, and accuracy and response times were collected, using E-Prime 1.1. 
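For illustration only (the experiment was actually run in E-Prime 1.1), the sketch below encodes the trial timeline and the mapping between the nominal rotation angles and the counterclockwise screen rotations described above; the names and data structures are our own.

```python
# Illustrative encoding of one trial's event sequence (durations in ms) and of the
# mapping from nominal rotation to the counterclockwise screen rotation actually used,
# which keeps the eyes in the left visual field for rotated faces.
TRIAL_EVENTS = [
    ("fixation_cross", 300),
    ("blank", 200),
    ("first_composite_face", 400),
    ("blank", 300),
    ("second_composite_face", None),   # displayed until the subject responds
]

NOMINAL_TO_CCW_ROTATION = {0: 0, 30: 330, 60: 300, 90: 270, 120: 240, 150: 210, 180: 180}

for nominal, ccw in NOMINAL_TO_CCW_ROTATION.items():
    # e.g., a nominal 60° trial is drawn as a 300° counterclockwise rotation on screen
    assert (360 - ccw) % 360 == nominal
```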
Results
Accuracy
Even though there was a smaller proportion of trials requiring a "different" than a "same" response (30%/70%), these trials gave rise to a consistent pattern of results (Tables 1a and 1b). In the interest of completeness, we included these "different" trials in a global analysis of variance (ANOVA) before splitting the analysis into "different" and "same" trials. 
Table 1a. Accuracy rates (proportion correct, ±SE) for the different conditions of the experiment.

Angle                                     0°            30°           60°           90°           120°          150°          180°
Same trials, aligned                      0.67 ± 0.03   0.66 ± 0.03   0.69 ± 0.03   0.73 ± 0.02   0.77 ± 0.02   0.79 ± 0.02   0.76 ± 0.03
Same trials, misaligned                   0.89 ± 0.03   0.91 ± 0.03   0.91 ± 0.03   0.87 ± 0.03   0.87 ± 0.03   0.90 ± 0.03   0.92 ± 0.03
Composite effect (misaligned - aligned)   0.21          0.25          0.21          0.14          0.10          0.12          0.16
Different trials, aligned                 0.94 ± 0.02   0.95 ± 0.02   0.89 ± 0.04   0.88 ± 0.04   0.82 ± 0.04   0.83 ± 0.04   0.75 ± 0.04
Different trials, misaligned              0.89 ± 0.04   0.90 ± 0.03   0.88 ± 0.04   0.86 ± 0.04   0.83 ± 0.04   0.81 ± 0.05   0.74 ± 0.05
Table 1b. Correct response times (ms, ±SE) for the different conditions of the experiment.

Angle                                     0°         30°        60°        90°        120°       150°       180°
Same trials, aligned                      681 ± 37   676 ± 38   642 ± 39   611 ± 37   589 ± 33   573 ± 30   569 ± 28
Same trials, misaligned                   558 ± 27   565 ± 30   555 ± 25   583 ± 36   546 ± 26   526 ± 25   540 ± 23
Composite effect (aligned - misaligned)   123        110        87         28         42         47         29
Different trials, aligned                 592 ± 27   634 ± 32   622 ± 25   619 ± 27   634 ± 23   628 ± 23   650 ± 32
Different trials, misaligned              588 ± 26   626 ± 27   639 ± 27   623 ± 25   636 ± 23   623 ± 24   652 ± 35
Accuracy rates on all trials were thus submitted to a 2 × 2 × 7 analysis of variance (ANOVA) with identity (same or different), alignment (aligned vs. misaligned), and orientation (7 angles) as within-subjects factors. There was a main effect of alignment (F(1,17) = 28.9, p < 0.0001), misaligned trials giving rise to better performance than aligned trials, and a main effect of orientation (F(6,102) = 3.19, p = 0.0066), due to lower performance with increasing angles of rotation (Figure 3). There was no main effect of identity (F(1,17) = 1.4, p = 0.24). There were significant interactions between identity and both alignment (F(1,17) = 74.8, p < 0.0001) and orientation (F(6,102) = 12.2, p < 0.0001). Moreover, the triple interaction between identity, alignment, and orientation was highly significant (F(6,102) = 4.29, p = 0.0007). 
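For illustration only, a within-subjects ANOVA of this kind can be run in Python with statsmodels; the authors do not report their analysis software, and the data frame, column names, and input file below are hypothetical placeholders.

```python
# Minimal sketch of a 2 (identity) x 2 (alignment) x 7 (orientation) repeated-measures
# ANOVA on accuracy, assuming a long-format table with one row per trial.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical file with columns: subject, identity, alignment, orientation, correct (0/1)
trials = pd.read_csv("composite_trials.csv")

anova = AnovaRM(
    data=trials,
    depvar="correct",
    subject="subject",
    within=["identity", "alignment", "orientation"],
    aggregate_func="mean",   # average trials within each subject x condition cell
).fit()
print(anova)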
Figure 3. Results for all the conditions of the experiment. (A) Accuracy rates; (B) correct RTs. For both measures, it is clear that alignment does not affect the trials for which a "different" response is expected: the data show an almost perfect superimposition of the curves across angles of rotation for "different" trials, with a general decrease of performance and increase of RTs with angle of rotation. The interesting observations are made for "same" trials, for which the composite effect (difference between aligned and misaligned conditions) is maximal at orientations from 0° to 60°, then sharply decreases at 90° and remains stable until the 180° orientation. Standard error bars are not included in the graph for the sake of clarity, but SE values are provided in Tables 1a and 1b.
The interaction between identity and alignment confirms that "same" and "different" trials are processed differently (see Figure 3), the composite effect being reflected by the difference in performance for "same" trials between the misaligned and aligned conditions (Goffaux & Rossion, 2006; Le Grand et al., 2004; Michel et al., 2006; Robbins & McKone, 2007). Indeed, when the trials require a "different" response, performance and RTs are virtually identical for the aligned and misaligned conditions (Figure 3). This pattern simply reflects the nature of the illusion: Two identical top parts of a face are perceived as different if they are aligned with distinct bottom parts. However, there would be no reason to expect two different top parts to be perceived as identical if they were associated with identical bottom parts. These data indicate that the bottom part affects the perception of the top part of the face, leading to a response bias (fewer "same" responses) in this paradigm. Thus, two ANOVAs were carried out separately for "same" and "different" trials, the former being of major interest in the present study. 
For "same" trials, there were main effects of alignment (F(1,17) = 108.8, p < 0.0001) and orientation (F(6,102) = 6.91, p < 0.0001) and, critically, a significant interaction between the two factors (F(6,102) = 9.11, p < 0.0001). The effect of alignment (better performance for misaligned) appeared much larger at 0° and the nearest orientations than between 90° and 180° (Figure 3A). 
In contrast, for "different" trials, there was no effect of alignment (i.e., no composite effect; F(1,17) = 1.55, p > 0.22) and no interaction between alignment and orientation (F(6,102) = 0.51, p > 0.79). There was only a significant main effect of orientation (F(6,102) = 8.7, p < 0.0001), reflecting the decrease of performance with increasing angles of rotation (Figure 3A). 
These data show that the composite effect was present only for "same" trials and that it was sensitive to face orientation. To characterize the pattern of the composite effect across angles of rotation of the faces, we ran an analysis on a composite index (aligned − misaligned) at every angle of rotation. The main effect of orientation on this index (F(6,102) = 9.1, p < 0.0001) was decomposed into pairwise post hoc comparisons between adjacent angles (multiple comparisons, Bonferroni corrected by multiplying p-values by the number of tests). Starting from upright faces, there were no significant differences between adjacent angles (0°–30°: p = 0.56; 30°–60°: p = 0.54) until the 60°–90° comparison (p = 0.029), and no further significant differences thereafter (90°–120°: p = 0.60; 120°–150°: p = 0.99; 150°–180°: p = 0.57). Thus, the data could be divided into two groups of orientations, below 90° versus at and beyond 90° (0°–30°–60° vs. 90°–120°–150°–180°: F(1,17) = 21.9, p < 0.0001) (Figure 4). Polynomial contrasts showed a significant linear component (F(1,17) = 19.9, p < 0.0001) but also significant departures from linearity (quadratic component: F(1,17) = 5.24, p = 0.034; cubic component: F(1,17) = 10.95, p = 0.004). 
Figure 4. Amount of composite effect computed for accuracy rates and correct RTs. This display shows that the face composite effect was non-linearly related to the angles of rotation of the face stimulus, showing a massive drop of holistic processing between 60° and 90° of orientation.
In contrast to these observations, for "different" trials there was no effect of orientation at all on the composite index (aligned − misaligned; F(6,102) = 0.52, p = 0.80) and thus no significant polynomial component (ps > 0.17). However, the main effect of orientation on "different" trials described above reflected only a linear component (F(1,17) = 30.1, p < 0.0001; all other components: ps > 0.16). 
Thus, for "same" trials the effect of alignment varied non-linearly with orientation, while for "different" trials there was only a linear main effect of orientation (Figure 3A). 
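To make the logic of this analysis explicit, here is a minimal sketch (not the authors' code; the per-subject composite-index array is a random placeholder) of the adjacent-angle Bonferroni comparisons and the polynomial contrasts on a subjects × angles composite index. The same approach applies to the RT composite index analyzed in the next section.

```python
# Sketch of the composite-index analysis: index = aligned - misaligned per subject and
# angle, adjacent-angle paired t-tests with Bonferroni correction, and orthogonal
# polynomial contrasts. `index` is a hypothetical (18 subjects x 7 angles) array.
import numpy as np
from scipy import stats

angles = [0, 30, 60, 90, 120, 150, 180]
rng = np.random.default_rng(0)
index = rng.normal(-0.15, 0.05, size=(18, len(angles)))  # placeholder composite index data

# Adjacent-angle comparisons, Bonferroni corrected (6 tests).
n_tests = len(angles) - 1
for i in range(n_tests):
    t, p = stats.ttest_rel(index[:, i], index[:, i + 1])
    print(f"{angles[i]}°–{angles[i + 1]}°: corrected p = {min(p * n_tests, 1.0):.3f}")

# Orthogonal polynomial contrast coefficients for 7 equally spaced levels (standard table).
contrasts = {
    "linear":    [-3, -2, -1, 0, 1, 2, 3],
    "quadratic": [ 5,  0, -3, -4, -3, 0, 5],
    "cubic":     [-1,  1,  1,  0, -1, -1, 1],
}
for name, weights in contrasts.items():
    scores = index @ np.array(weights, dtype=float)   # one contrast score per subject
    t, p = stats.ttest_1samp(scores, 0.0)             # test the trend component against zero
    print(f"{name} component: t(17) = {t:.2f}, p = {p:.4f}")
```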
Correct response times (RTs)
Correct RTs on all trials were also submitted to a 2 × 2 × 7 analysis of variance (ANOVA) with identity (same or different), alignment (aligned vs. misaligned), and orientation (7 angles) as within-subjects factors. There were again main effects of alignment (F(1,17) = 14.72, p = 0.0013) and of orientation (F(6,102) = 3.57, p = 0.003). There was also a main effect of identity (F(1,17) = 7.9, p = 0.012). There were significant interactions between identity and both alignment (F(1,17) = 28.802, p = 0.0001) and orientation (F(6,102) = 9.828, p < 0.0001), and between alignment and orientation (F(6,102) = 3.583, p = 0.0029). Moreover, the triple interaction between identity, alignment, and orientation was highly significant (F(6,102) = 2.742, p = 0.0164). 
As for accuracy rates, we then computed two separate ANOVAs for "same" and "different" trials. For "same" trials, there were main effects of alignment (F(1,17) = 14.367, p < 0.0001) and orientation (F(6,102) = 33.585, p < 0.0001) and, critically, a significant interaction between the two factors (F(6,102) = 7.397, p < 0.0001). The effect of alignment (faster responses for misaligned) appeared much larger at 0° and the nearest orientations than between 90° and 180° (Figure 3B). 
In contrast, for "different" trials, there was no effect of alignment (i.e., no composite effect; F(1,17) = 0.0134, p > 0.91) and no interaction between alignment and orientation (F(6,102) = 0.22, p > 0.97). There was only a significant main effect of orientation (F(6,102) = 3.07, p = 0.0084), reflecting the general increase of RTs with increasing angles of rotation (Figure 3B). 
The RT data confirm that the composite effect was present only for "same" trials and that it was sensitive to face orientation. We ran the same analysis on a composite index (aligned − misaligned) computed on correct RTs at every angle of rotation. The main effect of orientation on this index (F(6,102) = 5.96, p < 0.0001) was decomposed into pairwise post hoc comparisons between adjacent angles (multiple comparisons, Bonferroni corrected). Starting from upright faces, there were no significant differences between adjacent angles (0°–30°: p = 0.98; 30°–60°: p = 0.98) until the 60°–90° comparison (p = 0.0161), and no further significant differences thereafter (90°–120°: p = 0.94; 120°–150°: p = 0.99; 150°–180°: p = 0.99). Thus, the results could again be divided into two groups of orientations (0°–30°–60° vs. 90°–120°–150°–180°: F(1,17) = 16.7, p = 0.0007) (Figure 4). Polynomial contrasts showed a significant linear component in the data (F(1,17) = 24.2, p < 0.0001) but also a significant departure from linearity (quadratic component: F(1,17) = 9.6, p = 0.006; cubic component: F(1,17) = 0.13, p = 0.72, ns). In contrast to these observations, for "different" trials there was no effect of orientation at all on the composite index (aligned − misaligned; F(6,102) = 0.22, p = 0.96) and thus no significant polynomial component (ps > 0.5). However, the main effect of orientation on "different" trials described above reflected only a linear component (F(1,17) = 10.5, p = 0.0004; all other components: ps > 0.1). In sum, as for accuracy rates, the effect of alignment on "same" trials varied non-linearly with orientation, while for "different" trials there was only a linear main effect of orientation (Figure 3). 
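As a simple arithmetic check grounded in Table 1b, the group-mean RT composite effect can be recomputed from the tabulated condition means; discrepancies of about 1 ms with the table's composite row reflect rounding of the means.

```python
# Group-mean RT composite effect (aligned - misaligned, ms) recomputed from Table 1b,
# illustrating the sharp drop between 60 and 90 degrees.
angles     = [0, 30, 60, 90, 120, 150, 180]
aligned    = [681, 676, 642, 611, 589, 573, 569]
misaligned = [558, 565, 555, 583, 546, 526, 540]

composite = [a - m for a, m in zip(aligned, misaligned)]
for angle, effect in zip(angles, composite):
    print(f"{angle:>3}°: {effect} ms")
# 0°–60°: roughly 87–123 ms; 90°–180°: roughly 28–47 ms
```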
In summary, the results show that inversion affects the matching of a face half ("same" trials) differently depending on whether it is aligned or misaligned with the other half (Figure 3). This had previously been reported by only a few studies (e.g., Goffaux & Rossion, 2006; Young et al., 1987), because other studies of the face composite effect compared either aligned to misaligned upright conditions or aligned upright to aligned inverted conditions (e.g., Hole, 1994; Hole et al., 1999). Most importantly, we observed a strong non-linear relationship between orientation and the face composite effect, best illustrated in Figure 4. Up to 60° of rotation, the face composite illusion was extremely large in both performance and RTs; it then dropped significantly at 90° and remained stable until 180°. Interestingly, holistic processing was not abolished completely for orientations beyond 90° (as observed previously for 180° faces, see Goffaux & Rossion, 2006), suggesting that facial parts still influence each other even when the stimulus is rotated at and beyond the horizontal plane. 
Discussion
We found almost no effect of inversion for misaligned stimuli requiring a "same" response. In contrast, when the two halves were aligned to form a whole face, there was a positive effect of inversion: it increased performance and decreased RTs (Figure 3). This is clearly because the composite illusion was weaker for stimuli rotated to 90° and beyond. Thus, the very same transformation (picture-plane rotation) during the exact same task (matching the top part of the face) leads to two distinct effects, depending on whether the irrelevant bottom face half is aligned or misaligned with the target top half: this is undoubtedly strong support for a qualitative view of the FIE. Importantly, this observation should not be taken as evidence that inversion does not affect the processing of misaligned faces at all, i.e., of a top face part that can be processed in isolation. As observed for "different" trials, there was an increase of RTs and a decrease of performance with increasing rotation (Figure 3). Thus, quite obviously, inversion does not affect only the processing of the whole face; it also affects the processing of elements such as half of a face, or even single features such as isolated eyes (see e.g., Leder et al., 2001; Nachson & Shechory, 2002; Rakover & Teucher, 1997), albeit to a lesser extent than the whole face (e.g., Bartlett, Searcy, & Abdi, 2003; Farah et al., 1995). 
Most importantly, we found that holistic processing of individual faces is suddenly disrupted between 60° and 90° of rotation. This non-linear relationship between orientation and the size of the composite effect for "same" trials offers a second strong piece of support for the qualitative view of the FIE. This observation is roughly consistent with previous studies that used multiple angles of face rotation by asking viewers to judge the grotesqueness of "thatcherized" faces and reported deviations from linearity at orientations around 90° (e.g., Lewis, 2001; Murray et al., 2000; Sjoberg & Windes, 1992; Stürzel & Spillmann, 2002). Other experiments aimed at testing the effect of orientation on holistic face processing using tasks such as the categorical perception of faces in noise (McKone et al., 2001), the perception of a "mooney" face stimulus (McKone, 2004), or the matching of thatcherized faces (Edmonds & Lewis, 2007). However, while these studies generally reported departures from linearity,1 their data were not as clear-cut as the data reported here, because they found significant differences between adjacent angles of rotation (e.g., between 90° and 120°; see also Jacques & Rossion, 2007). Compared to these previous studies, the present data have several advantages for adequately describing the relationship between the angle of rotation of the face and holistic face processing, which may explain why categorical effects were observed only here. First, we used for the first time the face composite effect, an extremely simple and well-documented measure of holistic face processing in the literature. As illustrated in Figure 1, it is a strong visual illusion: one cannot avoid seeing the top parts of the faces as slightly different; they literally fuse with the bottom parts to form a whole face percept. In contrast, previous attempts to characterize the orientation function of holistic processing used methods that did not test directly the interdependence of facial features. Rather, these studies increased the difficulty of a face task across multiple orientations by adding noise (McKone et al., 2001), blurring the faces (Collishaw & Hole, 2002), or presenting the faces in the periphery (McKone, 2004). These methods all have in common the disruption of the diagnosticity of local details (including out-of-fovea presentation). However, they do not directly measure how much an individual face part influences the perception of another part, as in the face composite effect, which is how holistic processing has been operationalized in the face processing literature (Sergent, 1984; Tanaka & Farah, 1993; Young et al., 1987). Second, the participants of the present experiment were engaged in discriminating faces at the individual level, not in judging the degree of grotesqueness of a face or simply perceiving a face stimulus as a face (Sjoberg & Windes, 1992; Stürzel & Spillmann, 2002; Lewis, 2001; McKone, 2004; Murray et al., 2000, Experiment 2), tasks which may not tap into the encoding of individual face representations. Thus, the question addressed here was truly how inversion affects the holistic processes that contribute to perceiving individual representations of faces, not, for instance, the categorization of a face stimulus as a face. Third, these previous studies all used subjective judgment tasks (e.g., ratings of grotesqueness), which produce large inter-subject variance, rather than measuring performance and speed during individual face processing, as in the present experiment. 
Finally, unlike previous studies, we reported both RTs and error rates. This is interesting because the patterns for accuracy and RTs were slightly different (Figure 4): at 90°, the composite effect is still (non-significantly) larger than at 120° for accuracy, but the opposite is found for RTs, indicating a slight trade-off between the two measures at this angle (and thus an overall equally attenuated effect at these two angles). 
Holistic processing and the perception of metric distances between features
As indicated in the Introduction section, the pattern of results found here may shed light on the relationship between holistic processing and the processing of the metric distances between facial features (e.g., interocular distance), often referred to as "configural information" (Rhodes et al., 1987; for a conceptual distinction between the two notions, see Maurer et al., 2002; Rossion & Gauthier, 2002). Schwaninger and Mast (2005) tested their participants in an individual discrimination task on faces presented at multiple orientations while manipulating distances between facial features. They found a massive drop in performance and an increase in RTs between 60° and 90° when metric distances between features were manipulated (see Figure 2 of Schwaninger & Mast, 2005), similar to our observations. This is interesting because it suggests that the effects observed in these authors' experiment may in fact reflect holistic processing. That is, when using full-face stimuli in an individual discrimination task, perceiving metric distances between features would tax holistic processing much more than perceiving local feature manipulations. This is simply because two or more elements need to be considered together to perceive metric distances, whereas considering one element is sufficient to perceive local feature manipulations. Thus, even though these two notions of configuration (holistic processing vs. perception of metric distances) have been distinguished at the conceptual level (e.g., Maurer et al., 2002; Rossion & Gauthier, 2002), it may in fact be extremely difficult to dissociate them empirically. A simpler account of the detrimental effect of face inversion on the processing of metric relations between features would thus be the following: faces are processed both in terms of their local features and in terms of the integration of these features into holistic representations, the latter process being particularly impaired from 90° of rotation onward. Consequently, the perception of metric distances between features is also particularly impaired beyond 90° of rotation. This leads to the interesting prediction that the perception of vertical metric distances (e.g., the eyes–nose distance), which is most sensitive to face inversion because it encompasses more than two elements across the whole face structure (Goffaux & Rossion, 2007), may show the tuning function with orientation that is most similar to the one observed here for the composite effect. 
On the nature of holistic face perception
What do these observations tell us about the nature of face inversion and its effect on holistic face processing? As indicated in the Introduction, one theoretical account of the face inversion effect postulates that the visual input has to be normalized (i.e., mentally rotated in the case of an orientation manipulation) to produce perceptual outputs that can then be matched to representations stored in memory (Collishaw & Hole, 2002; Rock, 1973; Schwaninger & Mast, 2005; Valentine, 1988). However, for faces, besides the fact that such normalization through mental rotation is unnecessary for recognition (Perrett, Oram, & Ashbridge, 1998), the strong non-linear relation between holistic processing and the angle of rotation observed here speaks directly against such a normalization mechanism. This view is simply unable to explain why there is a sudden drop in performance and increase in RTs at 90°, without any further effect of orientation. 
The functional locus of holistic face processing is at the perceptual level: the face composite effect reflects a visual illusion (illustrated in Figure 1) that takes place in high-level visual areas such as the "fusiform face area" (Schiltz & Rossion, 2006). The present observations thus suggest that when an individual face is presented at a rotation of 90° to 180°, it can be handled by the early visual system similarly to an upright face (only the phase of the visual stimulus differs), but, unlike an upright face, it will not be perceived holistically in high-level visual areas. Interestingly, while a mental rotation account considers perceptual processes to be independent of internal (stored) representations, the visual stimulus being first transformed and then matched to internal representations (e.g., Jolicoeur, 1990; Valentine, 1988), the present observations can only be accounted for by postulating a direct role of internal representations derived from visual experience in the holistic face perception process. Indeed, in everyday life, people have experience with upright faces, tolerating a certain degree of deviation (faces can be tilted up to a certain degree), but there is almost no experience with orientations beyond a certain angle (i.e., no more experience with 90° than with 180° faces). The observation of a drop of holistic perception at about 90° thus suggests a mechanism according to which an internal face representation derived from experience is applied to the incoming visual stimulus (a familiar or unfamiliar face) not following its perceptual analysis but as it is being perceptually processed. This representation is necessarily holistic, in the sense that it is a representation of the whole face structure, and its application is most efficient when the whole face stimulus is presented to the observer.2 According to this view, holistic face processing does not merely reflect the outcome of bottom-up perceptual processes that integrate information progressively and precede the matching to an internal global representation. Rather, perceiving a face holistically appears to rely on an internal, experience-derived representation centered around 0° that helps integrate the features of the incoming stimulus. "Holistic" thus refers both to a process and to a representation of the stimulus: faces are processed holistically because of their holistic representation in the visual system. 
This proposal is derived from several sources, most notably the insightful observations of Francis Galton (1883) on composite portraits; Galton already suggested such a perceptual mechanism (see Sergent, 1984; Young et al., 1987). By superimposing photographs of members of the same family, Galton noticed that he could create a prototypical face with which each member had a "family likeness," independent of the specific shape of single components. According to Galton, the relationship between this prototypical template and any individual face could be perceived, and could determine the recognition of the face, only if the facial features were processed simultaneously, so that the components were integrated to give rise to a global percept. This view, which can also be related to Goldstein and Chance's (1980) proposal of a face schema guiding perception, emphasizes the role of internal representations in holistic face perception. That is, the face cannot be perceived holistically before being associated with an internal representation or a typical face schema: it is through the integration with an internal representation that the individual stimulus is fully perceived. This schematic representation is derived from experience and has a certain degree of flexibility, being potentially applicable to stimuli deviating from the canonical upright orientation, but not beyond a certain extent (60° on average). In current theoretical models of face representation, this schematic representation may correspond to an average prototypical face derived from experience (e.g., Leopold, O'Toole, Vetter, & Blanz, 2001; Rhodes et al., 1987; Valentine, 1991), which would be used to individualize the stimulus rapidly. 
Admittedly, this interpretation of our data will certainly need complementary and more direct evidence in future research. However, a framework according to which holistic perception of faces is highly dependent on visual experience and based on an average internal face representation is consistent with a number of observations in the face processing literature. For instance, holistic processing, as evidenced by the face composite effect, can be applied to faces presenting a different morphology, such as other-race faces, but to a lesser extent than for same-race faces (Michel et al., 2006). This indicates that holistic processing is both grossly defined, being able to handle faces with a different structure than the ones we are usually exposed to, and finely tuned by our visual experience. Moreover, there is neural evidence that faces are represented holistically in high-level visual areas, both in the monkey infero-temporal cortex (see Logothetis & Sheinberg, 1996; Tanaka, 1996) and in the human fusiform gyrus (Schiltz & Rossion, 2006). Most interestingly, there is strong evidence that acquired prosopagnosia, the inability to recognize and discriminate individual faces following brain damage (Bodamer, 1947), results from an inability to process faces holistically (e.g., Barton, Press, Keenan, & O'Connor, 2002; Sergent & Signoret, 1992). While some authors have distinguished between apperceptive and associative forms of prosopagnosia (e.g., De Renzi, 1986), this view has been challenged by others (Delvenne, Seron, Coyette, & Rossion, 2004; Farah, 1990), who argue that perceptual deficits were always present in so-called cases of associative (prosop)agnosia. An interpretation of these observations is that the loss of internal face representations in prosopagnosia would make the full perception of individual faces impossible, so that these "associative" prosopagnosic patients, despite normal vision, would no longer be able to perceive faces as integrated individual wholes. 
To summarize, we have shown that holistic face processing is fairly well preserved up to 60° of rotation of the face stimulus. This observation strongly supports the qualitative view of the FIE and suggests a theoretical account of this effect in which perceptual, bottom-up normalization mechanisms independent of experience are not involved. Rather, individual faces would generally be handled through the application of an internal holistic representation, centered on the upright orientation, that helps to glue together the features of the incoming visual face, as observed in the face composite illusion. 
Acknowledgments
Bruno Rossion is supported by the Belgian National Foundation for Scientific Research (FNRS). We thank Adélaïde de Heering and Corentin Jacques for their advice in designing the experiment and the data analysis, as well as Corentin Jacques and two anonymous reviewers for their helpful comments on a previous version of this manuscript. 
Commercial relationships: none. 
Corresponding author: Bruno Rossion. 
Email: bruno.rossion@uclouvain.be. 
Address: Unite Cognition et Developpement, Universite catholique de Louvain, 10, Place du Cardinal mercier, 1348 Louvain-la-Neuve, Belgium. 
Footnotes
1. Collishaw and Hole (2002) reported results apparently discordant with our findings, showing a linear relationship between the recognition of famous blurred faces and the angle of rotation of the face. However, a careful look at the data and analyses reported in that study indicates that face recognition was equal for angles from 0° to 45°, and perhaps up to 67.5° (not tested), in line with our observations, and then dropped significantly.
2. Note that this does not prevent this template-matching mechanism from being applied to smaller regions of the face if the visual input is degraded or limited to smaller parts of the face (e.g., the eyes). Holistic processing is defined here as an integrative process that does not require the full visual stimulus to be present.
References
Bartlett, J. C. Searcy, J. (1993). Inversion and configuration of faces. Cognitive Psychology, 25, 281–316. [PubMed] [CrossRef] [PubMed]
Bartlett, J. C. Searcy, J. H. Abdi, H. (2003). Perception of faces, objects and scenes: Analytic and holistic processes. (pp. 21–52). Oxford: Oxford University Press.
Barton, J. J. Keenan, J. P. Bass, T. (2001). Discrimination of spatial relations and features in faces: Effects of inversion and viewing duration. British Journal of Psychology, 92, 527–549. [PubMed] [CrossRef]
Barton, J. J. Press, D. Z. Keenan, J. P. O'Connor, M. (2002). Lesions of the fusiform face area impair perception of facial configuration in prosopagnosia. Neurology, 58, 71–78. [PubMed] [CrossRef] [PubMed]
Bodamer, J. B. (1947). Die prosop-agnosie (Die Agnosie des Physiognomieerkennens. Archiv für Psychiatrie und Nervenkrankheiten, 179, 6–54. [CrossRef]
Bruyer, R. Galvez, C. Prairial, C. (1993). Effect of disorientation on visual analysis, familiarity decision and semantic decision on faces. British Journal of Psychology, 84, 433–441. [PubMed] [CrossRef] [PubMed]
Carey, S. Diamond, R. (1977). From piecemeal to configurational representation of faces. Science, 195, 312–314. [PubMed] [CrossRef] [PubMed]
Collishaw, S. M. Hole, G. J. (2000). Featural and configurational processes in the recognition of faces of different familiarity. Perception, 29, 893–909. [PubMed] [CrossRef] [PubMed]
Collishaw, S. M. Hole, G. J. (2002). Is there a linear or a nonlinear relationship between rotation and configural processing of faces? Perception, 31, 287–296. [PubMed] [CrossRef] [PubMed]
de Heering, A. Rossion, B. Turati, C. Simion, F. (2008). Holistic face processing can be independent of gaze behavior: Evidence from the face composite effect. Journal of Neuropsychology, 2, 183–195. [CrossRef] [PubMed]
Delvenne, J. F. Seron, X. Coyette, F. Rossion, B. (2004). Evidence for perceptual deficits in associative visual (prosopagnosia: A single-case study. Neuropsychologia, 42, 597–612. [PubMed] [CrossRef] [PubMed]
De Renzi, E. Ellis,, H. D. Jeeves,, M. A. Newcombe,, F. G. Young, A. (1986). Current issues on prosopagnosia. Aspects of face processing. (pp. 243–252). Dordrecht: Martinus Nijhoff.
Diamond, R. Carey, S. (1986). Why face are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117. [PubMed] [CrossRef] [PubMed]
Edmonds, A. J. Lewis, M. B. (2007). The effect of rotation on configural encoding in a face-matching task. Perception, 36, 446–460. [PubMed] [CrossRef] [PubMed]
Farah, M. J. Tanaka, J. W. Drain, H. M. (1995). What causes the face inversion effect? Journal of Experimental Psychology: Human Perception and Performance, 21, 628–634. [PubMed] [CrossRef] [PubMed]
Freire, A. Lee, K. Symons, L. A. (2000). The face-inversion effect as a deficit in the encoding of configural information: Direct evidence. Perception, 29, 159–170. [PubMed] [CrossRef] [PubMed]
Galton, F. (1883). Inquiries into human faculty and its development. London: Macmillan.
Goffaux, V. Rossion, B. (2006). Faces are “spatial”-holistic face perception is supported by low spatial frequencies. Journal of Experimental Psychology: Human Perception and Performance, 32, 1023–1039. [PubMed] [CrossRef] [PubMed]
Goffaux, V. Rossion, B. (2007). Face inversion disproportionately impairs the perception of vertical but not horizontal relations between features. Journal of Experimental Psychology: Human Perception and Performance, 33, 995–1002. [PubMed] [CrossRef] [PubMed]
Goldstein, A. G. Chance, J. E. (1980). Memory for faces and schema theory. Journal of Psychology, 105, 47–59. [CrossRef]
Hillger, L. A. Koenig, O. (1991). Separable Mechanisms in face processing: Evidence from hemispheric specialization. Journal of Cognitive Neuroscience, 3, 42–58. [CrossRef] [PubMed]
Hole, G. J. (1994). Configural factors in the perception of unfamiliar faces. Perception, 23, 65–74. [CrossRef] [PubMed]
Hole, G. J. George, P. A. Dunsmore, V. (1999). Evidence for holistic processing of faces viewed as photographic negatives. Perception, 28, 341–359. [PubMed] [CrossRef] [PubMed]
Jacques, C. Rossion, B. (2007). Early electrophysiological responses to multiple face orientations correlate with individual discrimination performance in humans. Neuroimage, 36, 863–876. [PubMed] [CrossRef] [PubMed]
Jolicoeur, P. (1990). Identification of disoriented objects: A dual-systems approach. Mind and Language, 5, 387–410. [CrossRef]
Jolicoeur, P. Regehr, S. Smith, L. B. J. Smith, G. N. (1985). Mental rotation of representations of two-dimensional and three-dimensional objects. Canadian Journal of Psychology, 39, 100–129. [CrossRef]
Lawson, R. (1999). Achieving visual object constancy across plane rotation and depth rotation. Acta Psychologica, 102, 221–245. [PubMed] [CrossRef] [PubMed]
Leder, H., & Bruce, V. (2000). When inverted faces are recognized: The role of configural information in face recognition. Quarterly Journal of Experimental Psychology A, 53, 513–536.
Leder, H., Candrian, G., Huber, O., & Bruce, V. (2001). Configural features in the context of upright and inverted faces. Perception, 30, 73–83.
Le Grand, R., Mondloch, C. J., Maurer, D., & Brent, H. P. (2001). Neuroperception: Early visual experience and face processing. Nature, 410, 890.
Le Grand, R., Mondloch, C. J., Maurer, D., & Brent, H. P. (2004). Impairment in holistic face processing following early visual deprivation. Psychological Science, 15, 762–768.
Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94.
Lewis, M. B. (2001). The Lady's not for turning: Rotation of the Thatcher illusion. Perception, 30, 769–774.
Logothetis, N. K., & Sheinberg, D. L. (1996). Visual object recognition. Annual Review of Neuroscience, 19, 577–621.
Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.
McKone, E. (2004). Isolating the special component of face recognition: Peripheral identification and a Mooney face. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 181–197.
McKone, E., Martini, P., & Nakayama, K. (2001). Categorical perception of face identity in noise isolates configural processing. Journal of Experimental Psychology: Human Perception and Performance, 27, 573–599.
Michel, C., Rossion, B., Han, J., Chung, C. S., & Caldara, R. (2006). Holistic processing is finely tuned for faces of one's own race. Psychological Science, 17, 608–615.
Murray, J. E., Yong, E., & Rhodes, G. (2000). Revisiting the perception of upside-down faces. Psychological Science, 11, 492–496.
Nachson, I., & Shechory, M. (2002). Effect of inversion on the recognition of external and internal facial features. Acta Psychologica, 109, 227–238.
Perrett, D. I., Oram, M. W., & Ashbridge, E. (1998). Evidence accumulation in cell populations responsive to faces: An account of generalisation of recognition without mental transformations. Cognition, 67, 111–145.
Rakover, S. S., & Teucher, B. (1997). Facial inversion effects: Parts and whole relationship. Perception & Psychophysics, 59, 752–761.
Rhodes, G. (1988). Looking at faces: First-order and second-order features as determinants of facial appearance. Perception, 17, 43–63.
Rhodes, G., Brake, S., & Atkinson, A. P. (1993). What's lost in inverted faces? Cognition, 47, 25–57.
Rhodes, G., Brennan, S., & Carey, S. (1987). Identification and ratings of caricatures: Implications for mental representations of faces. Cognitive Psychology, 19, 473–497.
Riesenhuber, M., Jarudi, I., Gilad, S., & Sinha, P. (2004). Face processing in humans is compatible with a simple shape-based model of vision. Proceedings of the Royal Society B: Biological Sciences, 271, S448–S450.
Robbins, R., & McKone, E. (2007). No face-like processing for objects-of-expertise in three behavioral tasks. Cognition, 103, 34–79.
Rock, I. (1973). Orientation and form. New York: Academic Press.
Rossion, B., & Gauthier, I. (2002). How does the brain process upright and inverted faces? Behavioral and Cognitive Neuroscience Reviews, 1, 63–75.
Schiltz, C., & Rossion, B. (2006). Faces are represented holistically in the human occipito-temporal cortex. Neuroimage, 32, 1385–1394.
Schwaninger, A., & Mast, F. W. (2005). The face-inversion effect can be explained by the capacity limitations of an orientation normalization mechanism. Japanese Psychological Research, 47, 216–222.
Sekuler, A. B., Gaspar, C. M., Gold, J. M., & Bennett, P. J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14, 391–396.
Sergent, J. (1984). An investigation into component and configural processes underlying facial perception. British Journal of Psychology, 75, 221–242.
Sergent, J., & Signoret, J. L. (1992). Varieties of functional deficits in prosopagnosia. Cerebral Cortex, 2, 375–388.
Sjoberg, W., & Windes, J. (1992). Recognition times for rotated normal and “Thatcher” faces. Perceptual and Motor Skills, 75, 1176–1178.
Stürzel, F., & Spillmann, L. (2000). Thatcher illusion: Dependence on angle of rotation. Perception, 29, 937–942.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology A, 46, 225–245.
Tanaka, J. W., & Sengco, J. (1997). Features and their configuration in face recognition. Memory & Cognition, 25, 583–592.
Tanaka, K. (1996). Inferotemporal cortex and object vision. Annual Review of Neuroscience, 19, 109–139.
Tarr, M. J. (1995). Rotating objects to recognize them: A case study of the role of viewpoint dependency in the recognition of three-dimensional objects. Psychonomic Bulletin & Review, 2, 55–82.
Tarr, M. J., & Pinker, S. (1989). Mental rotation and orientation-dependence in shape recognition. Cognitive Psychology, 21, 233–282.
Thompson, P. (1980). Margaret Thatcher: A new illusion. Perception, 9, 483–484.
Valentine, T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79, 471–491.
Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. Quarterly Journal of Experimental Psychology A, 43, 161–204.
Valentine, T., & Bruce, V. (1988). Mental rotation of faces. Memory & Cognition, 16, 556–566.
Williams, C. C., & Henderson, J. M. (2007). The face inversion effect is not a consequence of aberrant eye movements. Memory & Cognition, 35, 1977–1985.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face perception. Perception, 16, 747–759.
Yovel, G., & Kanwisher, N. (2004). Face perception: Domain specific, not process specific. Neuron, 44, 889–898.
Figure 1. The face composite illusion. All top halves (above the white line) are strictly identical, but when they are aligned with distinct bottom parts (top row), they appear slightly different. This illusion reflects the perception of the face stimulus as an integrated whole. When the two halves of the face are misaligned (bottom row), the illusion vanishes.
Figure 2. (A) Experimental design. Subjects had to concentrate on the top part of two faces presented sequentially and decide as accurately and as quickly as possible whether the top parts were identical or not. Top and bottom parts were either aligned or misaligned for both faces of a pair. (B) Illustration of the illusion at different angles of rotation. Only angles ranging from 0° to 180° were used in the experiment (see Procedure section).
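For readers who wish to lay out the same factorial design, the sketch below (Python, not taken from the authors' materials) enumerates the crossing described in Figure 2A: 7 rotation angles × alignment (aligned/misaligned) × expected response (same/different). The number of trials per cell and the random seed are placeholders for illustration, not values reported here.

```python
# Illustrative sketch (not the authors' code): enumerate the factorial design of
# Figure 2A -- 7 picture-plane rotations x alignment x expected response.
# TRIALS_PER_CELL is an arbitrary placeholder, not a value reported in the paper.
import itertools
import random

ANGLES = [0, 30, 60, 90, 120, 150, 180]   # picture-plane rotation (degrees)
ALIGNMENTS = ["aligned", "misaligned"]    # bottom half aligned with, or offset from, the top half
RESPONSES = ["same", "different"]         # whether the two top halves are identical
TRIALS_PER_CELL = 10                      # assumption for illustration only


def build_trial_list(seed=0):
    """Return a shuffled list of trial descriptions covering the full design."""
    trials = [
        {"angle": angle, "alignment": alignment, "expected": expected}
        for angle, alignment, expected in itertools.product(ANGLES, ALIGNMENTS, RESPONSES)
        for _ in range(TRIALS_PER_CELL)
    ]
    random.Random(seed).shuffle(trials)
    return trials


if __name__ == "__main__":
    trials = build_trial_list()
    print(len(trials), "trials; first trial:", trials[0])
```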
Figure 3. Results for all conditions of the experiment. (A) Accuracy rates; (B) correct RTs. For both measures, alignment clearly does not affect the trials for which a "different" response is expected: the aligned and misaligned curves superimpose almost perfectly across angles of rotation, with a general decrease in accuracy and increase in RTs as rotation increases. The interesting observations concern the "same" trials, for which the composite effect (the difference between aligned and misaligned conditions) is maximal at orientations of 0° to 60°, then drops sharply at 90° and remains stable up to 180°. Standard error bars are omitted from the graph for clarity; SE values are provided in Tables 1a and 1b.
Figure 4. Amount of composite effect computed for accuracy rates and correct RTs. The face composite effect is non-linearly related to the angle of rotation of the face stimulus, with a massive drop in holistic processing between 60° and 90°.
Table 1a. Accuracy rates (proportion correct, ±SE) for the different conditions of the experiment.

Angle                                     0°           30°          60°          90°          120°         150°         180°
Same trials, aligned                      0.67 ± 0.03  0.66 ± 0.03  0.69 ± 0.03  0.73 ± 0.02  0.77 ± 0.02  0.79 ± 0.02  0.76 ± 0.03
Same trials, misaligned                   0.89 ± 0.03  0.91 ± 0.03  0.91 ± 0.03  0.87 ± 0.03  0.87 ± 0.03  0.90 ± 0.03  0.92 ± 0.03
Composite effect (misaligned − aligned)   0.21         0.25         0.21         0.14         0.10         0.12         0.16
Different trials, aligned                 0.94 ± 0.02  0.95 ± 0.02  0.89 ± 0.04  0.88 ± 0.04  0.82 ± 0.04  0.83 ± 0.04  0.75 ± 0.04
Different trials, misaligned              0.89 ± 0.04  0.90 ± 0.03  0.88 ± 0.04  0.86 ± 0.04  0.83 ± 0.04  0.81 ± 0.05  0.74 ± 0.05
Table 1b. Correct response times (ms, ±SE) for the different conditions of the experiment.

Angle                                     0°         30°        60°        90°        120°       150°       180°
Same trials, aligned                      681 ± 37   676 ± 38   642 ± 39   611 ± 37   589 ± 33   573 ± 30   569 ± 28
Same trials, misaligned                   558 ± 27   565 ± 30   555 ± 25   583 ± 36   546 ± 26   526 ± 25   540 ± 23
Composite effect (aligned − misaligned)   123        110        87         28         42         47         29
Different trials, aligned                 592 ± 27   634 ± 32   622 ± 25   619 ± 27   634 ± 23   628 ± 23   650 ± 32
Different trials, misaligned              588 ± 26   626 ± 27   639 ± 27   623 ± 25   636 ± 23   623 ± 24   652 ± 35
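The composite-effect rows of Tables 1a and 1b are simple differences between condition means on "same" trials. A minimal sketch recomputing them from the tabulated values is given below (Python, illustrative only); discrepancies of about 0.01 or 1 ms are expected where the published rows were derived from unrounded data.

```python
# Minimal sketch: recompute the composite-effect rows of Tables 1a and 1b from
# the tabulated group means for "same" trials (values copied from the tables above).
ANGLES = [0, 30, 60, 90, 120, 150, 180]

# Accuracy (proportion correct), "same" trials
acc_aligned    = [0.67, 0.66, 0.69, 0.73, 0.77, 0.79, 0.76]
acc_misaligned = [0.89, 0.91, 0.91, 0.87, 0.87, 0.90, 0.92]

# Correct RTs (ms), "same" trials
rt_aligned     = [681, 676, 642, 611, 589, 573, 569]
rt_misaligned  = [558, 565, 555, 583, 546, 526, 540]

# Composite effect: misaligned minus aligned for accuracy (misalignment helps),
# aligned minus misaligned for RTs (alignment slows "same" responses).
acc_effect = [round(m - a, 2) for a, m in zip(acc_aligned, acc_misaligned)]
rt_effect  = [a - m for a, m in zip(rt_aligned, rt_misaligned)]

for angle, ae, rte in zip(ANGLES, acc_effect, rt_effect):
    print(f"{angle:>3}°  accuracy effect: {ae:+.2f}   RT effect: {rte:+d} ms")
```

Printing these differences side by side makes the non-linearity described in Figure 4 directly visible: both effect measures are large from 0° through 60° and drop abruptly at 90°.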