Research Article  |   November 2010
Solving the upside-down puzzle: Why do upright and inverted face aftereffects look alike?
Tirta Susilo, Elinor McKone, Mark Edwards
Journal of Vision November 2010, Vol.10, 1. doi:https://doi.org/10.1167/10.13.1
Abstract

Face aftereffects for upright faces have been widely assumed to derive from face space and to provide useful information about its properties. Yet remarkably similar aftereffects have consistently been reported for inverted faces, a problematic finding because other paradigms argue that inverted faces are processed by different mechanisms from upright faces. Here, we identify a qualitative difference between upright and inverted face aftereffects. Using eye-height aftereffects, we tested for opponent versus multichannel coding of face dimensions by manipulating distance of the adaptor from the average, and face-specific versus shape-generic contributions via transfer of aftereffects between faces and simple T-shapes. Our results argue that (i) inverted face aftereffects derive entirely from shape-generic mechanisms, (ii) upright face aftereffects derive partly from shape-generic mechanisms but also have a substantial face space component, and (iii) both face-specific and shape-generic multidimensional spaces use opponent coding.

Introduction
Adaptation aftereffects for distortions of face shape (e.g., Leopold, O'Toole, Vetter, & Blanz, 2001; Webster & MacLin, 1999) are usually explained in terms of a shift of the perceived average face within face space, a multidimensional space that supports the recognition and discrimination of individual faces (Valentine, 1991). Correspondingly, researchers have used face aftereffects to study various theoretical properties of face space (e.g., Rhodes & Jeffery, 2006; Robbins, McKone, & Edwards, 2007; Susilo, McKone, & Edwards, 2010) and to address questions of broad interest such as whether face space structure in typical adults is matched by that in children, in Autism Spectrum Disorder, and in developmental prosopagnosia (Hills, Holland, & Lewis, 2010; Jeffery et al., 2010; Nishimura, Doyle, Humphreys, & Behrmann, 2010; Pellicano, Jeffery, Burr, & Rhodes, 2007). All these studies share an implicit assumption that face aftereffects at least partly tap high-level representations that are specific to face structure. This is because, by definition, face space is face-specific: face space dimensions are stated to be attributes that distinguish individual faces (Valentine, 1991), not attributes that distinguish faces from chairs, or attributes that distinguish individual chairs as well as individual faces. Thus, the idea that face aftereffects derive from, and provide useful information about, face space implies that face aftereffects should be in some way face-specific. However, is this true? 
A classic comparison stimulus used to test for face specificity is inverted faces. Many other methodologies demonstrate that, despite the use of physically identical faces in both orientations, inverted faces are processed in a qualitatively different way from upright faces: these include behavioral paradigms that assess holistic/configural processing, double-dissociation studies in neuropsychology, and functional imaging dissociation of regions most responsive to upright and inverted faces (e.g., Aguirre, Singh, & D'Esposito, 1999; Behrmann, Avidan, Marotta, & Kimchi, 2005; Duchaine, Yovel, Butterworth, & Nakayama, 2006; Epstein, Higgins, Parker, Aguirre, & Cooperman, 2005; Haxby et al., 1999; McKone, Martini, & Nakayama, 2001; Moscovitch, Winocur, & Behrmann, 1997; Schiltz & Rossion, 2006; Tanaka & Farah, 1993; Young, Hellawell, & Hay, 1987; Yovel & Kanwisher, 2005). These findings predict that aftereffects for inverted faces should also be in some way qualitatively different from those for upright faces. 
Surprisingly, in studies to date, upright and inverted face aftereffects have been remarkably similar. All manipulations known to produce aftereffects for upright faces have, where tested, also been shown to produce aftereffects for inverted faces; these include global expansion–contraction (Rhodes et al., 2004), vertical/horizontal expansion–contraction (Watson & Clifford, 2003; Webster & MacLin, 1999; Zhao & Chubb, 2001), gender (Rhodes et al., 2004; Watson & Clifford, 2006), eye height (Robbins et al., 2007), and individual identity (Leopold et al., 2001; Rhodes, Evangelista, & Jeffery, 2009). Further, the size of inverted aftereffects is substantial, often as large as that of upright (Robbins et al., 2007; Watson & Clifford, 2003; Webster & MacLin, 1999), and at times even larger (Rhodes et al., 2004; Watson & Clifford, 2006; although see Rhodes et al., 2009). The only result that might be considered, at first glance, to be evidence of a qualitative difference between upright and inverted face aftereffects is the finding that the aftereffects derive from partially separable sets of neurons (i.e., transfer of aftereffects between upright and inverted is less than 100%, and it is possible to induce simultaneous opposite aftereffects to upright and inverted faces; Guo, Oruc, & Barton, 2009; Robbins et al., 2007; Watson & Clifford, 2003, 2006; Webster & MacLin, 1999; Rhodes et al., 2004). However, this result does not demonstrate a qualitative difference because even upright faces are not all coded by one common set of neurons (e.g., see the “Jennifer Aniston neuron,” Quiroga, Reddy, Kreiman, Koch, & Fried, 2005; and simultaneous opposite aftereffects for gender, race, and individual identity in upright faces, Jaquet, Rhodes, & Hayward, 2007; Little, DeBruine, & Jones, 2005; Robbins & Heck, 2009; Yamashita, Hardy, DeValois, & Webster, 2005). 
The present study aims to solve the puzzle of inverted face aftereffects. We seek to address the interrelated questions of (i) whether there is any qualitative difference between upright and inverted face aftereffects, (ii) why inverted face aftereffects have looked so similar to upright face aftereffects in previous studies, and (iii) whether the implicit assumption that upright face aftereffects tap face-specific face space is valid. We approach these questions by testing two ideas that could potentially provide evidence of a qualitative difference between upright and inverted face aftereffects. 
First, we test whether upright and inverted aftereffects might rely on different strategies for coding variation along dimensions within multidimensional space. We contrast opponent and multichannel coding models. For upright faces, it is well established that shape aftereffects reflect opponent coding (Rhodes & Jeffery, 2006; Robbins et al., 2007; Susilo et al., 2010). Here we test the coding strategy for shape information in inverted faces, noting that it is a priori possible that this could be multichannel rather than opponent, given that at least some types of complex object information use multichannel coding (eye gaze direction: Calder, Jenkins, Cassel, & Clifford, 2008; Jenkins, Beaver, & Calder, 2006; 3D viewpoint of faces, bodies, and other stimuli: Fang & He, 2005; Lawson, Clifford, & Calder, 2009). 
Second, we examine whether upright and inverted aftereffects might be generated by different stages of the visual system. It is known that low-level vision is not the sole origin of either upright or inverted face aftereffects, since they survive retinotopic changes of size, position, orientation, and individual identity of the adaptor and test faces (Anderson & Wilson, 2005; Leopold et al., 2001; Rhodes et al., 2004; Watson & Clifford, 2003; Yamashita et al., 2005; Zhao & Chubb, 2001). However, there is an open question regarding the extent to which, within mid- and/or high-level vision, upright face aftereffects originate from representations specific to faces, and the extent to which inverted face aftereffects arise from the same representations. Several authors have noted that a single system supporting both upright and inverted face aftereffects can explain current adaptation findings—including findings of asymmetric transfer of aftereffects between orientations (i.e., upright-to-inverted transfer is larger than inverted-to-upright; Guo et al., 2009; Robbins et al., 2007; Watson & Clifford, 2003, 2006; Webster & MacLin, 1999)—by assuming either that face space neurons are orientation-selective for upright faces or that neurons responsive to inverted faces are more broadly tuned than those responsive to upright faces (Guo et al., 2009; Watson & Clifford, 2003, 2006). However, it is also possible that upright and inverted aftereffects arise from different systems. Watson and Clifford (2003) suggested that upright face adaptors might tap both a holistic face-specific system and a part-based object-generic system, while inverted face adaptors tap only the latter. A related option is that inverted face aftereffects might arise from a generic "shape space" rather than from face space, a possibility suggested by findings that monkeys have both mid- and high-level areas coding basic shape properties (e.g., aspect ratio and convexity–concavity; Kayaert, Biederman, Op de Beeck, & Vogels, 2005; Pasupathy & Connor, 2001), by findings that humans show aftereffects for distortions of these properties (Regan & Hamstra, 1992; Suzuki, 2005), and by the general theoretical possibility that face aftereffects (both upright and inverted) could arise solely or partially from mid-level vision (Rhodes & Leopold, in press). Here we test directly for origins within different parts of the visual system by examining transfer of aftereffects between faces and non-face shapes, separately for upright and inverted faces. 
Key to our study design is the type of facial manipulation we selected: eye height (see Figure 1). Eye height was selected partly because previous studies confirm that coding of this facial attribute in upright faces is opponent (Robbins et al., 2007; Susilo et al., 2010) and that eye height produces the usual strong face inversion effect (i.e., observers detect eye-height changes more poorly in inverted faces than in upright faces; Goffaux & Rossion, 2007; Sekunova & Barton, 2008; Susilo et al., 2010). However, the primary reason for selecting eye height was to address our second research question regarding transfer. To fully capture the potential adaptation transfer, we needed a manipulation type that could be applied in physically identical form to both face and non-face stimuli. Unlike many other types of facial distortion, eye height has a single simple shape manipulation to which transfer can be tested, namely the length of the vertical bar in a T-shape. The only alteration to an eye-height-manipulated face is essentially a change in the proportions of the internal "T" structure of the eyes–nose–mouth region. Since this alteration can be neatly captured in a non-face stimulus by moving the horizontal bar of a T up and down, we can reasonably make the following predictions. If a face aftereffect has a purely shape-generic origin, then we should observe full transfer of adaptation to a T-shape. A prediction of this nature cannot be made for more complex facial manipulations (e.g., race, identity), because no one particular type of manipulation to a basic shape test stimulus can fully capture the shape changes present in the face. This means that, for complex manipulations, even a purely shape-generic origin of inverted face aftereffects would predict only partial transfer to any one particular type of simple-shape test stimulus, thus failing to discriminate between face-specific and shape-generic origins.
Figure 1
Stimulus examples. (A) The four test individuals (left) and the four adaptor individuals (right). (B) Overlaid faces and Ts at normal (+0 pixel) and adapted (+50 pixels) positions. (C) Sample test values for both faces and Ts.
 
Our three experiments proceed as follows. In Experiment 1, we use face aftereffects to test opponent and multichannel models of upright and inverted face aftereffects. In Experiment 2, we test aftereffect transfer between faces and T-shapes, to examine whether upright and inverted aftereffects originate in different parts of the visual system. In Experiment 3, we integrate the results of the first two experiments by testing whether T aftereffects derive from opponent or multichannel coding. 
Experiment 1: Comparing opponent and multichannel models for upright and inverted aftereffects
Experiment 1 tests whether inverted face aftereffects derive from opponent or multichannel coding (see Figure 2). Both opponent and multichannel models can explain the existence of adaptation aftereffects. Under most circumstances (the exception being where the adaptor is the average face in the opponent model), adaptation will reduce the strength of one pool more than the other/s, leading to shifts in the total population response and thus in the face perceived as most normal.
Figure 2
Coding models for face aftereffects. (A) In an opponent model, each value on a trajectory through face space is coded by the relative activation of two monotonically tuned neural populations that show maximum response to opposite ends of the dimension. After adaptation to an eyes-up adaptor, the stronger reduction of the high-eyes pool than the low-eyes one will shift the crossover point to the right and also cause the initial average eye height to be perceived as lower than before. (B) In a multichannel model, each eye-height value is coded by the relative output of bell-shaped tuned neural populations representing that particular value. Adapting to an eye height X will affect only populations that code X, in proportion to their initial response rate. If X is an eyes-up adaptor that is sufficiently close to the average, it will drive some of the populations that code the average eye height. As a result, the initial eye height will be perceived as lower than before.
 
For upright faces, the coding strategy is opponent. This has been demonstrated by direct measurement of the shape of tuning functions in monkey face-selective neurons (Freiwald, Tsao, & Livingstone, 2009; Leopold, Bondar, & Giese, 2006), by the effects of opposite versus non-opposite adaptors relative to the average face (Anderson & Wilson, 2005; Leopold et al., 2001; Rhodes & Jeffery, 2006), by testing the prediction that adapting to the average face does not shift perception of non-average faces (Leopold et al., 2001; Webster & MacLin, 1999), and, finally, by the technique we employ in the present study, namely comparing the size of aftereffects across multiple adaptor positions. 
An opponent model predicts that an adaptor far from the average face will produce larger aftereffects than a near adaptor (Figure 3A). This is because the far adaptor will drive one of the pools much more strongly than the other, thus producing response reduction that is strongly asymmetric, leading to a bigger shift of the crossover point than will a near adaptor. The opponent model further predicts that the trend of increasing aftereffects with increasing adaptor position will occur across the full range of possible eye heights. Thus, it is important to note that: (a) our adaptors were positioned to cover this full range, starting from a close-to-average value of “+5 pixels” and extending up to an extreme value of “+50 pixels” beyond which the eyes start to cross the hairline and so break the basic face configuration; and (b) for upright faces, our previous studies have confirmed that, using exactly the same eye-height manipulation as we use here, the increasing trend does indeed continue across the full range (including testing 7 different positions between +5 and +50 pixels in Susilo et al., 2010; also see Robbins et al., 2007).
Figure 3
The predictions of opponent and multichannel models in Experiments 1 and 3. (A) In an opponent model, the size of the aftereffect increases as the adaptor moves away from the average. (B) In a multichannel model, depending on the amount of overlap between channels, the distance between the peak channel sensitivities, and the location of our three adaptor values relative to the channel peaks, the size of the aftereffect either decreases (middle panel) or increases then decreases (right panel).
 
The predictions of the multichannel model are more complex (Figure 3B). In this model, shifts in perception of the average face following adaptation occur to the extent that the adaptor activates the same channel/s responsive to the average face. Depending on the breadth of tuning within each channel, the spacing of the peak sensitivities of the channels, and the positioning of our three adaptor values (+5, +20, and +50 pixels) relative to these peak sensitivities, the specific prediction could be either a consistent decrease in aftereffect size across our +5, +20, +50 set of adaptors, or possibly a peaked pattern with +50 still producing at most a weak aftereffect but +5 also producing a weaker aftereffect than +20 (cf. the similar decline for adaptors positioned very close to the test value in the tilt aftereffect; see, for example, Clifford, Wenderoth, & Spehar, 2000). Importantly, a multichannel model could predict neither a large aftereffect for our extreme adaptor value of +50 nor aftereffects increasing across the full range of possible eye-height values, except under the nonsensical assumption that all channels beyond the first had peak sensitivities to eye heights that fall outside the head. 
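To make the contrasting predictions in Figure 3 concrete, the following minimal Python simulation implements both coding schemes. All tuning parameters (the pool slopes, the channel centers and widths, and the proportional gain-reduction rule) are illustrative assumptions chosen for this sketch, not values estimated by the authors; only the qualitative patterns are of interest.

```python
# Minimal simulation contrasting the opponent and multichannel predictions of
# Figure 3. Tuning parameters are illustrative assumptions, not fitted values.
import numpy as np

x = np.linspace(-60, 60, 2401)      # eye-height axis (pixels)
adaptors = [5, 20, 50]              # adaptor positions used in the study
k = 0.4                             # assumed fractional gain loss at full drive

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Opponent model: two monotonic pools with opposite preferences ----------
def opponent_pse_shift(adaptor, tau=25.0):
    high = sigmoid(x / tau)                       # pool preferring high eyes
    low = sigmoid(-x / tau)                       # pool preferring low eyes
    g_high = 1 - k * sigmoid(adaptor / tau)       # adapt each pool in proportion
    g_low = 1 - k * sigmoid(-adaptor / tau)       # to its response to the adaptor
    # perceived-normal value = crossover of the (adapted) pool responses;
    # the baseline crossover is 0 by symmetry, so this value is the PSE shift
    return x[np.argmin(np.abs(g_high * high - g_low * low))]

# --- Multichannel model: bell-shaped channels, population-vector decoding ---
centers = np.array([-40.0, -20.0, 0.0, 20.0, 40.0])   # assumed channel peaks

def channel_responses(value, sigma=12.0):
    return np.exp(-(value - centers) ** 2 / (2 * sigma ** 2))

def multichannel_pse_shift(adaptor):
    gains = 1 - k * channel_responses(adaptor)    # adapt in proportion to drive
    decoded = np.array([
        np.sum(centers * channel_responses(v) * gains)
        / np.sum(channel_responses(v) * gains) for v in x
    ])
    # PSE = physical value that is decoded as the baseline average (i.e., 0)
    return x[np.argmin(np.abs(decoded))]

for a in adaptors:
    print(f"adaptor +{a:>2}: opponent shift = {opponent_pse_shift(a):5.2f} px, "
          f"multichannel shift = {multichannel_pse_shift(a):5.2f} px")
# Qualitative pattern: the opponent shift keeps growing with adaptor distance,
# whereas the multichannel shift stays small or peaks and then collapses at +50.
```

With these particular parameters the multichannel pattern is peaked rather than monotonically decreasing; other channel spacings give a monotonic decrease, which is why both patterns are listed above as possible multichannel predictions.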
Given that our previous studies have demonstrated the opponent coding pattern (Figure 3A) across our +5, +20, and +50 pixel adaptors, we used these same positions to examine aftereffects for inverted faces. 1 We compared the size of the aftereffects following adaptation to each of the three different adaptors, with the adaptor and the test stimuli always in the same orientation. If aftereffects for inverted faces, like those for upright faces, derive from an opponent coding strategy, then we predict larger aftereffects for more extreme adaptor positions; in contrast, if inverted face aftereffects derive from multichannel coding, then we predict either smaller aftereffects for more extreme adaptor positions or an inverted U-shaped function relating aftereffect size to adaptor distance from the average. 
Methods
Participants
Sixty Caucasian undergraduates (age range: 17–28, 41 females) of the Australian National University received credit for a first-year psychology course or were paid $12 for the 50- to 60-min experiment. All reported normal or corrected-to-normal vision. 
Design
The experiment was a three (adaptor position: +5, +20, +50) by two (orientation: upright, inverted) between-subjects design. Subjects were randomly assigned to one of the six conditions (N = 10 per condition). Adaptor faces differed from test faces in both size and identity to remove potential low-level retinotopic contributions to the aftereffects. 
Stimuli
Stimuli were created from grayscale photographs of 9 Caucasian faces (front view, neutral expression: 7 individuals from the Stirling PICS database (http://pics.psych.stir.ac.uk/) and 2 from the Harvard Face Database (F. Tong and K. Nakayama)). The internal features (in their exact configurations) of eight of the individuals were pasted into a common background head, selected because of his clearly visible hairline. Four of the resulting “people” were used as adaptor faces (also previously used in Susilo et al., 2010), and the other four as test faces (also previously used in McKone, Aitkin, & Edwards, 2005; Robbins et al., 2007). 
Eye heights were shifted up (+) or down (−) using Adobe Photoshop CS2. A “pixel” of shift was defined in reference to a stimulus image sized 370 (vertical) × 310 (horizontal) pixels. One pixel corresponded to 0.29% of full head height (i.e., top of head to chin) and was equivalent to 0.03° at the 40-cm viewing distance. The eyes of the adaptors were shifted up in three positions (+5, +20, and +50 pixels). The eyes of test faces were shifted up and down in 29 deviation levels (0, ±1, ±2, ±3, ±4, ±5, ±6, ±7, ±8, ±9, ±10, ±12, ±14, ±18, and ±24 pixels). 
Face stimuli were presented using PsyScope software (Cohen, MacWhinney, Flatt, & Provost, 1993) on an iMac computer with a 36-cm CRT screen (1024 × 768 resolution). Subjects used a chin rest. For presentation, adaptor faces were resized to 227 × 190 pixels (viewing angle of 7.9° vertical by 5.7° horizontal) and test faces to 298 × 250 pixels (10° vertical by 7.9° horizontal). 
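As a rough consistency check on the quoted pixel-to-degree conversion, the arithmetic below uses only the presentation values for the test faces given above and treats the 10° figure as exact; it is not part of the authors' methods.

```python
# Back-of-envelope check of the stated "0.03 deg per pixel of shift" figure.
source_image_height_px = 370   # image size in which a "pixel" of shift is defined
display_height_px = 298        # test faces were presented at 298 px tall...
display_height_deg = 10.0      # ...subtending roughly 10 deg vertically

deg_per_source_pixel = display_height_deg / source_image_height_px
print(f"one 'pixel' of shift ~ {deg_per_source_pixel:.3f} deg")  # ~0.027, i.e. ~0.03 deg

# 1 px = 0.29% of full head height implies the head spans roughly
# 1 / 0.0029 ~ 345 of the 370 image rows, i.e. most of the image height.
print(f"implied head height ~ {1 / 0.0029:.0f} px")
```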
Procedure
Subjects were instructed to judge eye height based on comparison with their imagined average eye height of real-world faces. Half the subjects responded “too high” via button “z,” and “too low” via keypad “3”; this key assignment was reversed for the other half. There were ten practice trials with the general procedure using a non-relevant manipulation (eyes further apart or closer together). 
In the baseline phase, each trial comprised: test face for 250 ms; the question "Were the eyes too high or too low?" until subjects responded; and 400-ms blank screen before the next trial. In the adapted phase, each trial comprised: adaptor for 4000 ms; blank screen for 400 ms; and the test face with procedure identical to the baseline phase. In each phase (348 trials), each deviation level of each of the four test individuals was presented three times, in a different random order for each subject, divided into three blocks of 116 trials (each containing one presentation per deviation level of the four test individuals). There were short breaks between blocks. Collapsing across the four test individuals, responses at each deviation level for each phase were based on 12 trials per subject. 
Psychometric curve fitting and calculation of aftereffect size
Preliminary data analysis followed the same procedure in all three experiments. For each subject, proportion of “high” responses was plotted against physical deviation level, and the eye height perceived to be most normal before and after adaptation was determined by the point of subjective equality (PSE), i.e., the physical eye height that corresponded to 50% “too high” responses. The PSE was determined from psychometric curves fitted using the logistic function in psignifit version 2.5.6 (http://bootstrap-software.org/psignifit) in MATLAB (Wichmann & Hill, 2001). Aftereffect size for each subject was calculated by subtracting baseline PSE from adapted PSE. The adapted PSE should move toward the adaptor: for example, an eyes-up adaptor should cause physically eyes-up faces to be perceived as more normal than they were before adaptation. Thus, positive PSE shift scores indicate a shift in the direction corresponding to an aftereffect, whereas negative PSE shift scores indicate a change in the wrong direction for an aftereffect. 
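The analysis pipeline just described can be sketched as follows in Python. The authors fitted their curves with psignifit 2.5.6 in MATLAB; the generic SciPy logistic fit and the simulated response data below are illustrative stand-ins only.

```python
# Sketch of the PSE analysis: fit a logistic to proportion-"too high" data,
# take the 50% point as the PSE, and define the aftereffect as the shift in
# PSE from baseline to adapted. Illustrative only (not the psignifit fit).
import numpy as np
from scipy.optimize import curve_fit

# The 29 test deviation levels (pixels) listed in the Stimuli section.
deviations = np.array([-24, -18, -14, -12] + list(range(-10, 11)) + [12, 14, 18, 24],
                      dtype=float)

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

def fit_pse(levels, prop_high):
    params, _ = curve_fit(logistic, levels, prop_high, p0=[0.0, 3.0])
    return params[0]                    # PSE = level giving 50% "too high"

# Fake example data: baseline PSE near 0, adapted PSE shifted toward an
# eyes-up adaptor. These numbers are invented for illustration.
rng = np.random.default_rng(0)
baseline = logistic(deviations, 0.5, 3.0) + rng.normal(0, 0.05, deviations.size)
adapted = logistic(deviations, 4.0, 3.0) + rng.normal(0, 0.05, deviations.size)

aftereffect = fit_pse(deviations, adapted) - fit_pse(deviations, baseline)
print(f"aftereffect = {aftereffect:.2f} px")   # positive = shift toward the adaptor
```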
Results
The mean fit R² was 0.90 (range 0.63–0.98) over the 120 psychometric curves (60 subjects, each with separate curves for baseline and adapted). Figure 4 shows aftereffect results. For inverted faces, aftereffect size increased as a function of adaptor position, supporting the opponent model rather than the multichannel model. One-sample, two-tailed t-tests were conducted to compare each aftereffect to zero. This revealed that aftereffects were not significant following adaptation to +5, t(10) = 0.13, p = 0.89, but were significant for +20, t(10) = 2.31, p < 0.05, and +50, t(10) = 8.52, p < 0.001. This pattern of larger aftereffect size with increasing distance of the adaptor from the average was confirmed in two additional analyses. First, for the means plotted in Figure 4A, trend analysis revealed an increasing linear trend across the +5, +20, and +50 conditions, F(1, 29) = 16.22, MSE = 95.62, p < 0.001. Second, as shown in Figure 4B, there was a positive correlation, r(58) = 0.60, p < 0.001, between aftereffect size and a baseline-adjusted adaptor position, defined as the difference between the physical adaptor position and each subject's individual baseline PSE (e.g., if the adaptor was +20 and the subject had a baseline PSE of +5, then the adjusted adaptor position was +15); we adjusted the baseline individually because there was moderate variability across subjects in baseline PSE.
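The baseline adjustment and correlation just described amount to the following small computation; the per-subject numbers below are invented placeholders, not data from the study.

```python
# Baseline-adjusted adaptor position and its correlation with aftereffect size.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject values (invented for illustration).
adaptor_position = np.array([5, 5, 20, 20, 50, 50], dtype=float)   # physical adaptor (px)
baseline_pse     = np.array([1, -2, 5, 0, 3, -4], dtype=float)     # pre-adaptation PSE (px)
aftereffect      = np.array([0.5, 1.0, 2.0, 3.5, 6.0, 7.5])        # adapted minus baseline PSE

adjusted = adaptor_position - baseline_pse   # e.g. +20 adaptor, baseline PSE +5 -> +15
r, p = pearsonr(adjusted, aftereffect)
print(f"r = {r:.2f}, p = {p:.3f}")
```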
Figure 4
Results of Experiment 1, showing opponent coding (i.e., larger aftereffects for more extreme adaptor positions). (A) Aftereffect size for the three adaptor positions in both orientations, averaged across subjects. Error bars show ±1 SEM. (B) Scatter plot of individual subjects, showing aftereffect size against adjusted adaptor position (difference between physical adaptor value and the individual subject's baseline PSE pre-adaptation) for upright (N = 30) and inverted (N = 30) orientations, with best linear fits.
 
The same analyses were performed for upright faces. One-sample t-tests revealed significant aftereffects for all adaptor positions: +5, t(10) = 2.24, p < 0.05; +20, t(10) = 6.33, p < 0.001; and +50, t(10) = 6.88, p < 0.001. In Figure 4A, trend analysis revealed an increasing linear pattern, F(1, 29) = 24.77, MSE = 124.23, p < 0.001. In Figure 4B, there was a positive correlation between aftereffect size and adjusted adaptor position, r(58) = 0.72, p < 0.001. These results support the opponent model for upright faces and replicate previous findings (Robbins et al., 2007; Susilo et al., 2010).
Figure 5
Results of Experiment 2. Aftereffect size for the four adapt–test conditions (e.g., F–T means the adaptor was a face and the test items Ts) averaged across subjects for upright (left) and inverted (right). Results imply that upright aftereffects contain a large face-specific component (i.e., F–F is greater than F–T, and T–T is greater than T–F) but inverted aftereffects are shape-generic in origin (i.e., F–F is not greater than F–T, and T–T is not greater than T–F). Error bars show ±1 SEM.
 
We also compared the size of upright and inverted aftereffects. A three (+5, +20, +50) by two (upright, inverted) factorial ANOVA found a main effect of orientation, F(1, 59) = 4.62, MSE = 6.118, p < 0.05, showing that upright aftereffects (M = 3.71, SE = 0.54) were larger than inverted aftereffects (M = 2.23, SE = 0.54). No interaction was found, F < 1. We leave the discussion of this particular finding to the General discussion section. 
Discussion
Results of Experiment 1 demonstrate that inverted face aftereffects, like upright face aftereffects, derive from opponent coding. This finding indicates that inverted aftereffects are not qualitatively different from upright aftereffects in terms of coding strategy. However, it does not necessarily follow that upright and inverted face aftereffects are generated within a common multidimensional space. The possibility remains that, while both upright and inverted aftereffects show opponent coding, the particular "space" is different. For example, upright aftereffects could originate in a face space, while inverted aftereffects could come from a generic shape space that uses the component shapes of the image rather than representing shape as a deviation from a whole face. We test this possibility in Experiment 2. 
Experiment 2: Transfer of aftereffects between faces and T-shapes
Experiment 2 examined transfer of aftereffects between faces and T-shapes. Our zero-deviation T was matched in size to the T-shaped central region of the face (see Figure 1B). The Ts were then manipulated in a similar manner to our face stimuli by moving the horizontal bar up and down (see Figure 1C). Previous studies (O'Leary & McMahon, 1991; Regan & Hamstra, 1992) have shown that adaptation to a common manipulation type can transfer across the specific shape to which that manipulation is applied (e.g., adapting to a vertically elongated circle makes a square seem vertically compressed). 
Orientation of the stimuli was always matched (i.e., upright adaptor with upright test, or inverted with inverted). For each orientation, we examined the amount of transfer of adaptation between faces and T-shapes by comparing the size of the aftereffect when the other stimulus class was used as the test against a control condition in which the test class was the same as the adaptor. This resulted in four conditions: adapt face, test T (F–T) and its control, adapt face, test face (F–F); and adapt T, test face (T–F) and its control, adapt T, test T (T–T). We also compared the size of the aftereffect in the two control conditions (F–F and T–T). This was important because one might mistakenly infer weak transfer from stimulus A to stimulus B simply because one stimulus class is less able to produce or display aftereffects in the first place. If we observe no difference between the control conditions, then this would indicate that both stimulus types are capable of displaying comparable aftereffects (although these may of course have different origins). Note that comparable aftereffects were theoretically plausible given that our method matched the physical size of the deviations in the T stimuli to those in the face stimuli (i.e., the "zero" stimuli overlaid closely on each other, and the size of a "pixel" deviation in faces and Ts was identical; see Figures 1B and 1C). 
The predictions were as follows. First, if an aftereffect derives purely from shape-generic components, then we should obtain complete transfer across stimulus classes, i.e., F–F = F–T and T–T = T–F. Second, if a face aftereffect derives purely from a face-specific face space, then adaptation to faces should produce no transfer to T-shapes, i.e., F–F > F–T and F–T = 0 (and T–F = 0). Third, if a face aftereffect derives from a combination of shape-generic and face-specific components, then an intermediate pattern should be observed in which adaptation to faces produces partial transfer to Ts, i.e., F–F > F–T (and potentially T–T > T–F) and also F–T > 0 and T–F > 0. If inverted and upright face aftereffects derive from different multidimensional spaces, with a specific face space tapped only by upright aftereffects, then we might predict that the first pattern would be obtained for inverted aftereffects, and either the second or the third pattern for upright aftereffects. 
Methods
Participants
Six new Caucasians participated, all experienced psychophysical observers from the Australian National University community (age range: 20–31, 3 females) with normal or corrected-to-normal vision. Each was paid $80 for approximately 8 h of testing. 
Design
The experiment was a 4 (adapt–test condition: F–F, F–T, T–T, T–F) × 2 (orientation: upright, inverted) within-subjects design. Each subject received a different random order of the 8 conditions. The adaptor was a +50 pixel distortion, for both faces and Ts. 
Stimuli
Face stimuli were identical to those in Experiment 1. The zero T stimulus was the standard Arial font capital "T"; subjects' baseline PSEs confirmed that this stimulus was perceived as the most normal, or very close to it. To make the manipulated Ts, the horizontal bar was moved up (+) and down (−) using Adobe Photoshop CS2. A "pixel" was defined in reference to a face image sized 370 (vertical) × 310 (horizontal) pixels. This ensured that our physical manipulation of the T stimuli was identical to that of the faces; Figure 1B shows both stimulus types overlaid on top of one another at undistorted (+0 pixel) and adaptor (+50 pixels) values. The horizontal bar of the T was shifted up and down in 29 levels (0, ±1, ±2, ±3, ±4, ±5, ±6, ±7, ±8, ±9, ±10, ±12, ±14, ±18, and ±24 pixels) to create the test values, the same test values used for faces (see Figure 1C). The horizontal bar was shifted up to +50 pixels to create the adaptor. For presentation purposes, adaptor faces/Ts were resized to 227 × 190 pixels (viewing angle of 7.9° vertical by 5.7° horizontal) and test faces/Ts to 298 × 250 pixels (10° vertical by 7.9° horizontal). 
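The sketch below shows one way T stimuli of this general kind could be generated programmatically. The authors made their stimuli in Adobe Photoshop; all dimensions here (bar thickness, crossbar width, default positions) are placeholder values, and only the geometry of the manipulation, i.e., the horizontal bar shifted vertically with the vertical stroke following it, is taken from the text.

```python
# Illustrative construction of a T stimulus with its crossbar shifted by d px.
# Dimensions are placeholders, not the values used to build the actual stimuli.
import numpy as np

def make_t(d=0, height=370, width=310, bar_thickness=14,
           crossbar_width=160, crossbar_row=120, stroke_bottom=300):
    """Return a binary image of a T whose horizontal bar is shifted up by d pixels."""
    img = np.zeros((height, width), dtype=np.uint8)
    row = crossbar_row - d                      # +d moves the crossbar up (image rows grow downward)
    col_mid = width // 2
    # horizontal bar (the analogue of the eyes)
    img[row:row + bar_thickness,
        col_mid - crossbar_width // 2:col_mid + crossbar_width // 2] = 1
    # vertical stroke runs from the crossbar down to a fixed bottom (the "mouth" analogue)
    img[row:stroke_bottom, col_mid - bar_thickness // 2:col_mid + bar_thickness // 2] = 1
    return img

adaptor_t = make_t(d=50)    # analogue of the +50 px adaptor
test_t_0 = make_t(d=0)      # undistorted test value
```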
Procedure
The general testing procedure was identical to that of Experiment 1. For the conditions in which the test stimuli were Ts (T–T and F–T), the question was "Was the horizontal bar on the T too high or too low?" Subjects were instructed to judge T-shapes based on comparison with their imagined average T. Subjects had at least a 24-h gap between any two adapt–test conditions, a time delay that has previously been demonstrated to prevent any carryover from the previous condition tested (Robbins et al., 2007; Susilo et al., 2010). 
Results
All 96 psychometric curves (6 subjects × 8 conditions, each with separate baseline and adapted curves) produced excellent fits, all R² > 0.95. Aftereffect results are shown in Figure 5. We first examined the two control conditions (F–F and T–T). Aftereffect magnitude for F–F and T–T was virtually identical in the upright orientation, t(5) = 0.16, p = 0.88, and there was no significant difference in the inverted orientation, t(5) = 1.66, p = 0.16. These results argue that Ts were able both to produce and to display aftereffects of a similar magnitude to faces, consistent with expectations given that we had equated the eye-height and bar-height manipulations in terms of physical deviation. 
Turning to the key questions, a two-way ANOVA of stimulus condition (F–F, F–T, T–T, T–F) by orientation (upright, inverted) revealed a significant interaction, F(3, 15) = 5.50, MSE = 1.96, p = 0.009. This interaction reflected different patterns of transfer for the upright and inverted orientations. For upright, results implied that aftereffects derive from a combination of both face-specific and shape-generic mechanisms. Demonstrating some face-specific component, aftereffects for F–F (M = 6.06, SE = 0.43) were larger than for F–T (M = 3.57, SE = 0.86), t(5) = 3.00, p < 0.05; also, aftereffects for T–T (M = 5.97, SE = 0.84) were larger than for T–F (M = 1.79, SE = 0.78), t(5) = 4.66, p < 0.01. Demonstrating some shape-generic component, substantial aftereffects were observed for transfer across stimulus types: one-sample, two-tailed t-tests revealed that aftereffects were significantly greater than zero for F–T, t(5) = 4.17, p = 0.009, and approached significance for T–F, t(5) = 2.28, p = 0.072. To calculate the relative proportions of the face-specific and shape-generic contributions, we computed, for each observer, the aftereffect in each transfer condition as a proportion of its relevant control condition (i.e., F–T as a proportion of F–F and T–F as a proportion of T–T). Averaging the resulting 12 scores (6 subjects × 2 proportion scores) indicated that 55% of the aftereffect for upright faces had a face-specific origin, while 45% had a shape-generic origin (i.e., was shared between faces and Ts). 
For inverted, results implied that aftereffects derive only from shape-generic mechanisms. Aftereffects for F–F (M = 4.55, SE = 0.51) were no different than for F–T (M = 4.63, SE = 1.09), t(5) = 0.07, p = 0.95, and aftereffects for T–T (M = 3.21, SE = 0.68) were no different than for T–F (M = 2.77, SE = 0.65), t(5) = 1.72, p = 0.15. Further, aftereffects in the two transfer conditions were both significantly greater than zero: for F–T, t(5) = 4.25, p = 0.008, and for T–F, t(5) = 4.26, p = 0.008. In contrast to the upright results, calculation of proportion-transfer scores indicated that 92% of the inverted face aftereffect was shape-generic, and virtually none (8%) was face-specific. 
The analysis above has treated the adaptor as the condition held constant (e.g., faces), and examined transfer of this constant adaptation to each type of test stimulus (i.e., faces and Ts). This follows the procedure used in previous face studies assessing transfer of adaptation (e.g., across orientations in Watson & Clifford, 2003, 2006). However, it could also be argued that perhaps one should keep the test condition constant and assess transfer via the effect of different adaptor conditions (i.e., compare T–T with F–T, and F–F with T–F). Results from this approach led to the same conclusions as previously, in both upright and inverted orientations. For upright, T–T was larger than F–T, t(5) = 4.31, p = 0.008, and F–F was larger than T–F, t(5) = 5.04, p = 0.004. Averaging the 12 proportion scores (i.e., T–F as a proportion of F–F, and F–T as a proportion of T–T) gave relative proportions of face-specific and shape-generic contributions of 57% and 43%, respectively. For inverted, T–T was not greater than F–T, t(5) = 1.11, p = 0.316 (indeed, the trend was in the wrong direction, see Figure 5), and F–F was numerically but not significantly greater than T–F, t(5) = 2.16, p = 0.08. Averaging the 12 proportion scores gave a face-specific contribution of <0% and a shape-generic contribution of >100% (and even removing one subject with an outlying result of F–T ≫ T–T gave a face-specific contribution of 5% and a shape-generic contribution of 95%). 
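The proportion-transfer computation used in both analyses above can be written compactly, as in the Python sketch below (F–T as a proportion of F–F, T–F as a proportion of T–T, averaged over the 12 scores); the per-subject aftereffect values are invented placeholders, not the study's data.

```python
# Proportion-transfer calculation: shape-generic vs. face-specific components.
import numpy as np

# Hypothetical per-subject aftereffects (px) for one orientation; the numbers
# are invented placeholders, not the study's data.
FF = np.array([6.0, 5.5, 7.0, 6.5, 5.0, 6.4])   # adapt face, test face
FT = np.array([3.0, 3.5, 4.5, 2.5, 3.8, 4.1])   # adapt face, test T
TT = np.array([6.2, 5.0, 7.1, 5.5, 6.3, 5.7])   # adapt T, test T
TF = np.array([1.5, 2.0, 1.0, 2.5, 1.8, 2.2])   # adapt T, test face

# Each transfer condition as a proportion of its own-class control, then
# averaged over the 12 scores (6 subjects x 2 directions), as described above.
shape_generic = np.mean(np.concatenate([FT / FF, TF / TT]))
face_specific = 1 - shape_generic
print(f"shape-generic: {shape_generic:.0%}, face-specific: {face_specific:.0%}")
```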
Discussion
Experiment 2 found that inverted faces showed almost complete (92%) transfer of aftereffects between faces and Ts, while upright faces showed a much smaller although significant shape-generic component (45%) together with a substantial face-specific component (55%) that was not shared with Ts. These results argue that, although both upright and inverted face aftereffects show opponent coding (Experiment 1), they derive from different stages of the visual system. Inverted face aftereffects derive from a shape-generic mechanism or mechanisms (of either mid- or high-level origin, an issue considered in the General discussion section). In contrast, upright face aftereffects derive partly from shape-generic mechanisms but also have a substantial component arising from a face-specific face space. 
Experiment 3: Coding model for T aftereffects
Experiment 2 results suggest that inverted eye-height aftereffects derive from shape-generic mechanisms that are shared with T-shapes. If this is correct, then an essential prediction is that T aftereffects must, like inverted face aftereffects, show opponent coding. This seems plausible in that several studies have indicated opponent rather than multichannel coding for other types of basic shape dimensions (Kayaert et al., 2005; Pasupathy & Connor, 2001; Suzuki, 2005), leading Kayaert et al. to suggest that multidimensional shape space uses norm-based (i.e., opponent) coding. The aim of Experiment 3 was to test whether bar height in T-shapes, and particularly inverted T-shapes, is coded in an opponent or multichannel manner. 
Following the logic of Experiment 1, we tested adaptor positions varying in distance from the average, across the same range of manipulation as was applied to our faces. Experiment 3A tested our two more extreme adaptor positions of +20 and +50 pixels, for both upright and inverted Ts. These positions were selected because it is predictions for extreme values that most clearly dissociate opponent and multichannel models. To confirm our findings, Experiment 3B focused on inverted Ts only and tested all three of our adaptor positions (+5, +20, +50). Our proposal that inverted face aftereffects derive primarily from shape-generic mechanisms that also code T-shapes requires that we should always observe aftereffects that increase with increasing distance of the adaptor from the average T (Figure 3A). In contrast, if we find a decreasing or peaked pattern (Figure 3B), this would support multichannel coding and would thus refute our proposal. 
Methods
Participants
Experiment 3A subjects were 4 new Caucasian students from the Australian National University (age range: 24–28, 1 female) paid $40 for approximately 4 h of testing. Experiment 3B subjects were three experienced psychophysical observers (including the first author; age range: 28–34, 1 female) who were tested voluntarily for approximately 3 h per subject. All reported normal or corrected-to-normal vision. 
Design, stimuli, and procedure
Experiment 3A was a 2 (adaptor position: +20, +50) × 2 (orientation: upright, inverted) within-subjects design. Experiment 3B tested each subject on all three inverted T conditions (+5, +20, and +50). Each subject received a different random order of conditions, with a delay of at least 24 h between each. We used the same T-shape stimuli and testing procedure as for the T–T condition of Experiment 2. 
Results
All 32 psychometric curves for Experiment 3A (4 subjects × 4 conditions, each with separate baseline and adapted curves) produced excellent fits, all R² > 0.93. The same was true for the 18 curves in Experiment 3B (3 subjects × 3 conditions, each with baseline and adapted curves), all R² > 0.94. 
Aftereffect results are shown in Figure 6. For both inverted and upright T-shapes, results showed aftereffects increasing with adaptor position, indicating opponent rather than multichannel coding. For upright (Experiment 3A only), aftereffects at +50 (M = 6.46, SE = 0.82) were larger than at +20 (M = 3.12, SE = 0.66), t(3) = 5.55, p = 0.01. For inverted, in Experiment 3A, aftereffects at +50 (M = 4.21, SE = 0.97) were larger than at +20 (M = 0.43, SE = 0.58), t(3) = 4.42, p < 0.05. In Experiment 3B, aftereffects at +50 (M = 3.91, SE = 0.78) were larger than at +20 (M = 0.97, SE = 0.26), t(2) = 6.55, p = 0.02, which in turn were larger than at +5 (M = −0.02, SE = 0.09), t(2) = 5.12, p = 0.03.
Figure 6
Results of Experiments 3A and 3B, showing aftereffect size for the adaptor positions averaged across subjects and indicating opponent coding (i.e., larger aftereffects for more extreme adaptor positions) for T-shapes. (a) Results of Experiment 3A, testing upright and inverted Ts at the two more extreme adaptor positions. (b) Results of Experiment 3B, testing inverted Ts at all three of our adaptor positions (i.e., covering the same range as tested for faces in Experiment 1). Error bars show ±1 SEM.
 
In a final analysis, we examined inversion effects on the size of T-shape aftereffects. To maximize power, we combined data from Experiments 2 and 3A, to give 10 subjects who completed an identical condition: T–T with +50 adaptor. For this condition, aftereffects were significantly smaller for the inverted orientation (M = 3.61, SE = 0.58) than for upright (M = 6.16, SE = 0.55), t(9) = 3.69, p < 0.01. The implication of this observation is considered in the General discussion section. 
Discussion
Experiments 3A and 3B revealed opponent coding for bar height in inverted T-shapes. Given that Experiment 1 showed opponent coding for eye height in inverted faces, this finding is consistent with our proposal that our inverted face aftereffects derive entirely from shape-generic mechanisms that also code T-shapes. Experiment 3A also supported opponent coding for upright T-shapes. This argues that a generic T-shape coding mechanism is a plausible origin of the shape-generic components of upright face aftereffects observed in Experiment 2. 
General discussion
The aim of the present study was to ask whether there is a fundamental difference between upright and inverted face aftereffects. Using an eye-height manipulation, Experiment 1 showed upright and inverted eye-height aftereffects both derived from opponent (norm-based) coding. Experiment 2 revealed that inverted-face eye-height aftereffects showed almost complete transfer to bar height in simple T-shapes (92%), while upright-face eye-height aftereffects showed only partial transfer to T-shapes (45%) with the remainder face-specific (55%). Experiment 3 found opponent coding of bar height in both inverted and upright T-shape aftereffects. We discuss these findings in the context of the interrelated questions we posed in the Introduction section: (i) whether upright and inverted aftereffects are qualitatively different, (ii) why inverted face aftereffects have looked similar to upright face aftereffects in previous studies, and (iii) whether it is a valid assumption that upright face aftereffects derive from, and thus can be used as tools to inform us about, face space. 
Is there any qualitative difference between upright and inverted face aftereffects?
The present study found that despite their apparent similarity in previous studies, upright and inverted face aftereffects are fundamentally different. Specifically, although both upright and inverted aftereffects follow an opponent coding model, the aftereffects in the two orientations derive from different stages in the visual system. The almost complete transfer between faces and T-shapes in the inverted orientation implies that inverted face aftereffects derive only from shape-generic mechanisms, while the partial transfer between faces and T-shapes in the upright orientation implies that upright face aftereffects originate from a combination of shape-generic and face-specific mechanisms. Further, the opponent coding of T-shapes confirms that generic T-shape coding mechanisms are indeed a plausible origin of the shape-generic component. 
These results are consistent with the idea that upright aftereffects derive from both holistic face-specific and part-based shape-generic contributions, while inverted aftereffects derive only from the part-based shape-generic system (cf. Guo et al., 2009; Watson & Clifford, 2003, 2006). They are inconsistent with another proposal suggesting that both upright and inverted aftereffects derive from the same face system that merely codes inverted faces with less sensitivity than upright faces (Guo et al., 2009; Watson & Clifford, 2006). 
We have therefore presented a solution to the puzzle of inverted face aftereffects. Our study shows that the face aftereffect literature can be consistent with evidence of qualitative differences between upright and inverted face processing obtained using other paradigms in the face perception literature. These include behavioral studies of holistic processing, neuropsychological studies showing double dissociation, and fMRI studies suggesting functional dissociations of upright and inverted faces between different cortical regions (Duchaine et al., 2006; Epstein et al., 2005; Moscovitch et al., 1997; Tanaka & Farah, 1993; Young et al., 1987; Yovel & Kanwisher, 2005). As such, the current study brings the face aftereffect literature closer to the literature on holistic/configural processing and inversion effects in general. 
Why have inverted face aftereffects looked similar to upright face aftereffects?
The present study also explains why inverted aftereffects have looked similar to upright aftereffects in previous studies. There were two observations to be explained: the large size of inverted face aftereffects, and the occurrence of such aftereffects for all manipulation types tested to date (figural, gender, identity, and so on). Regarding size, inverted face aftereffects across studies (e.g., present Experiment 1, Rhodes et al., 2009; Webster & MacLin, 1999) range from approximately 50% of upright aftereffects to more than 100%. This large size is a natural outcome of our finding that inverted face aftereffects derive from opponent coding (Experiment 1), together with the fact that previous studies have used adaptor positions that are relatively far from the average, resulting in adaptors that look very distorted (see, for example, Figure 1 of Webster & MacLin, 1999, and Figure 1A of Rhodes et al., 2004), or have used high identity strengths of the "anti-face" adaptor (e.g., Leopold et al., 2001). Opponent coding predicts larger aftereffects as the distance between the adaptor and the average increases, so these far-from-average adaptors will produce substantial aftereffects for inverted faces. Moreover, because upright face aftereffects also derive from opponent coding, and because all studies used the same physical distortion level for inverted adaptors as for upright adaptors, the inverted face aftereffects would be predicted to be of the same order of magnitude as the upright face aftereffects (although they may differ in exact size; see the Quantitative comparisons of upright and inverted aftereffects section). 
We now turn to the occurrence of inverted face aftereffects for all manipulation types tested to date. Our explanation of this broad scope is as follows. For eye height, our results imply that inverted face aftereffects originate in a generic representation of T-shapes (Experiment 2) that uses opponent coding (Experiment 3). However, previous studies have also demonstrated or implied opponent coding of many other basic shape properties. Aftereffects occur for shape properties including convexity–concavity (Regan & Hamstra, 1992) and aspect ratio (Suzuki, 2005). Single-cell studies in monkeys have also reported opponent-like, monotonic tuning for whether a shape (e.g., a square) tapers toward the top or the bottom, has left versus right curvature of its main axis, and has outward versus inward curvature of its sides (Kayaert et al., 2005; Pasupathy & Connor, 2001). Putting these findings together argues that the visual processing stream includes a multidimensional shape space (or possibly more than one such space), used for representing component shapes of many different objects. Activation of this space by inverted faces would then produce aftereffects for many different distortion types. For example, inverted aftereffects to global expansion–contraction could be explained by adaptation of three-dimensional convexity–concavity, while inverted identity aftereffects could be explained by adaptation to component shapes within the face (e.g., the amount by which the nose tapers toward the top relative to the bottom, the aspect ratio of an eye, etc). Note that we make no claims as to whether the shape space that supports inverted face aftereffects derives from mid-level or high-level vision. Single-cell studies in monkeys found opponent shape coding in both mid-level areas (V4; Pasupathy & Connor, 2001) and high-level areas (inferotemporal cortex; Kayaert et al., 2005), and fMRI studies in humans reported stronger responses for inverted relative to upright faces in both mid-level (Gilaie-Dotan, Gelbard-Sagiv, & Malach, 2010) and high-level areas (Aguirre et al., 1999; Epstein et al., 2005; Haxby et al., 1999). Based on current evidence, therefore, both mid-level and high-level origins of inverted face aftereffects remain plausible. 
Do upright face aftereffects provide a valid tool to study face space?
Previous authors have noted that because upright faces activate many stages of the visual system, face aftereffects could derive from a combination of all these stages (e.g., Yamashita et al., 2005; Zhao & Chubb, 2001). Indeed, it has been demonstrated that there is some degree of retinotopy in upright face aftereffects (Afraz & Cavanagh, 2008), a result that previous face aftereffect studies have taken as evidence of a low-level contribution (e.g., Zhao & Chubb, 2001; but see Hemond, Kanwisher, & Op de Beeck, 2007 for fMRI evidence that some degree of retinotopy is retained even in high-level areas). However, the fact that a substantial proportion of a face aftereffect survives manipulations of size and other low-level image statistics implies that much of the aftereffect must derive from mid-level and/or high-level visual areas. 
A critical question is how much of an upright face aftereffect derives from face space: that is, from a high-level representation of face structure that codes the dimensions needed to individuate faces but not other objects. In previous studies, it has been widely assumed that face aftereffects derive largely from face space, and thus that face aftereffects can be used as a tool to investigate the properties of face space; yet these assumptions appear challenged by the existence of similar aftereffects for inverted faces. Against this backdrop, the present study argues that the traditional assumptions are to a large extent valid. We found that more than half of our upright face aftereffect (∼55%) was face-specific, arguing for an origin within face space. This implies that researchers can continue to use face aftereffects as a paradigm to investigate theoretical questions about face space, albeit with one important qualification. 
We also found that part of our upright face aftereffect (∼45%) was not face-specific and presumably derived from some multidimensional "shape space." This suggests that, in general, upright face aftereffects have both face-space and shape-space contributions. This would mean that properties inferred from studies of upright aftereffects will to some extent reflect properties of shape space rather than face space. Moreover, it is possible that the proportion of the aftereffect coming from face space may vary across different types of face manipulations. There is no guarantee that the 55/45 ratio reported here for eye height would apply to other common manipulation types (e.g., contracted–expanded or Dan–antiDan identity manipulations). Where possible, it may therefore be necessary to isolate the face-specific component of the aftereffect to be confident that a result (e.g., similar face aftereffects in children and in adults) truly reflects the properties of face space rather than some other component of the aftereffect. The present study provides a means for doing this, at least for eye-height aftereffects: subtracting F–T from F–F gives us an estimate of the face-specific component. This approach could be of value in future studies. 
Quantitative comparisons of upright and inverted aftereffects
A subsidiary finding of our study was that, for the same fixed physical adaptor values, inverted eye-height aftereffects were significantly smaller in magnitude than upright eye-height aftereffects. Two points of discussion follow from this result. First, both the smaller inverted aftereffects and the well-established finding of poorer discrimination for inverted than for upright faces (i.e., for pairs that differ by a fixed amount of eye height; e.g., Sekunova & Barton, 2008; Susilo et al., 2010) can be explained by our evidence that the inverted aftereffects arose purely from shape space. That is, our results suggest that coding based on the component shapes of the image (i.e., “shape space”) yields lower overall sensitivity than coding shape as a deviation from a whole-face average (i.e., “face space”). 
Second, the amount by which inverted aftereffects are smaller than upright ones may provide an indirect, but potentially useful, measure of the proportion of the aftereffect coming from face space. Consistent with this idea, the literature contains at least some suggestion that the ratio of inverted to upright aftereffect size is correlated with the “face specificity” of the manipulation type. 2 (We limit the studies considered to those with moderate sample sizes; other studies found different results but reported inverted data for only two subjects; Leopold et al., 2001; Watson & Clifford, 2006.) Global expansion or contraction of all regions of the image is a very generic manipulation that can easily be applied to non-face objects; that is, the manipulation makes no reference to face structure per se. Correspondingly, the expansion–contraction manipulation has produced aftereffects that show only a weak influence of inversion (inverted 80% of upright in Webster & MacLin, 1999; 83% in Watson & Clifford, 2003), arguing for a largely shape-space origin with only a small contribution from face space. Our eye-height manipulation, in contrast, makes direct reference to face structure and can be applied to other objects only where they also contain a central, symmetric T-like structure; correspondingly, we found a substantial inversion effect (inverted aftereffect 60% of upright; see Experiment 1). Finally, other studies have used the identity aftereffect. The identity manipulation is defined with reference to a structure that exists only for faces: multiple individual faces are morphed together to create an average face, and “antiDan” is then created by morphing along a trajectory from “Dan” through the average to the other side of the space. Using this manipulation, Rhodes et al. (2009) reported the largest inversion effect to date (inverted aftereffect only approximately 50% of upright). 
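Purely for illustration, the sketch below restates these published proportions as inverted/upright ratios. The helper function is hypothetical and the upright magnitudes are normalized to 1.0, so the numbers carry no information beyond the percentages already quoted in the text.

```python
# Minimal sketch (hypothetical helper): the inverted/upright ratio of aftereffect
# magnitude as a rough proxy for how much of the upright aftereffect is face-specific.
# Upright values are normalized to 1.0; inverted values restate proportions from the text.

def inversion_ratio(upright: float, inverted: float) -> float:
    """Inverted aftereffect expressed as a proportion of the upright aftereffect."""
    return inverted / upright

# Expansion-contraction: weak inversion effect, largely shape-space origin
print(inversion_ratio(upright=1.0, inverted=0.80))  # ~0.80 (Webster & MacLin, 1999)
# Eye height (present study): substantial inversion effect
print(inversion_ratio(upright=1.0, inverted=0.60))  # ~0.60 (Experiment 1)
# Identity (Dan/antiDan): largest inversion effect reported
print(inversion_ratio(upright=1.0, inverted=0.50))  # ~0.50 (Rhodes et al., 2009)
```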
Thus, in future studies it may be valuable to consider the size of the inversion effect on the aftereffect as a proxy for the degree of face specificity of that aftereffect. Note, however, that the validity of doing so depends on how one interprets our finding that aftereffects were also significantly larger for upright than for inverted Ts (see Experiment 3). Our logic above presumes that shape space shows no orientation sensitivity at all. This assumption could be reconciled with the observed orientation sensitivity of T aftereffects if upright T aftereffects derive from a combination of a “letter space” and generic shape space; this idea is not implausible given that upright letters are highly familiar stimuli and produce strong activation in left-hemisphere high-level visual regions that are sensitive to word structure and so are not shape-generic (Baker et al., 2007). Alternatively, the orientation sensitivity for Ts could reflect orientation sensitivity of shape-generic aftereffects themselves (a result potentially consistent with findings that inversion effects on memory and discrimination are not usually zero for non-face objects, only smaller than for faces; for reviews, see McKone & Robbins, in press; Rossion, 2008). Overall, we conclude that for many manipulation types it may be more complex than it first appears to determine what proportion of the upright face aftereffect derives from face space. In the present study we were able to do so for eye height, but only because there exists a simple shape manipulation (bar height in a T) that fully captures the manipulation made to the face, allowing us to test transfer of face adaptation to this test stimulus. 
The relationship between face space and holistic processing
Finally, the present study speaks to the broader issue of theoretical links between the concepts of face space and holistic processing. Both concepts have frequently been used to explain how it is that most adult humans are so good at individuating faces. However, as recently noted (McKone, 2009), there has been little theoretical contact between them in the literature. Face space is traditionally referred to when researchers explain why identity discrimination is better for some faces than for others (e.g., distinctive versus typical faces, Valentine & Bruce, 1986; own-race versus other-race faces, Valentine & Endo, 1992), while holistic processing is traditionally referred to when researchers explain why identity discrimination for potentially exactly the same faces is better when upright than inverted, or better than identity discrimination for non-face objects (e.g., Maurer, Le Grand, & Mondloch, 2002; Yin, 1969). Yet both face space and holistic processing, in the final analysis, seek to explain exactly the same phenomenon of how humans individuate faces. So what is the relationship between face space and holistic processing? 
One possibility is that face space and holistic processing are fundamentally different constructs and contribute independently to face recognition ability, either as parallel or as sequential processing modules. The previous evidence apparently suggesting a common mechanism for upright and inverted face aftereffects could have been taken as supporting this view, in that those findings suggested face space codes both upright and inverted faces, which contrasts with the extensive evidence that holistic processing is limited to upright faces (for reviews, see Maurer et al., 2002; McKone, Kanwisher, & Duchaine, 2007; Rossion, 2008). An alternative possibility is that face space and holistic processing are essentially the same construct and derive from the same processing stage, a view that is potentially consistent with our current results. We found that inverted aftereffects derive from shape-generic mechanisms, whereas only upright aftereffects derive partly from face space. These results imply that face space, like holistic processing, is strongly sensitive to orientation, and so support the idea that face space and holistic processing could essentially be the same construct. 
Conclusion
The current study presents a solution to the problem of inverted face aftereffects. Using eye-height aftereffects, we showed that inverted face aftereffects are generated by shape-generic mechanisms, whereas upright face aftereffects derive from both shape-generic and face-specific mechanisms. We also found that coding along dimensions of both the shape-generic and the face-specific space follows the predictions of an opponent model. Our results imply that upright face aftereffects can be used as a tool to investigate theoretical questions about the perceptual and neural properties of face space, but with the important caveat that part of the upright aftereffect derives from a generic shape space. In demonstrating a fundamental effect of inversion on the origin of face aftereffects, our study also brings the face space literature closer to the extensive literatures on holistic/configural processing of faces, neuroimaging of faces, and neuropsychological dissociations in prosopagnosia. 
Acknowledgments
This research was supported by Australian Research Council grants DP0450636 and DP0984558 to EM and ME. TS is grateful for scholarship support from the ANU Center for Visual Sciences and for an overseas student fee waiver from the ANU Department of Psychology. 
Commercial relationships: none. 
Corresponding author: Tirta Susilo. 
Address: Department of Psychology, Australian National University, Canberra, ACT 0200, Australia. 
Footnotes
1  All previous face aftereffect studies have also used physically identical adaptors upright and inverted: as in all literature on other types of face inversion effects, the theoretical interest is in the perceptual differences that arise despite the face stimuli being physically matched in both orientations.
2  This pattern of different inversion effect sizes also argues against a general attentional account of inversion effects on face aftereffect magnitude. Face aftereffects are reduced when subjects attend less to the face (Moradi, Koch, & Shimojo, 2005), and it is plausible that subjects attend less to inverted than to upright faces. However, an attentional account would predict equal inversion effect sizes regardless of manipulation type, because there is no reason to expect that inversion effects on attention would be modulated by the type of face manipulation.
References
Afraz S.-R. Cavanagh P. (2008). Retinotopy of the face aftereffect. Vision Research, 48, 42–54.
Aguirre G. K. Singh R. D'Esposito M. (1999). Stimulus inversion and the responses of face and object-sensitive cortical areas. Neuroreport, 10, 189–194.
Anderson N. D. Wilson H. R. (2005). The nature of synthetic face adaptation. Vision Research, 45, 1815–1828.
Baker C. I. Liu J. Wald L. Kwong K. Benner T. Kanwisher N. (2007). Visual word processing and experiential origins of functional selectivity in human extrastriate cortex. Proceedings of the National Academy of Sciences, 104, 9087–9092.
Behrmann M. Avidan G. Marotta J. J. Kimchi R. (2005). Detailed exploration of face-related processing in congenital prosopagnosia: 1. Behavioral findings. Journal of Cognitive Neuroscience, 17, 1130–1149.
Calder A. J. Jenkins R. Cassel A. Clifford C. W. (2008). Visual representation of eye gaze is coded by a nonopponent multichannel system. Journal of Experimental Psychology: General, 137, 244–261.
Clifford C. W. Wenderoth P. Spehar B. (2000). A functional angle on some after-effects in cortical vision. Proceedings of the Royal Society of London B, 267, 1705–1710.
Cohen J. D. MacWhinney B. Flatt M. Provost J. (1993). PsyScope: A new graphic interactive environment for designing psychology experiments. Behavior Research Methods, Instruments, & Computers, 25, 257–271.
Duchaine B. Yovel G. Butterworth E. J. Nakayama K. (2006). Prosopagnosia as an impairment to face-specific mechanisms: Elimination of the alternative hypotheses in a developmental case. Cognitive Neuropsychology, 23, 714–747.
Epstein R. A. Higgins J. S. Parker W. Aguirre G. K. Cooperman S. (2005). Cortical correlates of face and scene inversion: A comparison. Neuropsychologia, 44, 1145–1158.
Fang F. He S. (2005). Viewer-centered object representation in the human visual system revealed by viewpoint aftereffects. Neuron, 45, 793–800.
Freiwald W. A. Tsao D. Y. Livingstone M. S. (2009). A face feature space in the macaque temporal lobe. Nature Neuroscience, 12, 1187–1196.
Gilaie-Dotan S. Gelbard-Sagiv H. Malach R. (2010). Perceptual shape sensitivity to upright and inverted faces is reflected in neuronal adaptation. Neuroimage, 50, 383–395.
Goffaux V. Rossion B. (2007). Face inversion disproportionately impairs the perception of vertical but not horizontal relations between features. Journal of Experimental Psychology: Human Perception and Performance, 33, 995–1001.
Guo X. M. Oruc I. Barton J. J. S. (2009). Cross-orientation transfer of adaptation for facial identity is asymmetric: A study using contrast-based recognition thresholds. Vision Research, 49, 2254–2260.
Haxby J. V. Ungerleider L. G. Clark V. P. Schouten J. L. Hoffman E. A. Martin A. (1999). The effect of face inversion on activity in human neural systems for face and object perception. Neuron, 22, 189–199.
Hemond C. Kanwisher N. Op de Beeck H. P. (2007). A preference for contralateral stimuli in human object- and face-selective cortex. PLoS ONE, 2, e574.
Hills P. J. Holland A. M. Lewis M. B. (2010). Aftereffects for face attributes with different natural variability: Children are more adaptable than adolescents. Cognitive Development, 25, 278–289.
Jaquet E. Rhodes G. Hayward W. G. (2007). Opposite aftereffects for Chinese and Caucasian faces are selective for social category information and not just physical face differences. Quarterly Journal of Experimental Psychology, 60, 1457–1467.
Jeffery L. McKone E. Haynes R. Firth E. Pellicano E. Rhodes G. (2010). Four-to-six year old children use norm-based coding in face-space. Journal of Vision, 10(5):18, 1–19, http://www.journalofvision.org/content/10/5/18, doi:10.1167/10.5.18.
Jenkins R. Beaver J. D. Calder A. J. (2006). I thought you were looking at me: Direction-specific aftereffects in face perception. Psychological Science, 17, 506–513.
Kayaert G. Biederman I. Op de Beeck H. P. Vogels R. (2005). Tuning for shape dimensions in macaque inferior temporal cortex. European Journal of Neuroscience, 22, 212–224.
Lawson R. P. Clifford C. W. G. Calder A. J. (2009). About turn: The visual representation of human body orientation revealed by adaptation. Psychological Science, 20, 363–371.
Leopold D. A. Bondar I. V. Giese M. A. (2006). Norm-based face encoding by single neurons in the monkey inferotemporal cortex. Nature, 442, 572–575.
Leopold D. A. O'Toole A. J. Vetter T. Blanz V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94.
Little A. C. DeBruine L. M. Jones B. C. (2005). Sex-contingent face after-effects suggest distinct neural populations code male and female faces. Proceedings of the Royal Society of London B, 272, 2283–2287.
Maurer D. Le Grand R. Mondloch C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.
McKone E. (2009). Integrating holistic processing and face-space approaches to the coding of facial identity [Abstract]. Journal of Vision, 9(8):539, 539a, http://www.journalofvision.org/content/9/8/539, doi:10.1167/9.8.539.
McKone E. Aitkin A. Edwards M. (2005). Categorical and coordinate relations in faces, or Fechner's law and face space instead? Journal of Experimental Psychology: Human Perception and Performance, 31, 1181–1198.
McKone E. Kanwisher N. Duchaine B. (2007). Can generic expertise explain special processing for faces? Trends in Cognitive Sciences, 11, 8–15.
McKone E. Martini P. Nakayama K. (2001). Categorical perception of face identity in noise isolates configural processing. Journal of Experimental Psychology: Human Perception and Performance, 27, 573–599.
McKone E. Robbins R. (in press). Are faces special? In Calder A. J. Rhodes G. Johnston M. H. Haxby J. V. (Eds.), Handbook of face perception. Oxford, UK: Oxford University Press.
Moradi F. Koch C. Shimojo S. (2005). Face adaptation depends on seeing the face. Neuron, 45, 169–175.
Moscovitch M. Winocur G. Behrmann M. (1997). What is special about face recognition? Nineteen experiments on a person with visual object agnosia and dyslexia but normal face recognition. Journal of Cognitive Neuroscience, 9, 555–604.
Nishimura M. Doyle J. Humphreys K. Behrmann M. (2010). Probing the face-space of individuals with prosopagnosia. Neuropsychologia, 48, 1828–1841.
O'Leary A. McMahon M. (1991). Adaptation to form distortion of a familiar shape. Perception & Psychophysics, 49, 328–332.
Pasupathy A. Connor C. E. (2001). Shape representation in area V4: Position-specific tuning for boundary conformation. Journal of Neurophysiology, 86, 2505–2519.
Pellicano E. Jeffery L. Burr D. Rhodes G. (2007). Abnormal adaptive face-coding mechanisms in children with Autism Spectrum Disorder. Current Biology, 17, 1–5.
Quiroga R. Q. Reddy L. Kreiman G. Koch C. Fried I. (2005). Invariant visual representation by single neurons in the human brain. Nature, 435, 1102–1107.
Regan D. Hamstra S. J. (1992). Shape discrimination and the judgement of perfect symmetry: Dissociation of shape from size. Vision Research, 32, 1845–1864.
Rhodes G. Evangelista E. Jeffery L. (2009). Orientation-sensitivity of face identity aftereffects. Vision Research, 49, 2379–2385.
Rhodes G. Jeffery L. (2006). Adaptive norm-based coding of facial identity. Vision Research, 46, 2977–2987.
Rhodes G. Jeffery L. Watson T. L. Jaquet E. Winkler C. Clifford C. W. G. (2004). Orientation-contingent face aftereffects and implications for face-coding mechanisms. Current Biology, 14, 2119–2123.
Rhodes G. Leopold D. A. (in press). Adaptive norm-based coding of face identity. In Calder A. J. Rhodes G. Johnston M. H. Haxby J. V. (Eds.), Handbook of face perception. Oxford, UK: Oxford University Press.
Robbins R. Heck P. (2009). Brad Pitt & Jude Law: Individual-contingent face aftereffects and norm- versus exemplar-based models of face-space [Abstract]. Journal of Vision, 9(8):516, 516a, http://www.journalofvision.org/content/9/8/516, doi:10.1167/9.8.516.
Robbins R. McKone E. Edwards M. (2007). Aftereffects for face attributes with different natural variability: Adaptor position effects and neural models. Journal of Experimental Psychology: Human Perception and Performance, 33, 570–592.
Rossion B. (2008). Picture-plane inversion leads to qualitative changes of face perception. Acta Psychologica, 123, 274–289.
Schiltz C. Rossion B. (2006). Faces are represented holistically in the human occipito-temporal cortex. Neuroimage, 32, 1385–1394.
Sekunova A. Barton J. J. (2008). The effects of face inversion on the perception of long-range and local spatial relations in eye and mouth configurations. Journal of Experimental Psychology: Human Perception and Performance, 34, 1129–1135.
Susilo T. McKone E. Edwards M. (2010). What shape are the neural response functions underlying opponent coding in face space? A psychophysical investigation. Vision Research, 50, 300–314.
Suzuki S. (2005). High-level pattern coding revealed by brief shape aftereffects. In Clifford C. W. G. Rhodes G. (Eds.), Fitting the mind to the world: Adaptation and after-effects in high-level vision (pp. 135–172). Oxford, UK: Oxford University Press.
Tanaka J. W. Farah M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology A, 46, 225–245.
Valentine T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. Quarterly Journal of Experimental Psychology A, 43, 161–204.
Valentine T. Bruce V. (1986). The effect of distinctiveness in recognising and classifying faces. Perception, 15, 525–535.
Valentine T. Endo M. (1992). Towards an exemplar model of face processing: The effects of race and distinctiveness. Quarterly Journal of Experimental Psychology A, 44, 671–703.
Watson T. L. Clifford C. W. G. (2003). Pulling faces: An investigation of the face-distortion aftereffect. Perception, 32, 1109–1116.
Watson T. L. Clifford C. W. G. (2006). Orientation dependence of the orientation-contingent face aftereffect. Vision Research, 46, 3422–3429.
Webster M. A. MacLin O. H. (1999). Figural aftereffects in the perception of faces. Psychonomic Bulletin and Review, 6, 647–653.
Wichmann F. A. Hill N. J. (2001). The psychometric function: 1. Fitting, sampling, and goodness-of-fit. Perception & Psychophysics, 63, 1293–1313.
Yamashita J. A. Hardy J. L. DeValois K. K. Webster M. A. (2005). Stimulus selectivity of figural aftereffects for faces. Journal of Experimental Psychology: Human Perception and Performance, 31, 420–437.
Yin R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Young A. W. Hellawell D. Hay D. C. (1987). Configurational information in face perception. Perception, 16, 747–759.
Yovel G. Kanwisher N. (2005). The neural basis of the behavioral face-inversion effect. Current Biology, 15, 2256–2262.
Zhao L. Chubb C. (2001). The size-tuning of the face-distortion after-effect. Vision Research, 41, 2979–2994.