December 2019
Volume 19, Issue 14
Open Access
Are expression aftereffects fully explained by tilt adaptation?
Author Affiliations
  • Derek C. Swe
    ARC Centre of Excellence in Cognition and its Disorders, School of Psychological Science, The University of Western Australia, Perth, Australia
  • Nichola S. Burton
    ARC Centre of Excellence in Cognition and its Disorders, School of Psychological Science, The University of Western Australia, Perth, Australia
  • Gillian Rhodes
    ARC Centre of Excellence in Cognition and its Disorders, School of Psychological Science, The University of Western Australia, Perth, Australia
Journal of Vision December 2019, Vol.19, 21. doi:https://doi.org/10.1167/19.14.21
Abstract

Facial expressions are used as critical social cues in everyday life. Adaptation to expressions causes expression aftereffects. These aftereffects are thought to reflect the operation of face-selective neural mechanisms, and are used by researchers to investigate the nature of those mechanisms. However, recent evidence suggests that expression aftereffects could be at least partially explained by the inheritance of lower-level tilt adaptation through the visual hierarchy. We investigated whether expression aftereffects could be entirely explained by tilt adaptation. Participants completed an expression adaptation task in which we controlled for the influence of tilt by changing the orientation of the adaptor relative to the test stimuli. Although tilt adaptation appeared to make some contribution to the expression aftereffect, robust expression aftereffects still remained after minimizing tilt inheritance, indicating that expression aftereffects cannot be fully explained by tilt adaptation. There was also a significant reduction in the expression aftereffects after inverting the adapting face, providing evidence that face-selective processing is involved in these aftereffects.

Introduction
Facial expressions convey important social information about a person's emotional state and behavioral intentions (Horstmann, 2003). Expressions serve as a form of social communication, as the expressions that a person conveys affect how other people respond to them (Adams, Ambady, Macrae, & Kleck, 2006; Marsh, Ambady, & Kleck, 2005; Seidel, Habel, Kirschner, Gur, & Derntl, 2010). Processing expressions is therefore important, as the ability to identify and interpret expressions is a key factor in forming and maintaining human relationships, effective social interaction, and effective communication (Riggio, 2014). Moreover, deficits in recognizing and conveying expressions can have detrimental social outcomes (Kornreich et al., 2002). 
Visual adaptation is a powerful tool that can be used to investigate expression processing, and is widely used to research neural representation of faces (Leopold, O'Toole, Vetter, & Blanz, 2001; Webster, 2015; Webster & MacLeod, 2011). In an expression adaptation paradigm, individuals are shown faces, which biases their perception of subsequent faces. For example, after adapting to a happy face, the next face that the individual sees appears less happy (Fox & Barton, 2007). These adaptation paradigms have revealed important aspects of expression processing, such as prototype-referenced coding of expressions (Burton, Jeffery, Calder, & Rhodes, 2015; Burton, Jeffery, Skinner, Benton, & Rhodes, 2013; Skinner & Benton, 2010), interdependent processing of expression with sex or race (Bestelmeyer, Jones, DeBruine, Little, & Welling, 2010), and the nonindependence of face expression and identity coding (Rhodes et al., 2015). 
Faces are high-level, complex visual stimuli, and so expression aftereffects, along with other face aftereffects, have generally been understood to be generated at high levels of the visual hierarchy. Supporting this view, expression aftereffects have a strong positive relationship with expression recognition ability (Palermo et al., 2017; Rhodes et al., 2015). Nevertheless, facial expressions also contain relatively simple deformations of the face, based on curves and lines. For example, in the folded face illusion, by simply folding a picture of a face three times to form a distorted “W” shape, either a happy face or a sad face can be seen depending on whether the face is viewed from above the eye-line or below the eye-line (Benton, 2009). To the extent that low-level contours affect expression perception, low-level adaptation will contribute to expression aftereffects. Indeed, tilt adaptation can elicit expression aftereffects: adaptation to curved lines (Xu, Dayan, Lipkin, & Qian, 2008) and to tilted line gratings (Dickinson, Mighall, Almeida, Bell, & Badcock, 2012) has been shown to generate expression aftereffects in both cartoon and real faces. 
These low-level aftereffects originate in neurons in the early visual cortex, which are size, position, and orientation dependent (Knapen, Rolfs, Wexler, & Cavanagh, 2009). In contrast, face aftereffects can be robust to variations in these properties between the adapting and test faces (Rhodes et al., 2004), suggesting that they are also generated at higher levels of the visual processing hierarchy (Watson & Clifford, 2003). Typically, low-level inheritance effects are minimized using techniques such as size and position changes between adaptor and test stimuli and allowing free eye movements (Benton, Jennings, & Chatting, 2006; Jeffery, Rhodes, & Busey, 2006). 
These techniques may not, however, be enough to prevent inherited tilt adaptation from contributing to the measurement of expression aftereffects. Although tilt adaptation is retinotopic, it is also built up rapidly across eye movements, leading to the distribution of the tilt aftereffect across the visual field over time in what is termed a tilt aftereffect field (Blakemore & Over, 1974; Dickinson & Badcock, 2013). Most observers fixate on specific regions of the face in a highly predictable pattern (Caldara et al., 2005; Rutherford & Towns, 2008), making it very likely that tilt aftereffect fields develop in a systematic way across the retina even when there is free viewing of a face. Local orientation tuning functions are narrow (Thomas & Gille, 1979), making it possible that multiple tilt aftereffect fields could coexist over the same area of the visual field. Taken together, these properties raise the possibility that low-level tilt adaptation could produce aftereffects that imitate size and position invariance. For these reasons, Dickinson and colleagues have argued that current low-level control techniques may not completely remove the influence of low-level aftereffects on measures of expression adaptation (Dickinson, Almeida, Bell, & Badcock, 2010; Dickinson & Badcock, 2013; Dickinson et al., 2012). 
Given that tilt adaptation can induce aftereffects that bias the perceived expression of a face, and that tilt adaptation may potentially show a degree of size and position invariance, we cannot rule out the possibility that expression aftereffects can be fully explained by tilt adaptation. This is a concern for current theory, because previous studies that have used expression aftereffects to probe the nature of expression representation have relied on the assumption that those aftereffects reveal the action of high-level, face-selective mechanisms (Bestelmeyer et al., 2010; Burton et al., 2015; Burton et al., 2013; Rhodes et al., 2015; Skinner & Benton, 2010). In the face of this concern, it is important to establish whether or not the expression aftereffect reflects anything beyond the action of low-level tilt adaptation, and to identify any face-selective contribution that might exist. 
The present study
Here we sought to determine whether tilt adaptation could fully explain expression aftereffects, and whether expression aftereffects remain when low-level contributions, including tilt, are minimized. If they do, this would demonstrate that expression aftereffects do not consist solely of low-level adaptation, but also include mid-level and high-level adaptation, which would support their widespread use in the study of expression-processing mechanisms. 
To investigate the influence of tilt adaptation on expression aftereffects, we used an expression adaptation paradigm with three conditions (Figure 1). In the aligned condition, the adaptor and test stimuli have the same orientation (45° clockwise). In this condition, the aftereffect will reflect face adaptation, but will also contain any contribution of tilt adaptation and other retinotopic adaptation components. In the misaligned condition, we changed the orientation of the adaptor to a different orientation than the test face (45° counterclockwise, 45° clockwise, respectively). This rotation retinotopically displaces the test face relative to the adaptor, minimizing contributions from low-level, retinotopic adaptation (Afraz & Cavanagh, 2009; Rhodes, Jeffery, Watson, Clifford, & Nakayama, 2003). Importantly, it also changes the orientation of the features within the visual field, such that any tilt adaptation field produced by the adaptor should not be expected to have any meaningful effect on the perception of the test face (Figure 2). The misaligned condition does not, however, rule out contributions of mid-level shape, and/or higher-level non-face (object), adaptation. As a more exploratory goal, we also sought to determine whether there is some contribution from face-selective adaptation. Therefore, we included a third condition, the misaligned-inverted condition, where the adapting face was not only misaligned from the test orientation, but also inverted (i.e., orientation at 135° clockwise; Figure 1). The orientation difference between the adapt and test stimuli in this condition is the same as the misaligned condition (90°). The primary difference between these two conditions is that face-processing mechanisms are poorly engaged when the face is inverted (Rhodes et al., 2003; Sergent, 1984; Valentine, 1988; Valentine & Bruce, 1986; Yin, 1969; Yovel & Kanwisher, 2005). 
Thus, the difference between aftereffects in the misaligned and misaligned-inverted conditions is expected to reflect some contribution from face-selective adaptation. 
Figure 1
 
Examples of the three orientation conditions. The left face is the adaptor and the right face is the test face. In the aligned condition, the adaptor has an orientation of 45° clockwise and the test face also has an orientation of 45° clockwise. In the misaligned condition, the adaptor has an orientation of 45° counterclockwise and the test face has an orientation of 45° clockwise. In the misaligned-inverted condition, the adaptor has an orientation of 135° clockwise and test face has an orientation of 45° clockwise. The face stimuli are computer-generated morphs, and do not depict real people.
Figure 2
 
The aligned condition allows for the influence of tilt adaptation, as the face has the same orientation in both the adapt and test phases. In the misaligned condition, the tilt adaptation built up in the adaptation phase (indicated by the transparent overlay) will be in a different position relative to the face in the test phase, and there should therefore be minimal meaningful contribution to the expression aftereffect. In the misaligned-inverted condition, there is the same amount of angular difference between the adapt and test faces as the misaligned condition, and the effect of tilt adaptation will therefore be minimized to the same extent. However, in this condition, the adaptor is closer to inverted than upright, so face-selective processing should also be disrupted. Therefore, if the misaligned-inverted condition produces a weaker aftereffect than the misaligned condition, this difference could potentially be attributed to a decrease in face-selective adaptation.
To summarize, we will measure expression aftereffects while minimizing the contribution of tilt adaptation, and determine whether tilt adaptation can in fact fully account for expression aftereffects. We will also determine whether expression aftereffects reflect adaptation of higher-level, face-selective mechanisms, rather than only adaptation of more general (non-face) higher-level processing mechanisms. Given that expression aftereffects can be generated through adaptation to low-level stimuli, we expect to find a significant contribution of tilt adaptation and other low-level adaptation, indicated by larger aftereffects in the aligned than the misaligned condition. Critically, if tilt adaptation fully accounts for expression aftereffects, then the aftereffect should disappear in the misaligned condition. If it does not, this would indicate that expression aftereffects reflect higher-level adaptation. Finally, if expression aftereffects are reduced in the misaligned-inverted (relative to the misaligned) condition, where face-selective processing should be disrupted, it would provide evidence consistent with the adaptation of face-selective mechanisms being involved in expression aftereffects. 
Method
Participants
Participants were recruited from The University of Western Australia, School of Psychology first-year participant pool in exchange for course credit (N = 81), and from the wider UWA community (N = 15). As the face stimuli used in the tasks were of Caucasian faces, only Caucasian participants were recruited to avoid potential other-race effects (Meissner & Brigham, 2001). In total, there were 96 participants (26 men, 70 women), aged 17 to 40 years (M = 21 years, 6 months; SD = 5 years, 4 months). One participant was excluded because of missing data. There were no outliers. 
Stimuli
Stimuli from Skinner and Benton (2010) were used for the adaptation task. The face stimuli were constructed from 25 male and 25 female faces, each displaying each of the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise), as well as a neutral expression. These faces were averaged using morphing software to create a sex- and identity-neutral average for each expression. Morphing together these seven sex- and identity-neutral averages produced an ambiguous expression average that was used as the test face. For adaptors, we used anti-expressions. Expressions can be thought of as lying on one end of a continuum, with the ambiguous expression face in the middle, and the corresponding anti-expression on the opposite end, differing from the ambiguous expression to the same extent but in the opposite direction from the original expression (Figure 3). The benefit of anti-expression adaptors is that adaptation to an anti-expression produces an aftereffect that resembles the corresponding expression, a familiar percept that participants can easily identify (Burton, Jeffery, Bonner, & Rhodes, 2016; Burton et al., 2015; Butler, Oruc, Fox, & Barton, 2008; Skinner & Benton, 2010). Anti-expressions were created by morphing the facial features of each expression face towards the opposite end of its expression continuum. 
Figure 3
 
Stimuli from Skinner and Benton (2010), arranged in an expression “face-space.” The original expression (here, happy) lies on one end of a continuum, with the ambiguous expression face in the middle, and the corresponding anti-expression (here, anti-happy) on the opposite end, differing from the ambiguous expression to the same extent but in the opposite direction from the original expression. The face stimuli are computer-generated morphs, and do not depict real people.
The anti-expressions at 100% strength were used as adaptors, and the ambiguous face as the test stimulus. Expression faces at 80% strength were also used as test faces in catch trials, to ensure participants were not responding randomly and to maintain motivation by providing some easily identified expressions. The expression, anti-expression, and ambiguous face stimuli are presented in Figure 4. Depending on the condition, adaptor stimuli had an orientation of either 45° clockwise, 45° counterclockwise, or 135° clockwise, with test stimuli at a constant orientation of 45° clockwise (Figure 1). All face stimuli were presented in grayscale. 
Figure 4
 
Example stimuli from Skinner and Benton (2010). Top row shows original expression faces, and bottom row shows the corresponding anti-expression faces. The face stimuli are computer-generated morphs, and do not depict real people.
Procedure
Participants completed the adaptation task on a computer. A headrest was used to ensure that participants kept their heads upright throughout the experiment, and to maintain a viewing distance of 48 cm. Stimuli were presented at a visual angle of approximately 13° by 9.5°. The task took approximately 50 min to complete. 
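As a rough geometric check (not part of the original methods), the physical stimulus size implied by a 13° × 9.5° visual angle at a 48-cm viewing distance can be recovered from the standard visual-angle formula, size = 2 · d · tan(θ/2):

```python
import math

def size_from_visual_angle(angle_deg: float, distance_cm: float) -> float:
    """Physical extent (cm) subtending angle_deg at the given viewing distance."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg / 2))

width_cm = size_from_visual_angle(13.0, 48.0)   # ~10.9 cm
height_cm = size_from_visual_angle(9.5, 48.0)   # ~8.0 cm
```

This implies on-screen stimuli of roughly 10.9 × 8.0 cm under the stated viewing conditions.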
On each trial, participants were shown an adapting stimulus for 8 s. Next, the test stimulus was shown for 200 ms, followed by the instruction to judge which of the six basic expressions the test face displayed using the numbered keyboard keys (1 to 6). The durations of the adaptor and test stimuli were based on the findings of Burton et al. (2016), who found that 8 s of adaptation time and 200 ms of test time produced strong aftereffects. The adaptation task consisted of three orientation conditions, with six expressions for each condition and 12 repetitions for each expression, plus 36 catch trials, leading to a total of 252 trials. The trials were divided into six blocks, with equal numbers of each anti-expression and catch trials in each block. The trials were self-paced, with a break between each block (see Figure 5 for an example trial). To reduce retinotopic adaptation, free eye movements were allowed. 
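The trial arithmetic above can be verified directly (a sketch; the per-block composition is inferred from the stated counts, not described explicitly by the authors):

```python
conditions, expressions, repetitions = 3, 6, 12
adaptation_trials = conditions * expressions * repetitions  # 216 anti-expression trials
catch_trials = 36
total_trials = adaptation_trials + catch_trials             # 252 trials in total

blocks = 6
per_block = total_trials // blocks  # 42 trials per block: 36 adaptation + 6 catch
```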
Figure 5
 
Example trial of the expression adaptation task. Anti-expression faces are shown in the adaptation phase, followed by the ambiguous face in the test phase, after which a response must be made, and one of the six emotion words shown on the screen must be chosen.
Results
Aftereffect size was calculated as the percentage of “corresponding” responses (i.e., responses giving the emotion corresponding to the anti-expression adaptor; e.g., responding “happy” after adapting to an anti-happy face). Mean aftereffect sizes across conditions are presented in Figure 6. 
Figure 6
 
Mean aftereffect sizes across orientation condition. Aftereffect size calculated as the proportion of responses opposite to the adaptor. Individual data points are shown for each condition. Error bars indicate ±1 standard error of the mean. The dotted line represents chance level (16.67%).
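The aftereffect measure (percentage of “corresponding” responses) can be sketched as follows; the trial representation and labels are a hypothetical illustration, not the authors' analysis code:

```python
# Map each anti-expression adaptor to its corresponding expression.
CORRESPONDING = {
    "anti-anger": "anger", "anti-disgust": "disgust", "anti-fear": "fear",
    "anti-happy": "happy", "anti-sad": "sad", "anti-surprise": "surprise",
}

def aftereffect_size(trials):
    """Percentage of trials on which the response matched the expression
    corresponding to the adaptor. trials: list of (adaptor, response) pairs."""
    hits = sum(1 for adaptor, response in trials
               if CORRESPONDING[adaptor] == response)
    return 100 * hits / len(trials)

# Example: 2 of 4 responses correspond to the adaptor's expression -> 50%.
trials = [("anti-happy", "happy"), ("anti-happy", "sad"),
          ("anti-anger", "anger"), ("anti-fear", "surprise")]
print(aftereffect_size(trials))  # 50.0
```

Chance level under this measure is 1/6 ≈ 16.67%, since a random response matches the corresponding expression one time in six.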
To test the possibility that the effects observed above were driven by only one or two of the expressions, we ran a two-way repeated-measures ANOVA to examine the effects of orientation condition and expression on aftereffect size. Degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity where the assumption of sphericity was violated. There was a significant main effect of the orientation of the adapting faces on the size of the expression aftereffects, F(1.49, 141.26) = 373.79, p < 0.01, ηp2 = 0.80, and a significant main effect of the expression category of the adapting faces, F(4.20, 399.23) = 19.70, p < 0.01, ηp2 = 0.17. The interaction satisfied the assumption of sphericity, χ2(54) = 56.88, p = 0.37, and was significant, F(10, 950) = 24.03, p < 0.01, ηp2 = 0.20. Despite this interaction, Figure 7 shows that the expected pattern appears to hold for all expressions. 
Figure 7
 
Mean aftereffect sizes across orientation condition for each of the six expressions. Aftereffect size calculated as the proportion of responses opposite to the adaptor. Error bars indicate ±1 standard error of the mean.
To test whether this pattern remained significant at the level of individual expressions, we conducted paired-samples t tests comparing the aligned and misaligned conditions, and the misaligned and misaligned-inverted conditions, for each expression. There were significant differences between the aligned and misaligned conditions for anger, t(95) = 9.69, p < 0.01; disgust, t(95) = 8.18, p < 0.01; fear, t(95) = 4.27, p < 0.01; happy, t(95) = 16.86, p < 0.01; sad, t(95) = 5.23, p < 0.01; and surprise, t(95) = 9.35, p < 0.01. There were also significant differences between the misaligned and misaligned-inverted conditions for anger, t(95) = 5.07, p < 0.01; disgust, t(95) = 2.34, p < 0.05; happy, t(95) = 5.26, p < 0.01; sad, t(95) = 2.94, p < 0.01; and surprise, t(95) = 4.45, p < 0.01. However, there was no significant difference for fear, t(95) = 1.89, p = 0.06. 
We also compared aftereffects in the misaligned condition to chance at the level of individual expressions, to determine whether a significant aftereffect remained once contributions from tilt and other low-level adaptation had been minimized. One-sample t tests showed that aftereffects in the misaligned condition were significantly greater than chance level (16.67%) for all expressions except anger (all t > 2.86, all p < 0.005). 
A one-way repeated-measures ANOVA was conducted to examine the effect of the orientation condition on aftereffect size. Mauchly's test of sphericity showed that the assumption of sphericity was violated, χ2 (2) = 39.98, p < 0.01, so degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity. The orientation of the adapting faces had a significant impact on the size of the expression aftereffects, F(1.48, 141.12) = 361.71, p < 0.01, ηp2 = 0.79. To determine whether there was a significant contribution of tilt adaptation in expression aftereffects, a planned comparison compared the aligned and misaligned conditions. The aftereffects were significantly smaller in the misaligned than the aligned condition, F(1, 95) = 483.01, p < 0.001, indicating that tilt adaptation contributes to the expression aftereffect. 
To determine whether tilt adaptation could fully explain expression aftereffects, we conducted a one-sample t test comparing aftereffects in the misaligned condition (M = 33.9, SD = 7.7) to chance (16.67%). Aftereffects in this condition were significantly different from chance performance, t(95) = 21.99, p < 0.01, indicating that expression aftereffects still remain even after minimizing the contribution of tilt adaptation. To test whether there was some face-selective component in expression aftereffects, a second planned comparison was conducted to compare aftereffects in the misaligned and the misaligned-inverted conditions. The aftereffects were significantly smaller in the misaligned-inverted than the misaligned condition, F(1, 95) = 92.15, p < 0.001. 
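The reported one-sample t statistic can be reproduced from the summary statistics given above (M = 33.9, SD = 7.7, N = 96, chance = 16.67%); the small discrepancy from the reported t(95) = 21.99 reflects rounding of the published mean and SD:

```python
import math

def one_sample_t(mean: float, sd: float, n: int, mu0: float) -> float:
    """One-sample t statistic: (mean - mu0) / (sd / sqrt(n))."""
    return (mean - mu0) / (sd / math.sqrt(n))

t = one_sample_t(mean=33.9, sd=7.7, n=96, mu0=16.67)
print(round(t, 2))  # ~21.92, vs. the reported t(95) = 21.99
```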
Discussion
We aimed to investigate whether expression aftereffects can be fully explained by the inheritance of tilt adaptation through the visual hierarchy, or whether there is also a contribution of high-level face adaptation. We found that significant expression aftereffects remained after minimizing the influence of tilt, indicating that tilt adaptation does not fully account for expression aftereffects, and preliminary evidence that is consistent with a high-level, face-selective adaptation component in expression aftereffects. 
Consistent with previous findings (Fox & Barton, 2007; Webster, Kaping, Mizokami, & Duhamel, 2004), after adapting to an anti-expression, participants were perceptually biased away from the adapting expression. As expected, minimizing the influence of tilt adaptation significantly reduced the size of the expression aftereffect, indicating a significant contribution of tilt adaptation in expression aftereffects. Indeed, Dickinson et al. (2012) have shown that it is possible to generate “expression” aftereffects simply by adapting to tilted lines. The present results show that there is a significant contribution of low-level, retinotopic adaptation in expression aftereffects generated by adaptation to realistic face adaptors. The finding that the aligned condition produced stronger aftereffects than the misaligned condition is similar to the findings of Afraz and Cavanagh (2009), who showed that gender aftereffects were stronger when the adapting face mapped retinotopically to the test face compared to when the retinotopic coordinates of the test face were rotated 90°. 
Critically, we found that the expression aftereffects were still significantly greater than chance in the misaligned condition, indicating that expression aftereffects remain even after the influence of tilt is minimized. This finding indicates that tilt adaptation does not fully explain expression aftereffects, which, to our knowledge, has not been demonstrated before. It builds upon similar findings in face gender adaptation (Afraz & Cavanagh, 2008) and face distortion adaptation (Rhodes et al., 2003), where aftereffects were also found to survive orientation changes between adaptor and test face stimuli. It is interesting that we find a strong aftereffect using tilted faces, as recent research has shown that holistic coding is weaker in rotated faces than in upright faces (Rosenthal, Levakov, & Avidan, 2017). Indeed, the aftereffect found here is likely an underestimate of the true expression aftereffect. 
Furthermore, we found that there was a significant difference between the misaligned and misaligned-inverted conditions, which provides evidence for a face-selective component in the expression aftereffects. Since the main difference between these two conditions was the inversion of the adapting face, the difference between the two aftereffects can be primarily attributed to the disruption of face-selective processing (Rhodes, Brake, & Atkinson, 1993; Rossion & Gauthier, 2002; Yovel & Kanwisher, 2005). These results support previous findings indicating the involvement of high-level, face-selective mechanisms in expression, and other face aftereffects: for example, the positive relationship between expression aftereffects and expression recognition ability (Palermo et al., 2017). 
We note that the difference between the aftereffects in the misaligned and misaligned-inverted conditions could also be influenced by differences in participant gaze patterns between the upright and inverted faces. People tend to fixate more on the mouth when viewing inverted faces, and more on the eyes when viewing upright faces, which would cause a shift in the retinal coordinates of the test face relative to the adaptor face (Davidenko, Kopalle, & Bridgeman, 2019; Xu & Tanaka, 2013). However, people also tend to fixate on different parts of the face when viewing faces rotated 45° clockwise compared to faces rotated 45° counterclockwise (Davidenko et al., 2019). Therefore, an effect of a mismatch in fixation and gaze patterns between adaptor and test can be expected to be present in the aftereffect measured in the misaligned condition, as well as in the misaligned-inverted condition. Thus, when comparing the 135° adaptor and the 45° adaptor, the primary difference between conditions is the (near-) inversion of the adaptor, resulting in weaker face-specific coding of the 135° face. We believe it is plausible that the difference between the misaligned and misaligned-inverted conditions reflects high-level, face-selective processes lost through inversion, but we nevertheless cannot rule out the effects of fixation differences. This alternative explanation could be tested in future studies using eye tracking, to determine the extent of fixation differences between conditions, and forced fixation, to determine whether the pattern of results remains when fixation is controlled. 
The extent to which adaptation at various levels of the visual hierarchy contributes to a measured expression aftereffect will of course depend on the conditions in which that aftereffect is generated. The contribution of tilt adaptation to a particular measurement of the expression aftereffect may therefore be larger or smaller than that measured here, depending on the similarity of the two methods. For instance, in the present study we used adapt and test stimuli with the same identity: this is likely to result in a greater influence of tilt adaptation than where adapt and test stimuli are of different identities, since the features of the faces are closer to one another in contour and position. Similarly, we did not employ a size change between adapt and test, and although tilt adaptation may be able to survive such a change, it may nevertheless have a reduced impact under these conditions. Therefore, while we have shown that tilt adaptation can play a role in the generation of an expression aftereffect from realistic face stimuli, it is difficult to determine to what extent the contribution of tilt explains the aftereffects measured in previous studies. Our findings do, however, reinforce the importance of considering the multiple possible sources of an aftereffect within the visual hierarchy. When it is important to minimize lower-level contributions to a face aftereffect, the orientation change implemented in this study should be considered as a method of disrupting the inherited effect of tilt adaptation. 
In conclusion, although we found some evidence that tilt adaptation can contribute to expression aftereffects, low-level adaptation did not account for the entire aftereffect: there was still a significant expression aftereffect even after the contribution of tilt and other low-level adaptation had been minimized. There was also preliminary evidence consistent with a contribution from high-level, face-selective adaptation in expression aftereffects. Taken together, these findings support the widespread use of expression aftereffects to inform our understanding of expression coding and processing. These results emphasize that expression aftereffects contain contributions from multiple levels of the visual hierarchy. 
Acknowledgments
This research was supported by the Australian Research Council (ARC) Centre of Excellence in Cognition and its Disorders (CE110001021) and an ARC Discovery Outstanding Researcher Award to Rhodes (DP130102300). We thank Andrew Skinner and Chris Benton for providing the stimuli, and Edwin Dickinson for his valuable contribution to the experimental design. Ethical approval was granted by the Human Research Ethics Committee of the University of Western Australia. 
Commercial relationships: none. 
Corresponding author: Derek Swe. 
Address: The University of Western Australia, School of Psychological Science, Perth, Australia. 
References
Adams, R. B., Ambady, N., Macrae, C. N., & Kleck, R. E. (2006). Emotional expressions forecast approach–avoidance behavior. Motivation and Emotion, 30 (2), 177–186, https://doi.org/10.1007/s11031-006-9020-2.
Afraz, A., & Cavanagh, P. (2008). Retinotopy of the face aftereffect. Vision Research, 48 (1), 42–54, https://doi.org/10.1016/j.visres.2007.10.028.
Afraz, A., & Cavanagh, P. (2009). The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations. Journal of Vision, 9 (10): 10, 1–17, https://doi.org/10.1167/9.10.10. [PubMed] [Article]
Benton, C. P. (2009). Effect of photographic negation on face expression aftereffects. Perception, 38 (9), 1267–1274, https://doi.org/10.1068/p6468.
Benton, C. P., Jennings, S. J., & Chatting, D. J. (2006). Viewpoint dependence in adaptation to facial identity. Vision Research, 46 (20), 3313–3325.
Bestelmeyer, P. E., Jones, B. C., DeBruine, L. M., Little, A., & Welling, L. L. (2010). Face aftereffects suggest interdependent processing of expression and sex and of expression and race. Visual Cognition, 18 (2), 255–274, https://doi.org/10.1080/13506280802708024.
Blakemore, C., & Over, R. (1974). Curvature detectors in human vision? Perception, 3 (1), 3–7, https://doi.org/10.1068/p030003.
Burton, N., Jeffery, L., Bonner, J., & Rhodes, G. (2016). The timecourse of expression aftereffects. Journal of Vision, 16 (15): 1, 1–12, https://doi.org/10.1167/16.15.1. [PubMed] [Article]
Burton, N., Jeffery, L., Calder, A. J., & Rhodes, G. (2015). How is facial expression coded? Journal of Vision, 15 (1): 1, 1–13, https://doi.org/10.1167/15.1.1. [PubMed] [Article]
Burton, N., Jeffery, L., Skinner, A. L., Benton, C. P., & Rhodes, G. (2013). Nine-year-old children use norm-based coding to visually represent facial expression. Journal of Experimental Psychology: Human Perception and Performance, 39 (5), 1261–1269, https://doi.org/10.1037/a0031117.
Butler, A., Oruc, I., Fox, C. J., & Barton, J. J. (2008). Factors contributing to the adaptation aftereffects of facial expression. Brain Research, 1191, 116–126, https://doi.org/10.1016/j.brainres.2007.10.101.
Caldara, R., Schyns, P., Mayer, E., Smith, M. L., Gosselin, F., & Rossion, B. (2005). Does prosopagnosia take the eyes out of face representations? Evidence for a defect in representing diagnostic facial information following brain damage. Journal of Cognitive Neuroscience, 17 (10), 1652–1666, https://doi.org/10.1162/089892905774597254.
Davidenko, N., Kopalle, H., & Bridgeman, B. (2019). The upper eye bias: Rotated faces draw fixations to the upper eye. Perception, 48 (2), 162–174.
Dickinson, J. E., Almeida, R. A., Bell, J., & Badcock, D. R. (2010). Global shape aftereffects have a local substrate: A tilt aftereffect field. Journal of Vision, 10 (13): 5, 1–12, https://doi.org/10.1167/10.13.5. [PubMed] [Article]
Dickinson, J. E., & Badcock, D. R. (2013). On the hierarchical inheritance of aftereffects in the visual system. Frontiers in Psychology, 4, 472.
Dickinson, J. E., Mighall, H. K., Almeida, R. A., Bell, J., & Badcock, D. R. (2012). Rapidly acquired shape and face aftereffects are retinotopic and local in origin. Vision Research, 65, 1–11.
Fox, C. J., & Barton, J. J. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Research, 1127, 80–89, https://doi.org/10.1016/j.brainres.2006.09.104.
Horstmann, G. (2003). What do facial expressions convey: Feeling states, behavioral intentions, or actions requests? Emotion, 3 (2), 150–166, https://doi.org/10.1037/1528-3542.3.2.150.
Jeffery, L., Rhodes, G., & Busey, T. (2006). View-specific coding of face shape. Psychological Science, 17 (6), 501–505.
Knapen, T., Rolfs, M., Wexler, M., & Cavanagh, P. (2009). The reference frame of the tilt aftereffect. Journal of Vision, 10 (1): 8, 1–13, https://doi.org/10.1167/10.1.8. [PubMed] [Article]
Kornreich, C., Philippot, P., Foisy, M.-L., Blairy, S., Raynaud, E., Dan, B.,… Verbanck, P. (2002). Impaired emotional facial expression recognition is associated with interpersonal problems in alcoholism. Alcohol and Alcoholism, 37 (4), 394–400, https://doi.org/10.1093/alcalc/37.4.394.
Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4 (1), 89–94.
Marsh, A. A., Ambady, N., & Kleck, R. E. (2005). The effects of fear and anger facial expressions on approach- and avoidance-related behaviors. Emotion, 5 (1), 119–124, https://doi.org/10.1037/1528-3542.5.1.119.
Meissner, C. A., & Brigham, J. C. (2001). Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review. Psychology, Public Policy, and Law, 7 (1), 3–35, https://doi.org/10.1037//1076-8971.7.1.3.
Palermo, R., Jeffery, L., Lewandowsky, J., Fiorentini, C., Irons, J., Dawel, A.,… Rhodes, G. (2017). Adaptive face coding contributes to individual differences in facial expression recognition independently of affective factors. Journal of Experimental Psychology: Human Perception and Performance, 44 (4), 503–517, https://doi.org/10.1037/xhp0000463.
Rhodes, G., Brake, S., & Atkinson, A. P. (1993). What's lost in inverted faces? Cognition, 47 (1), 25–57, https://doi.org/10.1016/0010-0277(93)90061-Y.
Rhodes, G., Jeffery, L., Watson, T. L., Clifford, C. W., & Nakayama, K. (2003). Fitting the mind to the world: Face adaptation and attractiveness aftereffects. Psychological Science, 14 (6), 558–566.
Rhodes, G., Jeffery, L., Watson, T. L., Jaquet, E., Winkler, C., & Clifford, C. W. (2004). Orientation-contingent face aftereffects and implications for face-coding mechanisms. Current Biology, 14 (23), 2119–2123, https://doi.org/10.1016/j.cub.2004.11.053.
Rhodes, G., Pond, S., Burton, N., Kloth, N., Jeffery, L., Bell, J.,… Palermo, R. (2015). How distinct is the coding of face identity and expression? Evidence for some common dimensions in face space. Cognition, 142, 123–137.
Riggio, R. E. (2014). Social interaction skills and nonverbal behavior. In Feldman R. S. (Ed.), Application of nonverbal behavioral theories and research (pp. 3–31). New York: Psychology Press. Retrieved from https://books.google.com.au/books?id=D0vrAgAAQBAJ.
Rosenthal, G., Levakov, G., & Avidan, G. (2017). Holistic face representation is highly orientation-specific. Psychonomic Bulletin & Review, 25, 1351–1357.
Rossion, B., & Gauthier, I. (2002). How does the brain process upright and inverted faces? Behavioral and Cognitive Neuroscience Reviews, 1 (1), 63–75, https://doi.org/10.1177/1534582302001001004.
Rutherford, M. D., & Towns, A. M. (2008). Scan path differences and similarities during emotion perception in those with and without autism spectrum disorders. Journal of Autism and Developmental Disorders, 38 (7), 1371–1381, https://doi.org/10.1007/s10803-007-0525-7.
Seidel, E.-M., Habel, U., Kirschner, M., Gur, R. C., & Derntl, B. (2010). The impact of facial emotional expressions on behavioral tendencies in women and men. Journal of Experimental Psychology: Human Perception and Performance, 36 (2), 500–507, https://doi.org/10.1037/a0018169.
Sergent, J. (1984). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75 (2), 221–242, https://doi.org/10.1111/j.2044-8295.1984.tb01895.x.
Skinner, A. L., & Benton, C. P. (2010). Anti-expression aftereffects reveal prototype-referenced coding of facial expressions. Psychological Science, 21 (9), 1248–1253.
Thomas, J. P., & Gille, J. (1979). Bandwidths of orientation channels in human vision. Journal of the Optical Society of America, 69 (5), 652–660, https://doi.org/10.1364/JOSA.69.000652.
Valentine, T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79 (4), 471–491, https://doi.org/10.1111/j.2044-8295.1988.tb02747.x.
Valentine, T., & Bruce, V. (1986). The effect of race, inversion and encoding activity upon face recognition. Acta Psychologica, 61 (3), 259–273, https://doi.org/10.1016/0001-6918(86)90085-5.
Watson, T. L., & Clifford, C. W. (2003). Pulling faces: An investigation of the face-distortion aftereffect. Perception, 32 (9), 1109–1116, https://doi.org/10.1068/p5082.
Webster, M. A. (2015). Visual adaptation. Annual Review of Vision Science, 1, 547–567, https://doi.org/10.1146/annurev-vision-082114-035509.
Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004, April 1). Adaptation to natural facial categories. Nature, 428 (6982), 557–561, https://doi.org/10.1038/nature02420.
Webster, M. A., & MacLeod, D. I. (2011). Visual adaptation and face perception. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 366 (1571), 1702–1725, https://doi.org/10.1098/rstb.2010.0360.
Xu, B., & Tanaka, J. W. (2013). Does face inversion qualitatively change face processing: An eye movement study using a face change detection task. Journal of Vision, 13 (2): 22, 1–16, https://doi.org/10.1167/13.2.22. [PubMed] [Article]
Xu, H., Dayan, P., Lipkin, R. M., & Qian, N. (2008). Adaptation across the cortical hierarchy: Low-level curve adaptation affects high-level facial-expression judgments. Journal of Neuroscience, 28 (13), 3374–3383, https://doi.org/10.1523/JNEUROSCI.0182-08.2008.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81 (1), 141–145, https://doi.org/10.1037/h0027474.
Yovel, G., & Kanwisher, N. (2005). The neural basis of the behavioral face-inversion effect. Current Biology, 15 (24), 2256–2262, https://doi.org/10.1016/j.cub.2005.10.072.
Figure 1
 
Examples of the three orientation conditions. The left face is the adaptor and the right face is the test face. In the aligned condition, the adaptor has an orientation of 45° clockwise and the test face also has an orientation of 45° clockwise. In the misaligned condition, the adaptor has an orientation of 45° counterclockwise and the test face has an orientation of 45° clockwise. In the misaligned-inverted condition, the adaptor has an orientation of 135° clockwise and the test face has an orientation of 45° clockwise. The face stimuli are computer-generated morphs, and do not depict real people.
Figure 2
 
The aligned condition allows for the influence of tilt adaptation, as the face has the same orientation in both the adapt and test phases. In the misaligned condition, the tilt adaptation built up in the adaptation phase (indicated by the transparent overlay) will be in a different position relative to the face in the test phase, and there should therefore be minimal meaningful contribution to the expression aftereffect. In the misaligned-inverted condition, there is the same amount of angular difference between the adapt and test faces as the misaligned condition, and the effect of tilt adaptation will therefore be minimized to the same extent. However, in this condition, the adaptor is closer to inverted than upright, so face-selective processing should also be disrupted. Therefore, if the misaligned-inverted condition produces a weaker aftereffect than the misaligned condition, this difference could potentially be attributed to a decrease in face-selective adaptation.
Figure 3
 
Stimuli from Skinner and Benton (2010), arranged in an expression “face-space.” The original expression (here, happy) lies on one end of a continuum, with the ambiguous expression face in the middle, and the corresponding anti-expression (here, anti-happy) on the opposite end, differing from the ambiguous expression to the same extent but in the opposite direction from the original expression. The face stimuli are computer-generated morphs, and do not depict real people.
Figure 4
 
Example stimuli from Skinner and Benton (2010). Top row shows original expression faces, and bottom row shows the corresponding anti-expression faces. The face stimuli are computer-generated morphs, and do not depict real people.
Figure 5
 
Example trial of the expression adaptation task. Anti-expression faces are shown in the adaptation phase, followed by the ambiguous face in the test phase, after which participants respond by choosing one of the six emotion words shown on the screen.
Figure 6
 
Mean aftereffect sizes across orientation condition. Aftereffect size calculated as the proportion of responses opposite to the adaptor. Individual data points are shown for each condition. Error bars indicate ±1 standard error of the mean. The dotted line represents chance level (16.67%).
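To make the measure in this caption concrete, the calculation might be sketched as follows. This is an illustrative sketch only: the function name, response labels, and trial counts are hypothetical and are not taken from the study's data.

```python
# Hypothetical sketch of the aftereffect-size measure: the proportion of
# test trials on which the ambiguous face was labeled with the expression
# opposite to the (anti-expression) adaptor. Chance level is 1/6, since
# there are six emotion labels to choose from.

EXPRESSIONS = ["happy", "sad", "fear", "anger", "disgust", "surprise"]
CHANCE = 1 / len(EXPRESSIONS)  # ≈ 0.1667 (16.67%)

def aftereffect_size(responses, opposite_expression):
    """Proportion of responses matching the expression opposite to the
    adaptor (e.g., 'happy' responses after adapting to anti-happy)."""
    hits = sum(1 for r in responses if r == opposite_expression)
    return hits / len(responses)

# Hypothetical example: after adapting to anti-happy, 9 of 24 test
# trials were labeled "happy".
size = aftereffect_size(["happy"] * 9 + ["sad"] * 15, "happy")
print(size)  # 0.375, well above the chance level of ~0.1667
```

Aftereffect sizes above the 16.67% chance line in the figure would thus indicate a bias toward the expression opposite the adaptor.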
Figure 7
 
Mean aftereffect sizes across orientation condition for each of the six expressions. Aftereffect size calculated as the proportion of responses opposite to the adaptor. Error bars indicate ±1 standard error of the mean.