Journal of Vision, December 2018, Volume 18, Issue 13, Article 13
Adaptation to dynamic faces produces face identity aftereffects
Samantha Petrovski, Gillian Rhodes, Linda Jeffery
https://doi.org/10.1167/18.13.13
Abstract

Face aftereffects are well established for static stimuli and have been used extensively as a tool for understanding the neural mechanisms underlying face recognition. It has also been argued that adaptive coding, as demonstrated by face aftereffects, plays a functional role in face recognition by calibrating our face norms to reflect current experience. If aftereffects tap high-level perceptual mechanisms that are critically involved in everyday face recognition then they should also occur for moving faces. Here we asked whether face identity aftereffects can be induced using dynamic adaptors. The face identity aftereffect occurs when adaptation to a particular identity (e.g., Dan) biases subsequent perception toward the opposite identity (e.g., antiDan). We adapted participants to video of real faces that displayed either rigid, non-rigid, or no motion and tested for aftereffects in static antifaces. Adapt and test stimuli differed in size, to minimize low-level adaptation. Aftereffects were found in all conditions, suggesting that face identity aftereffects tap high-level mechanisms important for face recognition. Aftereffects were not significantly reduced in the motion conditions relative to the static condition. Overall, our results support the view that face aftereffects reflect adaptation of high-level mechanisms important for real-world face recognition in which faces are moving.

Introduction
Adaptation is a powerful tool that has been used to investigate the neural mechanisms underlying many aspects of visual perception, such as motion and color (Thompson & Burr, 2009). The motion aftereffect, for example, occurs when prolonged exposure (adaptation) to downward motion creates a temporary illusion that subsequently viewed, stationary objects are moving upward, revealing that motion direction is signaled by relative activation of neural pools tuned to different directions of motion (Mather, Verstraten, & Anstis, 1998). In addition to being a useful tool, adaptation is of considerable interest because it is argued to play a functional role in perception, for example by calibrating neural mechanisms to match the range of current inputs and to enhance sensitivity to variation around the adapted state (Barlow, 1990; Clifford, Wenderoth, & Spehar, 2000; Rhodes & Leopold, 2011; Webster, 2015). 
The discovery of face aftereffects (Leopold, O'Toole, Vetter, & Blanz, 2001; MacLin & Webster, 2001; O'Leary & McMahon, 1991; Webster & MacLin, 1999) provided a new tool (aftereffects) for probing the coding schemes underlying face perception and a mechanism (adaptation) that might help explain our remarkable ability to discriminate such similar visual patterns. For example, the face identity aftereffect, in which exposure to an individual face biases perception toward an opposite identity face (see Figure 1), provides support for a norm-based coding scheme, in which each face is coded as a deviation from the average face (norm) in a multidimensional space (Anderson & Wilson, 2005; Leopold et al., 2001; Rhodes & Jeffery, 2006). Aftereffect techniques have been used extensively to enrich our understanding of face coding (for reviews, see Rhodes & Leopold, 2011; Webster & MacLeod, 2011). In addition, evidence is accumulating that adaptation itself plays a functional role in our face perception abilities (Dennett, McKone, Edwards, & Susilo, 2012; Palermo et al., 2017; Palermo, Rivolta, Wilson, & Jeffery, 2011; Rhodes, 2017; Rhodes, Ewing, Jeffery, Avard, & Taylor, 2014; Rhodes, Jeffery, Taylor, Hayward, & Ewing, 2014; Rhodes, Watson, Jeffery, & Clifford, 2010). 
Figure 1
 
A simplified face-space, showing only two dimensions, with the average at the center. Two identity trajectories are shown. Each shows a face, weaker versions of that face, and the opposite face (antiface). Adapting to antiDan biases perception toward the opposite face (Dan) and likewise, adapting to antiJim biases perception toward Jim. From “Face Identity Aftereffects Increase Monotonically with Adaptor Extremity Over, But Not Beyond, the Range of Natural Faces,” by E. McKone, L. Jeffery, A. Boeing, C. W. G. Clifford, and G. Rhodes, 2014, Vision Research, 98, p. 2. Copyright 2014 by Elsevier. Reprinted with permission.
However, a critical limitation of research using adaptation and aftereffects is that it has almost exclusively used static images of faces. This reliance on static images constrains the conclusions that can be drawn about both the nature of face coding mechanisms and the functional role of adaptation in everyday face perception. The use of static faces in adaptation studies allows substantial contributions from adaptation of low-level visual mechanisms, making it difficult to isolate the effects of high-level, face-selective adaptation. Studies have overcome this limitation to some degree by showing that static face aftereffects partially transfer across changes in size, position, expression, and viewpoint between adapt and test faces, indicating that face aftereffects cannot be entirely attributed to low-level adaptation (Afraz & Cavanagh, 2008; Fox, Oruç, & Barton, 2008; Jeffery, Rhodes, & Busey, 2006; Jiang, Blanz, & O'Toole, 2006; Zhao & Chubb, 2001). However, transfer studies typically vary only one attribute at a time rather than several concurrently. For example, viewpoint may be varied between adapt and test while lighting and facial expression remain constant. Yet when we observe dynamic faces in everyday life, many attributes vary concurrently. Given the substantial reduction in aftereffects when only one attribute is varied between adapt and test, it is possible that adapting to the dynamic faces we see in real life could fail to produce aftereffects, or produce aftereffects that are trivially small. Such a finding would challenge the argument that adaptation plays a critical functional role in face perception. Failure to find aftereffects for dynamic faces would also raise doubts about how informative aftereffects are about high-level face coding mechanisms, because these mechanisms must process information from dynamic faces in everyday life. 
Alternatively, if adaptation does play an important functional role in everyday face perception, then dynamic, rather than static, faces may produce large and robust aftereffects. Dynamic faces may also better tap high-level face coding mechanisms, which presumably respond strongly to the moving faces we typically see. There is also evidence that dynamic faces convey additional information about attributes such as expression and identity, beyond that conveyed by static faces (e.g., Ambadar, Schooler, & Cohn, 2005; Roark, Barrett, Spence, Abdi, & O'Toole, 2003). In the case of face identification, motion may enhance the extraction and representation of the three-dimensional structure of a face, provide additional cues to identity via idiosyncratic movement, and enhance attention to the internal features of the face (for a review, see Roark et al., 2003). Adaptation to dynamic faces may therefore affect face representations in ways that adaptation to static faces cannot. It is plausible, therefore, that adapting to dynamic faces would produce robust and even enhanced aftereffects. 
Very few studies have examined aftereffects for dynamic faces. Two studies have shown that dynamic facial expressions produce expression aftereffects (Curio, Giese, Breidt, Kleiner, & Bülthoff, 2010; de la Rosa, Giese, Bülthoff, & Curio, 2013). However, to our knowledge, only one study has investigated whether dynamic faces can produce face identity aftereffects. Laurence, Hole, and Hills (2014) found that adapting to a dynamic familiar face (such as a university lecturer) produced an identity aftereffect. The strength of the aftereffect for the dynamic adaptor did not differ from that for the static adaptor. 
However, the source of the adaptation effects observed in that study is less certain. Aftereffects for complex stimuli, like faces, can arise from adaptation at different levels of the visual and cognitive system, so the resulting effects may be more or less informative about the coding of high-level, possibly face-specific, visual information, depending on the relative contributions from different sources (for discussion, see Storrs, 2015; Webster, 2015). Laurence et al. (2014) did not include a size change or other control for low-level adaptation, so their aftereffects likely reflect some contribution from low-level adaptation. In addition, their effects could reflect semantic adaptation, which may occur when there is a semantic relationship between the adapt and test stimuli (Hills, Elward, & Lewis, 2008, 2010). Crucially, semantic adaptation effects can be produced in the absence of any visual adaptation, as long as there is a semantic relationship between the adapt and test stimuli. The Laurence et al. (2014) results could reflect semantic adaptation because they used familiar faces in a paradigm that is susceptible to semantic effects. To explain, they used a common adaptation paradigm in which morph continua are made between two faces (e.g., Dan-Jim) and, on each trial, participants choose which of these two faces they perceive a test stimulus taken from this continuum to be. The adapting faces are the ends of each continuum, and adapting to one end (e.g., Dan) biases responses toward the other end (e.g., Jim). Recently it has been shown that adaptation to the names or voices of the two individuals comprising the continuum can bias responses along the morph continuum in exactly the same way as adaptation to their faces. Even adaptation to the name of an individual who is semantically related to one identity in the continuum (e.g., the other member of a comedy duo) can bias perception toward the related face (Hills et al., 2008, 2010). Likewise, face gender aftereffects can be induced by adaptation to gender-typical objects (Javadi & Wee, 2012; however, see Ghuman, McDaniel, & Martin, 2010). It is possible, therefore, that the aftereffects for dynamic faces found by Laurence et al. (2014) reflect semantic adaptation, because adapting to a familiar lecturer's face would likely activate relevant semantic information about that lecturer (e.g., their name), which could be sufficient to induce aftereffects. It therefore remains an open question whether dynamic faces can induce face identity aftereffects in the absence of such semantic and low-level effects. 
The aim of the present study was to determine whether dynamic adaptors produce robust and substantial face identity aftereffects when the opportunity for low-level and semantic adaptation effects is minimal. To do so, we used a paradigm in which participants are not aware of the relationship between the adaptors and the test faces so that any aftereffects cannot be attributed to any semantic relationships between the adapt and test stimuli. Participants first learned to recognize four unfamiliar faces (targets) and subsequently identified weaker versions of each target face (morphed toward the average face) after adapting to that target's opposite face (antiface, see Figure 1) or a non-opposite face (another target's antiface; Leopold et al., 2001; Rhodes & Jeffery, 2006). Adapting to an opposite face biases perception selectively toward the corresponding target, facilitating recognition of this target, whereas adapting to a non-opposite face does not facilitate recognition of the target. Crucially, however, participants are not aware of this relationship (opposite or non-opposite) between the adaptors and the test faces, nor do they possess any semantic information about the faces, so semantic adaptation can be ruled out. In addition, we included a size change between the adapt and test stimuli to minimize the contribution of low-level adaptation to the aftereffects. 
Our adapting stimuli were drawn from a pre-existing set of video clips of young men (Rhodes et al., 2011). The dynamic clips conveyed two types of face motion, rigid and non-rigid, which have sometimes been found to have differential effects on face recognition (Christie & Bruce, 1998). Rigid motion refers to changes in head position that provide different views of the face, such as turning the head to the left or right, without the face itself changing or deforming (O'Toole, Roark, & Abdi, 2002). Non-rigid motion (sometimes called intrinsic motion) comprises changes in eye gaze, facial expression, and speech movements (O'Toole et al., 2002). Both kinds of motion facilitate face identity recognition (Lander & Bruce, 2003; Pike, Kemp, Towell, & Phillips, 1997). The reason for this facilitation is not yet clear, but one possibility is that motion enhances the extraction of structural information about the face. Another is that the social signals conveyed by motion increase attention to the face (at least for unfamiliar faces; Roark et al., 2003). We took advantage of the fact that the clips in our database could easily be cut into sequences that conveyed primarily rigid motion (the subject turned his head to his right, then to his left profile, while maintaining a neutral expression) or primarily non-rigid motion (the subject looked directly at the camera, counted from seven to 13, and then smiled) to examine aftereffects for rigid and non-rigid adaptors separately. 
Given that both types of motion are encountered in the real world, we expected that both rigid and non-rigid motion adaptors would produce face identity aftereffects if adaptation plays an important role in face identity perception and aftereffects reflect the operation of high-level face-sensitive mechanisms. We also included a static adaptation condition. This allowed us to establish that our stimuli and task produced typical aftereffects when adaptors were static, and to determine whether aftereffect strength was affected by the use of dynamic adaptors. As noted earlier, aftereffects for static faces are typically reduced when adapt and test stimuli differ substantially, in viewpoint for example, suggesting that dynamic adaptors, which differ in a variety of ways from the static test faces, may produce smaller aftereffects than static adaptors. 
Method
Participants
Twenty-six first-year psychology students (15 women, 11 men) at the University of Western Australia participated for course credit. Their mean age was 21.1 years (SD = 8.5). All provided informed consent prior to participation (in accordance with the ethical standards stated in the 1964 Declaration of Helsinki) and the study was approved by the University of Western Australia Human Ethics Office. 
Stimuli
Stimuli were constructed from a set comprising both static images and dynamic video clips of 60 male faces used in a previous study (Rhodes et al., 2011). The dynamic clips in this set showed an unedited sequence in which each individual was shown first in a front-on view and then turned his head to his right profile, then to his left profile, and then back to a front-on view, then he counted from seven to 13 and then smiled (10 s in length). The static images were screen grabs from the video of front-view, neutral expression poses. 
Four easily discriminable faces from this set were chosen by the authors to be the adapting identities. The static images of these four faces were used as the adaptors for the static condition. To create the dynamic adaptors we cut each video clip into two segments (each exactly 4 s in length), according to the type of movement (rigid or non-rigid). The rigid adaptor clip showed the sequence containing the head movement (from front view to right, then left profile and back to front view). The non-rigid adaptor clip showed the individual counting from eight to 13, and then smiling. Sound was not played in either of the clips. Figure 2 illustrates the adapting sequences. 
Figure 2
 
Sample frames illustrating the adapting sequences showing (top) static, (middle) rigid, and (bottom) non-rigid dynamic conditions. The individual shown here comes from the same set but was not one of the four identities used in the actual study. We only have permission to publish images of this one individual from the set.
The test stimuli were the computational opposites (antifaces) of the four adaptors, created using morphing software (as described below). We note that most studies of identity aftereffects use antifaces as the adaptors and the original faces (from which the antifaces were constructed) as the test stimuli (e.g., Leopold et al., 2001; Rhodes & Jeffery, 2006). However, this choice is arbitrary, and the same aftereffects are predicted if the original faces are used as adaptors and the antifaces as the test stimuli. Given the difficulty of constructing dynamic antifaces, we chose here to use the antifaces as (static) test stimuli and the video clips of the original faces as the adaptors. The antifaces were made in the typical way, by caricaturing an average face away from the original identity using FantaMorph 5.2.6 (Abrosoft, 2002–2015). To create this average face, the static front-view images of 20 faces, including the four adaptor identities, were taken from the larger set of 60 faces and combined using the same software. To create the antifaces, the researcher places multiple landmark points on each face, which are used to morph the face toward or away from a reference face, in this case the average face. The test stimuli varied in identity strength in 20% increments, on a continuum ranging from −20% to 100%, where 0% is the average face, 100% is the antiface, and −20% is a morph toward the original face (−20%, 0%, 20%, 40%, 60%, 80%, 100%; see Figure 3 for an example). FantaMorph produces linear changes in landmark locations with changes in morph value: for example, each landmark in the 80% antiDan image lies twice as far from the average face (0% morph value) as the corresponding landmark in the 40% antiDan image. All images had the texture of the average face (0%), so that variations in identity strength altered shape information only. The four 100% antifaces were used as the four targets that participants learned to identify (see Figure 4). 
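Because the identity-strength manipulation is linear in landmark space, the geometry of the continuum can be summarized in a few lines of code. The sketch below is our own minimal illustration, assuming faces are represented as arrays of landmark coordinates; it is not FantaMorph's (proprietary) implementation, and the landmark values are hypothetical.

```python
import numpy as np

def test_face(original: np.ndarray, average: np.ndarray, strength: float) -> np.ndarray:
    """Landmark positions for a test face at a given identity strength (%).

    strength = 100 gives the antiface, 0 the average face, and -20 a
    morph back toward the original face, matching the test continuum.
    """
    return average - (strength / 100.0) * (original - average)

# Hypothetical (x, y) landmark coordinates for the average face and one original face.
average = np.array([[100.0, 120.0], [140.0, 120.0], [120.0, 160.0]])
dan     = np.array([[ 95.0, 118.0], [146.0, 123.0], [118.0, 165.0]])

anti_dan_80 = test_face(dan, average, 80)
anti_dan_40 = test_face(dan, average, 40)

# Linearity check: each landmark's deviation from the average in the 80%
# image is exactly twice its deviation in the 40% image.
assert np.allclose(anti_dan_80 - average, 2 * (anti_dan_40 - average))
```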
Figure 3
 
A sample test continuum created for the individual shown in Figure 2. The numbers show the morph strength for each face, with 0 corresponding to the average face, 100 indicating the 100% antiface, and −20 indicating a morph toward the corresponding adapting face.
Figure 4
 
The four targets (100% antifaces) that participants learned to identify.
To minimize the contribution of low-level adaptation to the aftereffects, the test stimuli were reduced in size (by 25%) so that they would be smaller than the adapting stimuli. The test faces subtended a visual angle of approximately 6.8° (height) × 4.6° (width). The adaptors were shown at visual angles of approximately 9.2° (h) × 6.3° (w) for the static condition, 11.7° (h) × 8° (w) for the rigid condition, and 11.3° (h) × 7.5° (w) for the non-rigid condition. 
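The paper reports stimulus sizes in degrees of visual angle but not the viewing distance, so the following sketch shows how such angles relate to on-screen size; the 60 cm viewing distance is a hypothetical value for illustration only.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of size_cm at distance_cm."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_for_angle_cm(angle_deg: float, distance_cm: float) -> float:
    """On-screen size (cm) needed to subtend angle_deg at distance_cm."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# At a hypothetical 60 cm viewing distance:
print(f"{size_for_angle_cm(6.8, 60):.1f} cm")  # test-face height, ~7.1 cm
print(f"{size_for_angle_cm(9.2, 60):.1f} cm")  # static-adaptor height, ~9.7 cm
```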
Procedure
The task comprised five parts: two training phases followed by the three movement conditions (static, rigid, and non-rigid). The order of the movement conditions was counterbalanced, and participants were pseudo-randomly assigned to an order so that approximately equal numbers of participants received each order. To engage and motivate participants, the task was presented as a game in which detectives caught criminals; it is similar to adaptation tasks used in many previous studies (e.g., Jeffery et al., 2011; Rhodes, Ewing et al., 2014; Rhodes & Jeffery, 2006; Rhodes, Jeffery et al., 2014). 
Training
The purpose of the training trials was to ensure that participants could reliably discriminate and identify the four target identities when presented at full strength (100%) for brief durations (200 ms), and that participants understood how to respond to the weaker versions (<100%). In the first training phase, participants learned to recognize the four targets, described as "lead detectives": Dean, Brad, Josh, and Mike (100% identity strength antifaces). Participants responded using keyboard keys labeled with the targets' names: x for Dean, v for Brad, n for Josh, and "," (comma) for Mike. Faces were first presented for an unlimited duration with audio feedback on accuracy on each trial. To progress to the next phase of training, participants had to respond correctly on the first six trials or make 10 correct responses within the first 12 trials; otherwise they progressed after completing 28 trials. In the second phase, the faces were shown for 200 ms and the same criteria determined progression to the next phase. In the third training phase, participants were introduced to the other members of each detective squad. Each squad comprised the weaker versions (40% and 60%) of one of the targets, described as the lead detective's brothers. Participants were instructed to press the name key corresponding to the lead detective whenever they saw him or one of his squad. This phase ensured participants understood how to respond to weaker versions of the four targets. Faces were presented for an unlimited duration with audio feedback on accuracy on each trial, and participants proceeded once they made nine correct identifications across 12 consecutive trials or had completed 56 trials. In the final phase, the faces were presented for 200 ms and the same criteria determined when participants proceeded to the adaptation task. 
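The progression rule can be stated compactly in code. The function below is our own sketch of the first-phase criterion exactly as described above (the later phases substitute nine correct identifications across 12 consecutive trials, with a 56-trial cap); the function name and data layout are illustrative.

```python
def phase_one_complete(responses: list) -> bool:
    """Progression rule for the first training phase.

    `responses` holds per-trial accuracy (True/False) in presentation order.
    Participants progress if correct on the first six trials, or if they make
    10 correct responses within the first 12 trials; otherwise they progress
    after completing 28 trials.
    """
    n = len(responses)
    if n >= 6 and all(responses[:6]):
        return True
    if n >= 12 and sum(responses[:12]) >= 10:
        return True
    return n >= 28
```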
Adaptation task
Participants were told that on each trial they would be shown the face of a criminal (the adapting faces) that they should watch closely. The face of the criminal would be followed by a brief presentation of the detective who had caught the criminal and the participant's task was to identify which squad (Dean, Brad, Josh, or Mike) the detective belonged to. 
The same procedure was used for each of the three movement conditions, static, rigid, and non-rigid. Each adaptation trial began with an adapting video clip (static, rigid, or non-rigid), which lasted for 4000 ms, followed by an inter-stimulus interval (ISI) of 150 ms. The test face was then shown for 200 ms, followed by a blank response screen, which remained until a response was made. A 300 ms inter-trial interval (ITI) followed each response. 
Each movement condition consisted of 168 trials: 4 (test identities: Dean, Brad, Josh, and Mike) × 7 (identity strengths: −20%, 0%, 20%, 40%, 60%, 80%, and 100%) × 2 (trial types: match, mismatch) × 3 (repetitions of each trial). On match trials the adapt and test identities were opposites, taken from the same identity continuum (e.g., adapt to the original face from which Dean was constructed and test with a face from the Dean continuum), whereas on mismatch trials the adapt and test stimuli were not opposites and were taken from different identity continua (e.g., adapt to the original face from which Dean was constructed and test with a face from the Mike continuum). Mismatch continua were paired so that Dean and Mike were always each other's mismatch, and Brad and Josh were always each other's mismatch. This pairing ensured that participants could not learn the association between adapt and test identities on match trials, because test faces from each identity continuum were preceded equally often by the corresponding opposite face (match) and one non-opposite (mismatch) face. 
Trials in each movement condition were divided into four blocks of 42 trials, presented in a random order, with a self-timed break between each block. Each movement condition commenced with four practice trials, which were match trials featuring two 80% and two 60% test faces. Participants took self-timed breaks between each adaptation condition. The session took between 75 and 90 minutes. 
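The factorial design, fixed mismatch pairing, and blocking can be made concrete with a short sketch. This is our own illustrative reconstruction of the trial list for one movement condition, not the authors' code; the timing constants come from the Procedure above.

```python
import random

IDENTITIES = ["Dean", "Brad", "Josh", "Mike"]
STRENGTHS = [-20, 0, 20, 40, 60, 80, 100]                # % identity strength
MISMATCH = {"Dean": "Mike", "Mike": "Dean",              # fixed mismatch pairing
            "Brad": "Josh", "Josh": "Brad"}
REPETITIONS = 3
ADAPT_MS, ISI_MS, TEST_MS, ITI_MS = 4000, 150, 200, 300  # trial timing

def build_condition_trials():
    """168 trials: 4 identities x 7 strengths x 2 trial types x 3 repetitions."""
    trials = []
    for test_id in IDENTITIES:
        for strength in STRENGTHS:
            for _ in range(REPETITIONS):
                # Match: adapt to the opposite (original) face of the test identity.
                trials.append({"adapt_continuum": test_id, "test_id": test_id,
                               "strength": strength, "type": "match"})
                # Mismatch: adapt to the paired, non-opposite continuum.
                trials.append({"adapt_continuum": MISMATCH[test_id], "test_id": test_id,
                               "strength": strength, "type": "mismatch"})
    random.shuffle(trials)
    # Presented as four blocks of 42 trials with self-timed breaks between blocks.
    return [trials[i:i + 42] for i in range(0, 168, 42)]

blocks = build_condition_trials()
assert sum(len(block) for block in blocks) == 168
```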
Results
For each participant, the proportion of correct identifications was plotted as a function of identity strength and fitted with a cumulative Gaussian for both match and mismatch trials, in each movement condition (following Leopold et al., 2001), using GraphPad Prism 5 (GraphPad Software, La Jolla, CA). For 0% test faces (the average face), one quarter of the trials were assigned to each of the four target identities and were coded as correct if participants identified the faces as corresponding to the identity trajectory of the given target face (following Leopold et al., 2001). Data from five participants were excluded from further analysis due to inconsistent patterns of performance accompanied by poor curve fits (defined as R² < 0.50; see Jeffery, McKone, Haynes, Firth, Pellicano, & Rhodes, 2010; Rhodes, Evangelista, & Jeffery, 2009). This left 21 participants (13 women, mean age = 21.8 years, SD = 9.4) with good fits (M = 0.85, range = 0.53–0.99). The group data, as well as data from one representative participant, are shown in Figure 5.
Figure 5
 
The top row shows the group results. Mean proportion correct responses are shown for match (closed circles) and mismatch (open circles) trials as a function of identity strength for (a) static, (b) rigid, and (c) non-rigid movement conditions. The bottom row shows the results for a representative individual participant. Proportion correct responses as a function of identity strength for match and mismatch trials, fitted with cumulative Gaussians, are shown for (d) static, (e) rigid, and (f) non-rigid movement conditions. Error bars show one standard error either side of the mean.
Inspection of Figure 5 suggests that adaptation occurred in each of the three movement conditions. Match curves are positioned to the left of mismatch curves, reflecting that adaptation to an opposite face (match trials) resulted in more accurate identification of targets than adaptation to non-opposite faces (mismatch trials). To measure the strength of adaptation in each movement condition, we first calculated a 50% identification threshold (the mean of the cumulative Gaussian) for match and mismatch trials, for each participant, in each of the three movement conditions. The identity aftereffect for each participant in each movement condition was taken as the difference in identification thresholds between match and mismatch trials (mismatch minus match), so that a positive difference indicated an aftereffect in the predicted direction. The group mean and individual aftereffects in each movement condition are shown in Figure 6.
Figure 6
 
The mean (gray bars) and individual (circles) aftereffects in each movement condition. Error bars show one standard error either side of the mean.
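The threshold analysis can be sketched as follows, assuming an ordinary least-squares fit of the psychometric function as in standard curve-fitting software; the proportion-correct values below are invented for illustration, and the paper used GraphPad Prism rather than Python.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function; mu is the 50% threshold."""
    return norm.cdf(x, loc=mu, scale=sigma)

def threshold_50(strengths, prop_correct):
    """Fit a cumulative Gaussian and return its mean (50% identification threshold)."""
    (mu, sigma), _ = curve_fit(cum_gauss, strengths, prop_correct, p0=[40.0, 20.0])
    return mu

strengths   = np.array([-20, 0, 20, 40, 60, 80, 100])
match_pc    = np.array([0.10, 0.30, 0.60, 0.80, 0.90, 0.97, 1.00])  # illustrative data
mismatch_pc = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.92, 0.98])  # illustrative data

# Aftereffect = mismatch threshold minus match threshold; a positive value means
# opposite-face adaptation shifted the match curve leftward, so weaker targets
# could be identified after matched adaptation.
aftereffect = threshold_50(strengths, mismatch_pc) - threshold_50(strengths, match_pc)
print(f"aftereffect = {aftereffect:.1f} % identity strength")
```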
We analyzed our data using both classical and Bayesian statistics (using IBM SPSS Statistics for Mac, Version 21, and JASP, Version 0.8.6, respectively; IBM Corp., 2013; JASP Team, 2018) and report both below. Assumptions for the classical statistics were met, with data in each condition sufficiently normally distributed (skews −0.261 to 0.441; kurtosis −0.645 to 0.234; Kendall & Stuart, 1958) and the sphericity assumption for ANOVA satisfied (Mauchly's W = 0.961, p = 0.682). For the Bayesian analyses, we report the Bayes factor (BF₁₀), which gives the ratio of the strength of evidence for the alternative over the null model (Jeffreys, 1961). We interpret these following the guidelines suggested by Lee and Wagenmakers (2013, as cited by Wagenmakers et al., 2018), so that BFs between 0.33 and 3.0 are considered weak or anecdotal evidence for the null and alternative hypotheses, respectively, and BFs less than 0.33 or greater than 3.0 are considered stronger evidence for the null and alternative hypotheses, respectively. 
We first confirmed that the aftereffects in each condition were significantly greater than zero: static, t(20) = 4.27, p < 0.001, r = 0.69, BF₁₀ = 86.18; rigid, t(20) = 8.53, p < 0.001, r = 0.89, BF₁₀ = 3.28 × 10⁵; non-rigid, t(20) = 7.00, p < 0.001, r = 0.84, BF₁₀ = 2.00 × 10⁴.¹ These results provide very strong evidence that both static and moving adaptors (rigid and non-rigid) produced adaptation effects (all BFs > 100). We next examined whether movement condition affected the size of the aftereffects. A one-way repeated measures ANOVA showed a significant effect of movement condition, F(2, 40) = 3.36, p = 0.045, partial η² = 0.144, BF₁₀ = 1.65. Planned t tests indicated that aftereffects were larger for non-rigid (M = 19.8, SD = 12.7) than static adaptation (M = 12.0, SD = 12.9), t(20) = 2.43, p = 0.025, r = 0.48, BF₁₀ = 2.38. The difference between the non-rigid and rigid conditions (M = 14.0, SD = 7.5) was marginal, t(20) = 2.04, p = 0.055, r = 0.42, BF₁₀ = 1.27, and there was no significant difference between the static and rigid conditions, t(20) = 0.64, p = 0.532, r = 0.14, BF₁₀ = 0.273. However, inspection of the Bayes factors suggests only weak, equivocal support for a difference between the non-rigid and either of the other two conditions (1 < BFs < 3), but moderate support for there being no difference between the static and rigid conditions (BF < 0.33). 
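For completeness, the classical tests can be sketched in Python. The ANOVA and Bayes factors were computed in SPSS and JASP; scipy covers only the t tests shown here, and the data array is a hypothetical layout, as the raw data are not published.

```python
import numpy as np
from scipy import stats

def analyze_aftereffects(aftereffects: np.ndarray) -> None:
    """aftereffects: (n_participants, 3) array; columns = static, rigid, non-rigid."""
    labels = ["static", "rigid", "non-rigid"]
    df = len(aftereffects) - 1
    # One-sample t tests: does each condition's aftereffect differ from zero?
    for i, name in enumerate(labels):
        t, p = stats.ttest_1samp(aftereffects[:, i], 0.0)
        print(f"{name} vs zero: t({df}) = {t:.2f}, p = {p:.4f}")
    # Planned pairwise (paired) comparisons between movement conditions.
    for i, j in [(2, 0), (2, 1), (1, 0)]:
        t, p = stats.ttest_rel(aftereffects[:, i], aftereffects[:, j])
        print(f"{labels[i]} vs {labels[j]}: t({df}) = {t:.2f}, p = {p:.4f}")
```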
Discussion
We found that adaptation to moving faces produced robust identity aftereffects. These aftereffects were at least as large as those produced by adaptation to static faces. This result clearly demonstrates that we can adapt to identity in moving faces and is consistent with other evidence that invariant information about the structure of a face (identity) can be extracted from moving faces (e.g., Knappmeyer, Thornton, & Bülthoff, 2003). We were also able to rule out semantic (nonperceptual) adaptation as the source of these aftereffects because participants were not aware of any relationship between the adapt and test faces. Finally, our results demonstrate that face identity aftereffects can be very robust to substantial low-level variation, because our adapt and test stimuli differed simultaneously in size, texture, and color, and the aftereffects were also robust to variation in mid-level shape information caused by changes in viewpoint and movement of the facial muscles. This robustness suggests that face identity aftereffects can reflect adaptation of high-level visual representations that are critical for encoding face identity. 
Importantly, by showing that moving faces produce identity aftereffects as readily as static faces, our results suggest that face adaptation likely occurs in everyday life, where faces are often dynamic. In particular, our finding that moving faces can produce adaptation effects is critical for the proposal that adaptation plays a functional role in face perception. Adaptation is argued to calibrate face norms, optimizing our ability to discriminate between faces (e.g., Dennett et al., 2012; Palermo et al., 2017; Rhodes, Jeffery, et al., 2014; Rhodes et al., 2010). Support for this argument comes indirectly from studies showing that face adaptation effects are reduced in clinical groups who experience face recognition difficulties, such as individuals with prosopagnosia (Palermo et al., 2011), children with autism and their relatives (Ewing, Leach, Pellicano, Jeffery, & Rhodes, 2013; Ewing, Pellicano, & Rhodes, 2013; Fiorentini, Gray, Rhodes, Jeffery, & Pellicano, 2012; Pellicano, Jeffery, Burr, & Rhodes, 2007; Pellicano, Rhodes, & Calder, 2013; Pimperton, Pellicano, Jeffery, & Rhodes, 2009), and patients whose early vision in infancy was compromised by congenital bilateral cataracts (Rhodes, Nishimura, de Heering, Jeffery, & Maurer, 2017). Individual variation in face recognition ability in typical individuals is also positively associated with the strength of face adaptation effects (Dennett et al., 2012; Palermo et al., 2017; Rhodes, Jeffery et al., 2014). The emerging picture from these studies is that attenuated, or sluggish, adaptation might lead to poorly calibrated face norms, which in turn results in poorer ability to discriminate and remember faces. Our finding that adaptation occurs for video footage of moving faces strengthens this argument by showing that face norms could be calibrated by adaptation to the moving faces that we encounter in the real world. 
A striking aspect of our results was that aftereffects for moving adaptors were not any smaller than those produced by the static adaptors. This is noteworthy because the low-level differences between the adapt and test stimuli were much greater in the moving than in the static adaptation conditions, so smaller aftereffects would have been predicted for the moving adaptors if adaptation of low-level attributes contributed substantially to face identity aftereffects. Indeed, adaptation to faces in one of the moving conditions (non-rigid motion) produced significantly larger aftereffects than in the static condition, although the Bayesian analyses provided only equivocal support for this difference. It remains an open question whether moving faces more strongly activate high-level, possibly face-specific, face representations than do static faces. Stronger activation of neural populations coding high-level face information could result in stronger adaptation effects, even in the absence of any contribution from low-level adaptation. Such stronger high-level activation could account for why we saw no reduction in the strength of aftereffects for moving versus static faces, despite the considerable low-level differences between the moving adaptors and the test stimuli. Overall, our results strengthen the argument that face aftereffects can be informative about high-level perceptual coding of faces. 
In conclusion, the current study has shown that moving faces can produce robust identity aftereffects, reinforcing claims that face aftereffects can be used as a tool to probe the mechanisms underlying face perception and that adaptation may play an important role in our face perception abilities by calibrating these mechanisms in everyday life. 
Acknowledgments
This research was supported by the Australian Research Council Centre of Excellence in Cognition and its Disorders (CE110001021) and by an ARC Professorial Fellowship to Rhodes (DP0877379). 
Commercial relationships: none. 
Corresponding author: Linda Jeffery. 
Address: ARC Centre of Excellence in Cognition and its Disorders, School of Psychological Science, The University of Western Australia, Crawley, Western Australia, Australia. 
References
Abrosoft. (2002–2015). FantaMorph [computer software]. www.abrosoft.com
Afraz, S.-R., & Cavanagh, P. (2008). Retinotopy of the face aftereffect. Vision Research, 48, 42–54, https://doi.org/10.1016/j.visres.2007.10.028.
Ambadar, Z., Schooler, J. W., & Cohn, J. F. (2005). Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions. Psychological Science, 16 (5), 403–410, https://doi.org/10.1111/j.0956-7976.2005.01548.x.
Anderson, N. D., & Wilson, H. R. (2005). The nature of synthetic face adaptation. Vision Research, 45 (14), 1815–1828, https://doi.org/10.1016/j.visres.2005.01.012.
Barlow, H. B. (1990). A theory about the functional role and synaptic mechanism of visual after-effects. In Blakemore C. (Ed.), Vision: Coding and efficiency. Cambridge, UK: Cambridge University Press.
Christie, F., & Bruce, V. (1998). The role of dynamic information in the recognition of unfamiliar faces. Memory & Cognition, 26 (4), 780–790, https://doi.org/10.3758/bf03211397.
Clifford, C. W., Wenderoth, P., & Spehar, B. (2000). A functional angle on some after-effects in cortical vision. Proceedings of the Royal Society of London, Series B: Biological Sciences, 267 (1454), 1705–1710.
Curio, C., Giese, M., Breidt, M., Kleiner, M., & Bülthoff, H. (2010). Recognition of dynamic facial action probed by visual adaptation. In Curio, C. Giese, M. & Bülthoff H. (Eds.), Dynamic faces: Insights from experiments and computation (pp. 47–65). Cambridge, MA: MIT Press.
de la Rosa, S., Giese, M., Bülthoff, H. H., & Curio, C. (2013). The contribution of different cues of facial movement to the emotional facial expression adaptation aftereffect. Journal of Vision, 13 (1): 23, 1–15, https://doi.org/10.1167/13.1.23.
Dennett, H. W., McKone, E., Edwards, M., & Susilo, T. (2012). Face aftereffects predict individual differences in face recognition ability. Psychological Science, 23 (11), 1279–1287, https://doi.org/10.1177/0956797612446350.
Ewing, L., Leach, K., Pellicano, E., Jeffery, L., & Rhodes, G. (2013). Reduced face aftereffects in autism are not due to poor attention. PLoS One, 8 (11), e81353, https://doi.org/10.1371/journal.pone.0081353.
Ewing, L., Pellicano, E., & Rhodes, G. (2013). Atypical updating of face representations with experience in children with autism. Developmental Science, 16 (1), 116–123, https://doi.org/10.1111/desc.12007.
Fiorentini, C., Gray, L., Rhodes, G., Jeffery, L., & Pellicano, E. (2012). Reduced face identity aftereffects in relatives of children with autism. Neuropsychologia, 50 (12), 2926–2932, https://doi.org/10.1016/j.neuropsychologia.2012.08.019.
Fox, C. J., Oruç, I., & Barton, J. J. S. (2008). It doesn't matter how you feel. The facial identity aftereffect is invariant to changes in facial expression. Journal of Vision, 8 (3): 11, 1–13, https://doi.org/10.1167/8.3.11.
Ghuman, A. S., McDaniel, J. R., & Martin, A. (2010). Face adaptation without a face. Current Biology, 20 (1), 32–36.
Hills, P. J., Elward, R. L., & Lewis, M. B. (2008). Identity adaptation is mediated and moderated by visualisation ability. Perception, 37, 1241–1257.
Hills, P. J., Elward, R. L., & Lewis, M. B. (2010). Cross-modal face identity aftereffects and their relation to priming. Journal of Experimental Psychology: Human Perception and Performance, 36 (4), 876–891, https://doi.org/10.1037/a0018731.
IBM Corp. (2013). SPSS statistics for Mac, version 21. Armonk, NY: IBM Corp.
JASP Team. (2018). JASP (version 0.8.6). https://jasp-stats.org
Javadi, A. H., & Wee, N. (2012). Cross-category adaptation: Objects produce gender adaptation in the perception of faces. PLoS One, 7 (9), e46079, https://doi.org/10.1371/journal.pone.0046079.
Jeffery, L., Rhodes, G., & Busey, T. (2006). View-specific coding of face shape. Psychological Science, 17 (6), 501–505, https://doi.org/10.1111/j.1467-9280.2006.01735.x.
Jeffery, L., McKone, E., Haynes, R., Firth, E., Pellicano, E., & Rhodes, G. (2010). Four-to-six-year-old children use norm-based coding in face-space. Journal of Vision, 10 (5): 18, 1–19, https://doi.org/10.1167/10.5.18.
Jeffery, L., Rhodes, G., McKone, E., Pellicano, E., Crookes, K., & Taylor, E. (2011). Distinguishing norm-based from exemplar-based coding of identity in children: Evidence from face identity aftereffects. Journal of Experimental Psychology: Human Perception and Performance, 37 (6), 1824–1840, https://doi.org/10.1037/a0025643.
Jeffreys, H. (1961). Theory of probability (Oxford classic texts in physical sciences) (3rd ed.). New York, NY: Oxford University Press.
Jiang, F., Blanz, V., & O'Toole, A. J. (2006). Probing the visual representation of faces with adaptation: A view from the other side of the mean. Psychological Science, 17 (6), 493–500.
Kendall, M., & Stuart, A. (1958). The advanced theory of statistics. London, UK: Griffin.
Knappmeyer, B., Thornton, I. M., & Bülthoff, H. H. (2003). The use of facial motion and facial form during the processing of identity. Vision Research, 43 (18), 1921–1936, https://doi.org/10.1016/S0042-6989(03)00236-0.
Lander, K., & Bruce, V. (2003). The role of motion in learning new faces. Visual Cognition, 10 (8), 897–912, https://doi.org/10.1080/13506280344000149.
Laurence, S., Hole, G. J., & Hills, P. J. (2014). Lecturers' faces fatigue their students: Face identity aftereffects for dynamic and static faces. Visual Cognition, 22 (8), 1072–1083, https://doi.org/10.1080/13506285.2014.950364.
Lee, M. D., & Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. Cambridge, UK: Cambridge University Press.
Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4 (1), 89–94, https://doi.org/10.1038/82947.
MacLin, O. H., & Webster, M. A. (2001). Influence of adaptation on the perception of distortions in natural images. Journal of Electronic Imaging, 10 (1), 100–109, https://doi.org/10.1117/1.1330573.
Mather, G., Verstraten, F., & Anstis, S. (1998). The motion aftereffect: A modern perspective. Cambridge, MA: MIT Press.
McKone, E., Jeffery, L., Boeing, A., Clifford, C.W.G., & Rhodes, G. (2014) Face identity aftereffects increase monotonically with adaptor extremity over, but not beyond, the range of natural faces. Vision Research, 98, 1–13, https://doi.org/10.1016/j.visres.2014.01.007.
O'Leary, A., & McMahon, M. (1991). Adaptation to form distortion of a familiar shape. Perception & Psychophysics, 49 (4), 328–332.
O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Sciences, 6 (6), 261–266, https://doi.org/10.1016/S1364-6613(02)01908-3.
Palermo, R., Jeffery, L., Lewandowsky, J., Fiorentini, C., Irons, J. L., Dawel, A.,… Rhodes, G. (2017). Adaptive face coding contributes to individual differences in facial expression recognition independently of affective factors. Journal of Experimental Psychology: Human Perception and Performance, 44 (4), 503–517, https://doi.org/10.1037/xhp0000463.
Palermo, R., Rivolta, D., Wilson, C. E., & Jeffery, L. (2011). Adaptive face space coding in congenital prosopagnosia: Typical figural aftereffects but abnormal identity aftereffects. Neuropsychologia, 49 (14), 3801–3812, https://doi.org/10.1016/j.neuropsychologia.2011.09.039.
Pellicano, E., Jeffery, L., Burr, D., & Rhodes, G. (2007). Abnormal adaptive face-coding mechanisms in children with autism spectrum disorder. Current Biology, 17 (17), 1508–1512.
Pellicano, E., Rhodes, G., & Calder, A. J. (2013). Reduced gaze aftereffects are related to difficulties categorising gaze direction in children with autism. Neuropsychologia, 51 (8), 1504–1509, https://doi.org/10.1016/j.neuropsychologia.2013.03.021.
Pike, G., Kemp, R., Towell, N., & Phillips, K. (1997). Recognizing moving faces: The relative contribution of motion and perspective view information. Visual Cognition, 4, 409–437, https://doi.org/10.1080/713756769.
Pimperton, H., Pellicano, E., Jeffery, L., & Rhodes, G. (2009). The role of higher level adaptive coding mechanisms in the development of face recognition. Journal of Experimental Child Psychology, 104 (2), 229–238, https://doi.org/10.1016/j.jecp.2009.05.009.
Rhodes, G. (2017). Adaptive coding and face recognition. Current Directions in Psychological Science, 26 (3), 218–224, https://doi.org/10.1177/0963721417692786.
Rhodes, G., Evangelista, E., & Jeffery, L. (2009). Orientation-sensitivity of face identity aftereffects. Vision Research, 49 (19), 2379–2385, https://doi.org/10.1016/j.visres.2009.07.010.
Rhodes, G., Ewing, L., Jeffery, L., Avard, E., & Taylor, L. (2014). Reduced adaptability, but no fundamental disruption, of norm-based face-coding mechanisms in cognitively able children and adolescents with autism. Neuropsychologia, 62 (1), 262–268, https://doi.org/10.1016/j.neuropsychologia.2014.07.030.
Rhodes, G., & Jeffery, L. (2006). Adaptive norm-based coding of facial identity. Vision Research, 46 (18), 2977–2987, https://doi.org/10.1016/j.visres.2006.03.002.
Rhodes, G., Jeffery, L., Taylor, L., Hayward, W., & Ewing, L. (2014). Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability. Journal of Experimental Psychology: Human Perception and Performance, 40 (3), 897–903, https://doi.org/10.1037/a0035939.
Rhodes, G., & Leopold, D. A. (2011). Adaptive norm-based coding of face identity. In Calder, A. J. Rhodes, G. Johnston, M. H. & Haxby J. V. (Eds.), Handbook of face perception (pp. 263–286). Oxford, UK: Oxford Univerity Press.
Rhodes, G., Lie, H. C., Thevaraja, N., Taylor, L., Iredell, N., Curran, C.,… Simmons, L. W. (2011). Facial attractiveness ratings from video-clips and static images tell the same story. PLoS One, 6 (11), e26653, https://doi.org/10.1371/journal.pone.0026653.
Rhodes, G., Nishimura, M., de Heering, A., Jeffery, L., & Maurer, D. (2017). Reduced adaptability, but no fundamental disruption, of norm-based face coding following early visual deprivation from congenital cataracts. Developmental Science, 20 (3): e12384, https://doi.org/10.1111/desc.12384.
Rhodes, G., Watson, T. L., Jeffery, L., & Clifford, C. W. G. (2010). Perceptual adaptation helps us identify faces. Vision Research, 50 (10), 963–968, https://doi.org/10.1016/j.visres.2010.03.003.
Roark, D. A., Barrett, S. E., Spence, M. J., Abdi, H., & O'Toole, A. J. (2003). Psychological and neural perspectives on the role of motion in face recognition. Behavioral and Cognitive Neuroscience Reviews, 2 (1), 15–46, https://doi.org/10.1177/1534582303002001002.
Storrs, K. R. (2015). Are high-level aftereffects perceptual? Frontiers in Psychology, 6: 157, https://doi.org/10.3389/fpsyg.2015.00157.
Thompson, P., & Burr, D. (2009). Visual aftereffects. Current Biology, 19 (1), R11–R14, https://doi.org/10.1016/j.cub.2004.09.011.
Wagenmakers, E.-J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, J.,… Morey, R. D. (2018). Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review, 25 (1), 58–76, https://doi.org/10.3758/s13423-017-1323-7.
Watson, T. L., Rhodes, G., & Clifford, C. W. G. (2006). Neural coding of faces adapts to help us identify those around us. Submitted.
Webster, M. A. (2015). Visual adaptation. Annual Review of Vision Science, 1 (1), 547–567, https://doi.org/10.1146/annurev-vision-082114-035509.
Webster, M. A., & MacLeod, D. I. A. (2011). Visual adaptation and face perception. Philosophical Transactions of the Royal Society B: Biological Sciences, 366 (1571), 1702–1725, https://doi.org/10.1098/rstb.2010.0360.
Webster, M. A., & MacLin, O. H. (1999). Figural aftereffects in the perception of faces. Psychonomic Bulletin & Review, 6 (4), 647–653.
Zhao, L., & Chubb, C. (2001). The size-tuning of the face-distortion after-effect. Vision Research, 41 (23), 2979–2994, https://doi.org/10.1016/S0042-6989(01)00202-4.
Footnotes
1  We note that analyses using an alternative measure of the aftereffect (the overall proportion of correct responses for match minus mismatch trials) that allowed inclusion of all 26 participants, as no curve fitting was required, produced similar results, with significant aftereffects in all three adaptation conditions.