Research Article | March 2008
It doesn't matter how you feel. The facial identity aftereffect is invariant to changes in facial expression
Christopher J. Fox, Ipek Oruç, Jason J. S. Barton
Journal of Vision, March 2008, Vol. 8(3), 11. https://doi.org/10.1167/8.3.11
Abstract

Previous studies have shown that facial expression aftereffects are modulated by the identity of the adapting face, suggesting both identity-dependent and identity-independent representations of facial expression. In this study, we asked whether facial identity aftereffects were similarly modulated by expression. In Experiment 1, the congruency of expression between adapting and test faces did not affect the identity aftereffect for novel faces, suggesting that the neural representations activated by novel identities are independent of expression. In Experiment 2, we examined whether expression dependency might be found with more familiar faces but still did not find any modulation of identity aftereffects by the congruency of expression. In Experiment 3, we measured the similarity between the faces used to probe expression and identity adaptation, using both an ideal observer and human subjects, to determine whether the discrepancy between these identity-adaptation results and our previous findings for expression adaptation could be explained by greater similarity between faces of the same person with different expressions than between faces of different people with the same expression. However, the contrast thresholds required to discriminate between faces of differing expression were similar to those for faces of differing identity. We conclude that, in contrast to the significant identity-dependent component seen in representations of expression, representations of facial identity are independent of variations in expression.

Introduction
Faces are complex stimuli. Not only do they have complicated three-dimensional structures, but they convey a multitude of perceptual data, including information about identity, gender, race, expression, and direction of gaze, among others. Current behavioral and neuroanatomical models have proposed that the processing of these different types of information may occur in at least two streams (Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000). One stream is dedicated to the extraction of structural cues that support the perception of identity, gender, and race. Such properties are stable over time, and therefore it is hypothesized that these dimensions involve neural representations that are invariant to the dynamic elements of faces (Haxby et al., 2000). These dynamic elements may be processed by the other stream, as temporally varying information conveys key data for the perception of expression, gaze direction, and visual speech (Haxby et al., 2000). The proposal that different anatomic structures process different types of information might lead to a prediction that the perception of facial identity and the perception of facial expression are independent. However, there is growing behavioral and anatomic evidence that this is not the case and that there may be interactions between the two (Calder & Young, 2005; de Gelder, Frissen, Barton, & Hadjikhani, 2003; Fox & Barton, 2007; Ganel, Valyear, Goshen-Gottstein, & Goodale, 2005; Humphreys, Avidan, & Behrmann, 2007; Kaufmann & Schweinberger, 2004; Palermo & Rhodes, 2007; Stephan, Breen, & Caine, 2006; Winston, Henson, Fine-Goulden, & Dolan, 2004). 
Face adaptation is a recently developed method that can be used to probe the neural representations responsible for the perception of these various facial dimensions (Fox & Barton, 2007; Leopold, O'Toole, Vetter, & Blanz, 2001; Webster, Kaping, Mizokami, & Duhamel, 2004). Prolonged viewing of a particular face causes a perceptual aftereffect in which an average face is now seen as having structural properties opposite to the adapted face (Leopold et al., 2001). Aftereffects have been reported for the facial dimensions of identity, gender, race, and expression among others (Fox & Barton, 2007; Leopold et al., 2001; Webster et al., 2004). In all cases, the perceptual aftereffect biases perception of an ambiguous test face away from the adapting face along the dimension being examined. 
In earlier studies, we used adaptation to explore the nature of neural representations of facial expression in the human visual system (Butler, Oruc, Fox, & Barton, 2008; Fox & Barton, 2007). We have shown that adaptation in our paradigm is not generated at the level of local image elements such as orientation, shape, or curvature, but likely at a higher level of face representation (Butler et al., 2008). Furthermore, we have shown that the magnitude of the expression aftereffect is modulated by the identity of the adapting face (Fox & Barton, 2007). When the adapting and test faces are images of the same person, a large expression aftereffect is generated (Fox & Barton, 2007; Webster et al., 2004). An expression aftereffect is still produced even with incongruent identities (when the adapting and test faces are of different people), suggesting that at least some of the expression aftereffect can be attributed to an identity-invariant representation of expression (Fox & Barton, 2007). Of note, though, the magnitude of the expression aftereffect when using incongruent identities is less than that produced when adapting and test images are of the same person (Fox & Barton, 2007). This larger adaptation with congruent identities may suggest the existence of another neural representation of facial expression, which is specific to the identity of the adapting face (Fox & Barton, 2007). Indeed, the concept of both dependent and independent (or “invariant”) layers of representation, with the former providing converging input to the latter, is not an uncommon feature of neural network models that simulate human object recognition (Rosen, 2003). 
This finding of both identity-dependent and identity-invariant components in expression adaptation raises the question of whether a corresponding situation exists for the representation of identity. Thus, the first goal of our study was to determine if there are both expression-dependent and expression-invariant components to identity adaptation. This issue is further complicated, however, by the fact that, unlike the situation with facial expressions, where the majority of subjects have extensive experience with most facial expressions, neural representations of identity may differ in their strength, with novel faces having relatively weak representations and highly familiar faces having strong representations. Hence, a second goal of our study was to determine if the degree of facial familiarity modulated the effects of expression on the identity aftereffect. 
Experiment 1
In Experiment 1, we asked whether there is evidence for expression dependence within neural representations of identity for novel faces. Morph series were created between two anonymous identities with the same expression. Images selected from the middle range of these morph series display a recognizable expression but are ambiguous in their identity. Subjects adapt to one of the two identities used to create the morph series and are then asked to judge which identity an ambiguous morphed test face most resembles. Adaptation to the first identity will increase the probability that these ambiguous test faces are identified as similar to the second identity, while adaptation to the second identity will decrease that probability. The difference between these two probabilities is the measure of the identity aftereffect. By manipulating the adapting faces, but using the same test faces across experimental conditions, we can determine which aspects of the adapting faces affect the generation of the identity aftereffect.
Experiment 1 consisted of three experimental conditions. (1) The congruent-expression condition used, as adapting stimuli, the same images used to construct the morphed test faces. Thus, the facial expressions in the adapting faces and the test faces are the same. (2) The congruent-expression/different-image condition used as adapting stimuli different images of the same faces used to create the morphed test stimuli. However, these different images were still of the same individuals with the same expression. If the congruent-expression/different-image condition produces an aftereffect equal to that in the congruent-expression condition, this would ensure that the aftereffect is not due to properties specific to a particular image, but due to a specific face. (3) The incongruent-expression condition used, as adapting stimuli, faces of the same people but with a different expression than that in the images used to create the morphed test faces. A significant aftereffect in this condition would be consistent with an expression-invariant representation of identity. A significant reduction in the aftereffect compared to the congruent-expression condition would be consistent with the existence of a separate expression-dependent representation of identity as well.
Methods
Subjects
Ten subjects (7 female; age = 29.1 ± 5.5 years) participated in Experiment 1. All subjects except one (CJF) were naïve to the purpose of the experiment. Subjects had normal or corrected-to-normal vision and could clearly see all faces and read on-screen text at the testing distance of 57 cm. The protocol was approved by the institutional review boards of Vancouver General Hospital and the University of British Columbia. All subjects gave informed consent, and the experiment was conducted in accordance with the principles of the Declaration of Helsinki.
Stimuli
Two female photographic subjects (F01 and F22) were selected from the Karolinska Database of Emotional Faces (Lundqvist & Litton, 1998). A and B versions of these individuals displaying anger and fear were used. Background, hair, ears, and neck were blacked out using Adobe Photoshop CS2 9.0.2 (www.adobe.com). Facial features and external jaw contour were preserved using this method. Distinguishing marks, such as moles, were removed using the Spot Healing Brush Tool. Images were then cropped to ensure that all faces were centrally located within the image frame. Cropped images were resized and displayed at a standard width of 400 pixels (10.8°). Luminance and contrast were visually adjusted to be comparable across all images. 
Images of F01 and F22 with similar expressions (e.g., F01/Angry-A with F22/Angry-A) were paired to create morph series with Fantamorph 3.0 (www.fantamorph.com). A morph series was created for each of the two versions (A and B images) of each facial expression (angry and afraid). Each of the four morph series contained 41 images, with each image representing an equal 2.5% step along the morph series (i.e., 100/0%, 97.5/2.5%, …, 0/100%). The thirteen middle images (65/35% to 35/65%) were used in the experiment as the test faces with ambiguous identity, while the unmorphed original images were used as the adapting faces.
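As a concrete sketch of this bookkeeping, the 41 morph levels and the 13-image test window can be enumerated in a few lines of Python (variable names are ours; integer tenths of a percent avoid floating-point comparisons):

```python
import numpy as np

# 41 morph levels in 2.5% steps, stored in tenths of a percent:
# 0, 25, 50, ..., 1000 (i.e., 100/0% F01/F22 through 0/100%).
w = np.arange(41) * 25
levels = [f"{(1000 - x) / 10:.1f}/{x / 10:.1f}%" for x in w]

# The 13 middle images (65/35% through 35/65%) serve as the
# identity-ambiguous test faces; the unmorphed endpoints are the adaptors.
test_indices = [i for i, x in enumerate(w) if 350 <= x <= 650]
assert len(test_indices) == 13
print(levels[test_indices[0]], "...", levels[test_indices[-1]])
```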
Half of the subjects were assigned the two A-series for morphed test images, and half were assigned the two B-series. The 13 test faces from each of the two assigned morph series (one for angry and one for afraid images) were used as test images in all experimental conditions for that subject. Adapting stimuli were manipulated between experimental conditions: 
1. The congruent-expression condition used, as adapting stimuli, the same (unmorphed) images that were used to generate the morphed test images. Thus, the A-series group of subjects (1) adapted to the Angry-A images of F01 or F22, before seeing the Angry-A test images that morphed identity between F01 and F22, and (2) adapted to the Afraid-A images of F01 or F22, before seeing the Afraid-A test images of identity morphs between F01 and F22.
2. The congruent-expression/different-image condition used as adapting stimuli the unmorphed images used to create the alternative series of morphed test images (which the subject never saw). Thus, the A-series group of subjects (1) adapted to the Angry-B images of F01 or F22, before seeing the Angry-A test images that morphed identity between F01 and F22, and (2) adapted to the Afraid-B images of F01 or F22, before seeing the Afraid-A test images of identity morphs between F01 and F22.
3. The incongruent-expression condition used as adapting stimuli the images used to create the morphed test faces with the other expression. Thus, the A-series group of subjects (1) adapted to the Angry-A images of F01 or F22, before seeing the Afraid-A test images that morphed identity between F01 and F22, and (2) adapted to the Afraid-A images of F01 or F22, before seeing the Angry-A test images of identity morphs between F01 and F22.
As a result, the incongruent-expression condition used the same adapting stimuli and the same morphed test faces as the congruent-expression condition; the critical difference is that the pairing of adapting and test stimuli was switched. This aspect of the design controls, within subjects, for any variation in the adapting power of specific images. The use of the A-series of angry and afraid images for half the subjects and the B-series for the other half allowed us to balance, across subjects, the adapting and test stimuli between the congruent-expression and congruent-expression/different-image conditions. A sketch of the resulting pairings appears below.
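Purely as illustrative bookkeeping (not the original Superlab script), the adaptor-to-test pairings for a subject assigned the A-series test morphs can be written out explicitly:

```python
# Adapting image -> test-morph series pairings for an A-series subject.
# Each adaptor comes in F01 and F22 versions; only expression/version
# labels are shown here.
conditions = {
    "congruent-expression": [("Angry-A", "Angry-A"), ("Afraid-A", "Afraid-A")],
    "congruent-expression/different-image": [("Angry-B", "Angry-A"), ("Afraid-B", "Afraid-A")],
    "incongruent-expression": [("Angry-A", "Afraid-A"), ("Afraid-A", "Angry-A")],
}

for condition, pairs in conditions.items():
    for adapt, test in pairs:
        print(f"{condition:38s} adapt {adapt:8s} -> test series {test}")
```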
In the experimental trials, a choice screen was displayed after the presentation of each morphed test face. Each choice screen displayed the two unmorphed identities (F01 and F22) used to create the morph series from which the test face was chosen, with the left/right location of F01 versus F22 randomized across trials. Subjects performed a two-alternative forced-choice task and indicated which identity the morphed test face most resembled with a key press. 
Apparatus
Experiment 1 was designed using Superlab Pro 2.0.4 (www.cedrus.com) and displayed on an HP Compaq nx9600 notebook with a 17-in. wide-screen monitor. Subjects viewed the stimuli from a distance of approximately 57 cm in standard dim room lighting.
Procedure
To familiarize them with the experimental procedure, we first gave the subjects a short practice version of the experiment made from two other faces. This practice block consisted of 6 trials and was repeated if subjects failed to understand the instructions. Following the practice block, subjects were shown images of F01 and F22 with neutral expressions. They were told that they would be making judgments on facial images morphed between these two individuals and that they were to make their best guess as to whom the morphed face most resembled. 
The experiment consisted of three blocks, one for each experimental condition, presented in a randomized order to each subject. Each block comprised the two morph series assigned to that subject and the 4 adapting stimuli appropriate for that experimental condition. Each adapting stimulus was seen once before each of its 13 respective test stimuli, for 52 trials per block and 156 trials in total. Blocks were separated by a short rest break.
Within each block, a trial began with 5 s of adaptation to one of the four possible adapting stimuli. Subjects were told to attend to the face on the screen but not to fixate on a single location. The adapting stimulus was followed by a 50 ms mask (a random arrangement of black and white pixels) to reduce apparent motion effects and then a 300 ms morphed test face. Following the test face, a choice screen was displayed and remained on-screen until subjects indicated their response (Figure 1). A 500 ms blank screen acted as the inter-trial interval. This trial sequence is identical to the one used in our previous study (Fox & Barton, 2007), with timing parameters based on prior studies of the dynamics of face adaptation (Leopold, Rhodes, Muller, & Jeffery, 2005).
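The timing of one trial can be summarized as a simple event list (durations taken from the text; this is a descriptive sketch, not the Superlab implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    name: str
    duration_ms: Optional[int]  # None = displayed until the subject responds

TRIAL = [
    Event("adapting face", 5000),
    Event("black/white pixel mask", 50),
    Event("morphed test face", 300),
    Event("choice screen (2AFC)", None),
    Event("blank inter-trial interval", 500),
]

for e in TRIAL:
    timing = "until response" if e.duration_ms is None else f"{e.duration_ms} ms"
    print(f"{e.name:28s} {timing}")
```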
Figure 1. An example of one experimental trial. Images shown are taken from the famous familiar congruent-expression condition in Experiment 2. Each trial began with a 5 s presentation of an adapting stimulus. This adaptation was followed by a short mask (50 ms) to disrupt any apparent motion effects. An identity-ambiguous test stimulus was then presented for 300 ms. This was followed by one of two possible choice screens, and the subject was asked to choose the identity that most closely resembled the previously viewed test stimulus. The different pairings of adapting and test stimuli created the various experimental conditions.
Analysis
For each adapting stimulus, we calculated a response score by assigning a 0 or 1 to the two possible identity choices and averaging this value across the 13 test stimuli associated with that adapting stimulus (Fox & Barton, 2007). All 13 test stimuli were taken from the mid-range of the morph series, placing them on the slope of the psychometric sigmoid curve and ensuring that they were perceived as having an ambiguous identity. As each of the 13 test stimuli was presented only once in each condition, morph level was not considered as a factor; rather, the response score averaging the data for all 13 stimuli was used for all statistical analyses. For illustrative purposes, we also calculated the mean difference in response scores between pairs of adapting stimuli (e.g., the response score after adapting to F01-Angry minus the response score after adapting to F22-Angry), which is an index of the adaptation effect. Response scores were entered into a univariate general linear model (GLM) with condition (congruent-expression, congruent-expression/different-image, incongruent-expression), adapting-face identity (F01, F22), and adapting-face expression (angry, afraid) as fixed factors and subject as a random factor. Post hoc linear contrasts were performed to examine any significant effects. All statistical analyses were performed with SPSS 14.0 (www.spss.com). The significance level was set at α = 0.05.
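In code, the response score and the aftereffect index reduce to simple means; a minimal sketch (array and function names are ours):

```python
import numpy as np

def response_score(choices):
    """choices: thirteen 0/1 responses (1 = 'looks like F22') collected
    after one adapting stimulus; the response score is their mean."""
    return np.mean(choices)

def aftereffect_index(score_after_f01, score_after_f22):
    """Adapting to F01 biases ambiguous morphs toward F22 and vice versa,
    so the difference in response scores indexes the identity aftereffect."""
    return score_after_f01 - score_after_f22

# Example: a strong aftereffect yields a large positive difference.
print(aftereffect_index(response_score([1] * 9 + [0] * 4),
                        response_score([1] * 4 + [0] * 9)))
```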
Results
The GLM revealed a significant main effect of adapting-face identity (F(1,9) = 24.54; p < 0.005), indicating a robust identity aftereffect. Post hoc linear contrasts showed that significant identity aftereffects were generated in all conditions (congruent-expression: t(19) = 3.71, p < 0.005; congruent-expression/different-image: t(19) = 3.09, p < 0.01; incongruent-expression: t(19) = 4.15, p < 0.005). Other main effects were not significant. We observed a significant interaction between adapting-face identity and adapting-face expression (F(1,9) = 6.72; p < 0.05), with afraid faces producing a larger identity aftereffect (mean difference score ± SEM: 0.24 ± 0.04) than angry faces (0.14 ± 0.04). No other interactions were significant. The lack of a significant interaction between condition and adapting-face identity (F(2,18) = 0.34; p > 0.5) indicates that there was no difference in the identity aftereffect elicited by the three experimental conditions (Figure 2). Changing the adapting image in the congruent-expression/different-image condition did not reduce the identity aftereffect, indicating that the identity aftereffect is not dependent on the specific image used to create the morphed test faces. The fact that we obtained an aftereffect even though the adapting and test stimuli had different expressions in the incongruent-expression condition is consistent with adaptation of an expression-invariant representation of identity. The fact that this aftereffect is not significantly smaller than the aftereffect in the congruent-expression condition suggests little or no contribution to adaptation from an expression-dependent representation of identity.
Figure 2. Experiment 1. (A) Mean response scores (± SEM) are presented, with significant differences indicated by asterisks. Significant differences in response score following adaptation to F01 versus adaptation to F22 represent a significant identity aftereffect for that experimental condition. (B) The mean difference in response scores (a quantitative index of the aftereffect) is presented for each experimental condition. Identity aftereffects are found for all three experimental conditions: They are not affected by a change in the image used for the adapting stimulus, even if the expression in the adapting stimulus is no longer congruent with that of the test stimuli. This suggests that, for novel faces, the identity aftereffect is not image specific and is also invariant across changes in facial expression.
Comment
Experiment 1 used a very similar methodology to our previous study (Fox & Barton, 2007), which examined the influence of identity on adaptation for facial expression. That earlier study showed that the expression aftereffect was much larger when the identities of adapting and test faces were congruent than when these identities were incongruent (Fox & Barton, 2007). These results suggested a hierarchical structure underlying facial expression perception, with identity-dependent representations of expression providing input to identity-invariant representations of expression (Fox & Barton, 2007), analogous to neural networks that model the emergence of viewpoint-invariant from view-specific representations of faces (Rosen, 2003). 
The results of the present Experiment 1 are different. These data do not provide evidence of a similar pattern of expression dependence within representations of facial identity. Aftereffects are not modulated by the congruency of facial expression between the adapting stimuli and the test faces. At the very least, if such expression-dependent representations of identity do exist, their contribution to adaptation is very weak compared to that of expression-invariant representations. 
One possible reason for such weak expression-dependent representations is that the faces we used to probe for identity aftereffects in Experiment 1 were novel to the subjects. Other groups have suggested that expression effects in identity processing may vary with the familiarity of the face (Ganel & Goshen-Gottstein, 2004; Kaufmann & Schweinberger, 2004). To test the possibility that expression-modulated aftereffects might emerge with more familiar faces, which may have stronger identity representations than novel faces, we performed a second experiment.
Experiment 2
In Experiment 2, we used pairs of faces that differed in their level of familiarity. As in Experiment 1, for each face pair we created adaptation trials with congruent-expression and incongruent-expression conditions. The first level of facial familiarity used an unnamed novel face pair. These novel faces were different individuals than those used in Experiment 1 but were likewise unfamiliar to the subjects prior to testing; hence, this condition was designed to replicate the findings of Experiment 1. The second level of facial familiarity also consisted of a novel face pair; however, in the days preceding testing, subjects were shown these faces, which were given arbitrary names, and were asked to memorize them and their names. This named novel pair was thus recently but minimally familiar to subjects. The third level of facial familiarity used a famous familiar face pair, which consisted of two celebrity faces. (Previous work showing that the effects of expression on identity recognition tasks vary with familiarity used similar comparisons between novel and celebrity faces; Ganel & Goshen-Gottstein, 2004; Kaufmann & Schweinberger, 2004.) The fourth level of facial familiarity used a personally familiar face pair. Some studies suggest that the representations of personally familiar faces may differ from those of celebrities (Herzmann, Schweinberger, Sommer, & Jentzsch, 2004; Kloth et al., 2006), possibly because we experience the faces of those in our daily lives in a wider dynamic range (of viewpoint, expression, gaze, etc.) than the faces of people in the news, who may be portrayed in more stereotyped views and situations. By using an array of familiarity levels, this experiment was designed to (1) determine more comprehensively whether expression dependence of identity representations is mediated by familiarity and (2) identify the level of familiarity at which it emerges, specifically whether a name, semantic knowledge, or personal experience is the key to the formation of expression-dependent representations.
Methods
Subjects
Twelve subjects participated in Experiment 2 (7 female; age = 29 ± 4.97 years). Eight subjects had previously participated in Experiment 1 (including CJF), and four subjects were newly recruited for Experiment 2.
Stimuli
Due to the limited availability of celebrity images displaying anger or fear in viewpoints, lighting, and resolution suitable for morphing, we used happy and neutral faces in this experiment. Happy faces were defined as frontal-view faces with open-mouth smiles, and neutral faces as frontal-view faces with closed mouths and horizontal lips. Each familiarity level consisted of two female faces and two pictures of each face (one happy and one neutral). Unnamed novel faces were two female faces (F15 and F24) selected from the Karolinska Database of Emotional Faces (Lundqvist & Litton, 1998). Named novel faces were two different female faces (F08 and F25) selected from the same database. Famous familiar faces were two female celebrities (Cameron Diaz and Reese Witherspoon) whose images were gathered from the Internet. Personally familiar faces were two female lab members whom all subjects had encountered on a nearly daily basis for at least 3 months. Eye color was consistent within face pairs. All faces were processed and sized using Adobe Photoshop CS2 9.0.2 as outlined in Experiment 1. Two morph series were made for each familiarity level, each between the two individuals displaying the same expression, resulting in a happy and a neutral morph series for each familiarity level. Again, the unmorphed endpoints of each morph series were taken as adapting stimuli, while the central 13 morphed images were taken as identity-ambiguous test faces. The congruent-expression conditions used adapting and test stimuli taken from the same morph series; the incongruent-expression conditions used adapting stimuli from one morph series and test stimuli from the other.
Apparatus
Experiment 2 was designed and presented as described in Experiment 1.
Procedure
All subjects, as in Experiment 1, first participated in a short practice block to ensure they understood the task. Four experimental blocks (unnamed novel, named novel, famous familiar, personally familiar) were presented to subjects in a random order. Before each block, subjects were shown unaltered images of the two identities that would be used in that block. They were told that they would be making judgments on faces morphed between these two individuals and that they were to make their best guess as to whom the morphed face most resembled. Trials were organized as described in Experiment 1.
We combined congruent-expression and incongruent-expression conditions within each block. Each block comprised 4 adapting stimuli (2 individuals displaying 2 different expressions) and two morph series (one for each expression). Each adapting stimulus was seen once before each of the 13 test stimuli taken from its own morph series (congruent-expression) and once before each of the 13 test stimuli taken from the morph series with the other expression (incongruent-expression). This resulted in 104 trials per block and 416 trials in total, with blocks separated by a short rest break. The enumeration sketched below confirms these counts.
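A quick enumeration of the crossing of adaptors with both test series reproduces the trial counts (labels are ours):

```python
from itertools import product

identities = ["Identity-1", "Identity-2"]
expressions = ["happy", "neutral"]
n_tests = 13  # ambiguous morphs per series

trials = [
    (ident, adapt_expr, test_expr,
     "congruent" if adapt_expr == test_expr else "incongruent", t)
    for ident, adapt_expr, test_expr in product(identities, expressions, expressions)
    for t in range(n_tests)
]

# 2 identities x 2 adapting expressions x 2 test series x 13 morphs = 104
assert len(trials) == 104
assert len(trials) * 4 == 416  # four familiarity blocks in total
```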
Analysis
Response and mean difference scores were calculated as described in Experiment 1. Response scores were entered into a univariate GLM with familiarity-level (unnamed novel, named novel, famous familiar, personally familiar), adapting-face identity (Identity-1, Identity-2), adapting-face expression (happy, neutral), and expression-congruency (expression-congruent, expression-incongruent) as fixed factors and subject as a random factor. Post hoc linear contrasts were performed to examine any significant effects, with the significance level set at α = 0.05.
Results
The GLM revealed a significant main effect of adapting-face identity (F(1,11) = 37.06; p < 0.001). Linear contrasts showed that both congruent-expression and incongruent-expression conditions produced significant identity aftereffects at all four levels of facial familiarity (p < 0.001 for all contrasts; Figure 3). A significant main effect of familiarity-level was observed (F(3,33) = 4.08; p < 0.05); however, this was qualified by a significant three-way interaction between familiarity-level, adapting-face expression, and expression-congruency (F(3,33) = 8.42; p < 0.001). As this interaction did not involve the factor of adapting-face identity, it does not indicate a difference in the magnitude of the identity aftereffect across these interacting factors. Rather, it indicates different thresholds for the various morph series about which these aftereffects occur. All other main effects and interactions were not significant. While facial familiarity has been shown to increase the magnitude of identity aftereffects (Jiang, Blanz, & O'Toole, 2007), we observed only a trend toward an interaction between familiarity-level and adapting-face identity (F(3,33) = 2.42; p = 0.08), with personally familiar faces (mean difference score ± SEM: 0.37 ± 0.04) showing larger identity aftereffects than famous familiar (0.22 ± 0.03), named novel (0.30 ± 0.04), or unnamed novel faces (0.28 ± 0.04).
The two key findings of Experiment 2 were, first, the lack of an interaction between adapting-face identity and expression-congruency (F(1,11) = 2.91, p > 0.1), reproducing the finding of Experiment 1 that identity aftereffects are not affected by expression; and second, the lack of a three-way interaction between adapting-face identity, familiarity-level, and expression-congruency (F(3,33) = 1.97, p > 0.1; Figure 3). The latter indicates that the absence of modulation of the identity aftereffect by the congruency of facial expression between adapting and test faces did not vary with the familiarity of the faces involved.
Figure 3. Experiment 2. (A) Mean response scores (± SEM) are presented, with significant differences indicated by asterisks. (B) The mean difference in response scores (a quantitative index of the aftereffect) is also presented for each experimental condition. Significant identity aftereffects are elicited in each experimental condition. The unnamed novel conditions, using different novel faces and different facial expressions, replicate the results reported for Experiment 1, showing that the identity aftereffect for novel faces is invariant to changes in facial expression. This invariance to facial expression is also demonstrated in the three other experimental conditions (named novel, famous familiar, and personally familiar) representing different levels of facial familiarity. The magnitude of the identity aftereffect is not modulated by the familiarity of the faces used.
Comment
Despite the use of different stimuli, different expression pairs (angry–afraid versus happy–neutral), and different arrangements (randomly mixed versus blocked) of expression-congruent and expression-incongruent trials, the unnamed novel face condition of Experiment 2 replicated the results of Experiment 1, with perceptual aftereffects of a similar magnitude of around 20–25%. This reinforces the conclusion that the identity aftereffect with novel faces is not reduced when facial expression is changed in the adapting stimuli. The failure of expression to modify the identity aftereffect was reproduced in all four levels of facial familiarity, suggesting further that identity representations are expression-invariant at all levels of facial familiarity. 
When contrasted with the results of our previous study (Fox & Barton, 2007), these findings suggest an interesting asymmetry between representations of facial identity and expression: While expression aftereffects are reduced when identity differs between adapting and test stimuli, suggesting some dependence on identity, identity aftereffects are not affected when expression differs, suggesting complete expression invariance. 
What accounts for this difference? One possibility to consider is the following. It may be that in a representational “face space,” adaptation of the neural representation for a specific face also causes some partial adaptation of faces that are highly similar and share many characteristics with that adapted face. The question then is whether two images of different expressions in the same person are more similar than two images of different people with the same expression. If so, this might account for why changing expression does not reduce identity aftereffects while changing identity does reduce expression aftereffects. We performed Experiment 3 to determine if faces differing in expression but not identity were more similar than faces differing in identity but not expression. 
Experiment 3
Aftereffects in general are modulated by the similarity between the adapting stimulus and the test stimulus. For example, in the classical size aftereffect, after adapting to a grating of medium spatial frequency, a higher-frequency test grating is perceived as having an even higher frequency, and a lower-frequency grating as having an even lower one. However, this effect occurs only when the test pattern is within 2 octaves of the adapting frequency on either side; if the test pattern is too dissimilar to the adapting pattern, the aftereffect disappears (Blakemore & Sutton, 1969).
Similarly for face adaptation, one would expect that aftereffects would disappear or become reduced if the adapting and test faces are too dissimilar. Our previous study of the expression aftereffect does in fact show this pattern (Fox & Barton, 2007); the reduced aftereffect seen in the different identity condition may simply be due to increased dissimilarity between adapting and test images. Why then is the magnitude of the identity aftereffect not reduced when adapting and test faces have different expressions compared to when they have the same expression? Are the physical or perceived changes in the same face displaying two different expressions too small to have an effect on adaptation? 
We explored this possibility using two parallel routes: first by estimating the perceptual distances between face pairs, and second by estimating the physical distances between them. We measured discrimination thresholds for human observers as an indicator of perceptual distances. We compared the contrast thresholds for discriminating pairs of faces (same identity) showing two different expressions (expression-set) to the thresholds for discriminating pairs of faces (same expression) of two different individuals (identity-set). To estimate physical distances between face pairs, we measured the discrimination thresholds of an ideal observer using the same sets of stimuli.
Methods
Subjects
Two subjects (CJF and IO) participated in Experiment 3 (1 female; age = 30 ± 2.8 years). Both subjects were experienced psychophysical observers with normal or corrected-to-normal vision. 
Stimuli
Each stimulus set consisted of 12 image pairs. Importantly, the images comprising the expression-set were the adapting stimuli used in Experiments 1 and 2 of the present study, and the images comprising the identity-set were the adapting stimuli used in our previous study (Fox & Barton, 2007). Image pairs were not the two endpoints of a particular morph series but were corresponding endpoints of two different morph series. For example, the images used in Experiment 1 were paired as follows: F01/Angry-A with F01/Afraid-A, F22/Angry-A with F22/Afraid-A, F01/Angry-B with F01/Afraid-B, and F22/Angry-B with F22/Afraid-B. In this way, we were able to estimate the similarity between images used in the congruent and incongruent conditions and thereby determine whether similarity could explain the difference between the aftereffects seen in those conditions.
Image pairs in the expression-set showed one individual displaying two expressions, either (a) an angry and an afraid expression (as described in Experiment 1) or (b) a neutral and a happy expression (as described in Experiment 2). Image pairs in the identity-set showed two individuals displaying the same expression, that is, fear, anger, disgust, happiness, sadness, or surprise (as described by Fox & Barton, 2007). 
All stimuli were 512 × 512 pixels, corresponding to 8.5° × 8.5° of visual angle at the viewing distance of 107 cm. The faces were seen through an oval mask measuring 254 × 360 pixels along its central axes; the face width was thus approximately 4.2°.
Stimuli were generated using Matlab 7.0, Adobe Photoshop 6.0, and Adobe Illustrator 10 as follows. Digital images of the face stimuli were first converted to grayscale. Then the luminance values were scaled to a range of 0–1. An oval mask was overlaid on the face images, and the luminance value outside the oval was set to 0.5 (mid-gray). The luminance of the face image, seen through the mask, was normalized to have mean luminance of 0.5 (mid-gray) and standard deviation of 1, such that all face images had equal starting contrast. 
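A minimal sketch of this normalization pipeline, assuming an 8-bit grayscale input and a boolean oval mask; the rescaling of the standardized face back into the display range (base_contrast) is our assumption, since the text specifies only the mid-gray mean and unit standard deviation in normalized units:

```python
import numpy as np

def normalize_face(img_u8, oval_mask, base_contrast=0.1):
    """img_u8: grayscale face photograph (uint8 array).
    oval_mask: boolean array, True inside the oval aperture.
    Returns an image with a mid-gray (0.5) surround and a face region
    standardized so that all faces start at the same contrast."""
    img = img_u8.astype(float) / 255.0        # scale luminance to the range 0-1
    out = np.full_like(img, 0.5)              # mid-gray outside the oval
    face = img[oval_mask]
    face = (face - face.mean()) / face.std()  # zero mean, unit standard deviation
    # Re-center on mid-gray at an assumed base contrast (our choice of 0.1).
    out[oval_mask] = 0.5 + base_contrast * face
    return np.clip(out, 0.0, 1.0)
```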
Apparatus
The experiment was run on a computer equipped with a Cambridge Research Systems VSG 2/3 36 MB frame buffer. Stimuli were displayed on a Sony Trinitron 17-in. monitor (model GDM-200PS) at 1024 × 768 resolution. Stimulus luminance values were linearized with a Cambridge Research Systems OptiCAL photometer (Model OP200-E) via software that generates and saves a gamma-correction look-up table. Mean luminance was 40 cd/m². The viewing distance was 107 cm.
Procedure
On each trial, the subject first viewed a 500 ms fixation cross and then one of two possible face images for 150 ms. This was followed by a choice screen showing the two possible images, which was displayed until the subject completed the two-alternative forced-choice task. Subjects indicated their response with a key press. Feedback was provided in the form of a single click for a correct response and a double click for an incorrect response. Trials were blocked, with one image pair tested within each block. The order of the 24 blocks, corresponding to the 24 face pairs, was randomized for each subject. 
The experimental procedure was coded in Matlab 7.0 using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997) and the CRS VSG Toolbox for Matlab. Discrimination thresholds at 82% correct were measured with two interleaved staircases that lasted 40 trials each, using the Quest procedure (Watson & Pelli, 1983). 
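For readers unfamiliar with QUEST, the following is a minimal QUEST-style sketch rather than the Psychophysics Toolbox implementation: a posterior over log10 threshold is updated after every trial, the two interleaved staircases are alternated, and each trial is placed at the current posterior mode. With the standard Weibull parameters below (beta = 3.5, gamma = 0.5 for 2AFC, delta = 0.01), the procedure converges near the ~82%-correct point; the simulated observer and all numerical values are illustrative assumptions.

```python
import numpy as np

def p_correct(x, threshold, beta=3.5, gamma=0.5, delta=0.01):
    """Weibull psychometric function for 2AFC (x, threshold in log10 contrast)."""
    p = 1.0 - (1.0 - gamma) * np.exp(-10.0 ** (beta * (x - threshold)))
    return delta * gamma + (1.0 - delta) * p

class Quest:
    """Bayesian staircase: track a log-posterior over candidate thresholds."""
    def __init__(self, guess=-1.0, guess_sd=1.0):
        self.grid = np.linspace(-4.0, 0.0, 400)
        self.log_post = -0.5 * ((self.grid - guess) / guess_sd) ** 2  # Gaussian prior

    def next_intensity(self):
        return self.grid[np.argmax(self.log_post)]  # test at the posterior mode

    def update(self, x, correct):
        p = p_correct(x, self.grid)
        self.log_post += np.log(p if correct else 1.0 - p)

# Two interleaved 40-trial staircases run against a simulated observer.
rng = np.random.default_rng(0)
true_threshold = -1.5  # log10 contrast of the simulated observer
staircases = [Quest(), Quest()]
for trial in range(80):
    q = staircases[trial % 2]  # alternate between the two staircases
    x = q.next_intensity()
    q.update(x, rng.random() < p_correct(x, true_threshold))

# Final estimate: average of the two staircase estimates (80 trials total).
print("threshold estimate:", np.mean([q.next_intensity() for q in staircases]))
```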
Analysis
The discrimination threshold estimates for each face pair were obtained by averaging the individual threshold estimates from the two interleaved staircases (i.e., total of 80 trials per threshold). The discrimination thresholds were then submitted to a repeated measures ANOVA with stimulus set (expression-set, identity-set) as a within subjects factor. 
Ideal observer
We ran an ideal observer simulation of this two-alternative forced-choice discrimination using the same sets of stimuli on which the human observers were tested. On each trial, one of the two possible face images, F_1 or F_2, was chosen at random as the target stimulus S, and zero-mean, unit-variance Gaussian white noise N was added to this image at the appropriate contrast c: S = F_{i,c} + N, i = 1 or 2, where F_{i,c} denotes face image i at contrast c. The contrast on each trial was determined by the staircase procedure. The noise variance was arbitrarily set to 1, as we were not looking for a specific threshold level but rather for any difference between the thresholds for the expression and identity sets. The target stimulus contrast on each trial, as well as the statistics of the noise, was available to the ideal observer. The response of the ideal observer was based on a minimum-distance rule: choose the i that minimizes Σ(S − F_{i,c})². This is equivalent to Bayesian maximum a posteriori classification, as both face images were selected as the target with equal probability (Tjan, Braje, Legge, & Kersten, 1995).
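A sketch of one simulated trial under the stated assumptions (unit-variance white noise, equal priors); function and variable names are ours, and the random "faces" in the usage example stand in for the contrast-normalized stimuli:

```python
import numpy as np

def ideal_observer_trial(f1, f2, contrast, rng):
    """One 2AFC trial. f1, f2: zero-mean, contrast-normalized face images.
    The target is drawn at random, scaled to 'contrast', and embedded in
    N(0, 1) white noise; the observer applies the minimum-distance rule,
    which equals MAP classification under equal priors (Tjan et al., 1995)."""
    templates = [contrast * f1, contrast * f2]
    i = rng.integers(2)                               # pick the target at random
    s = templates[i] + rng.standard_normal(f1.shape)  # noisy stimulus
    d = [np.sum((s - t) ** 2) for t in templates]     # squared distances
    return int(np.argmin(d) == i)                     # 1 if the choice was correct

# Illustrative use: proportion correct at one contrast for two random "faces".
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((2, 64, 64))
print(np.mean([ideal_observer_trial(f1, f2, 0.02, rng) for _ in range(1000)]))
```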
Results
The repeated measures ANOVA on the discrimination thresholds obtained from the two human observers showed no difference between the identity-set and the expression-set (F(1,1) = 3.61, p > 0.3; Figure 4). Thus, on a perceptual level, changes in expression were not harder to discriminate than changes in identity. Similarly, a one-way ANOVA on the discrimination thresholds obtained from the ideal observer showed no difference between the identity-set and the expression-set (F(1,22) = 2.71, p > 0.35; Figure 4). Image pairs that differ in identity but not expression are as physically dissimilar as image pairs that differ in expression but not identity.
Figure 4. Results from Experiment 3. Mean discrimination thresholds were calculated by averaging the thresholds obtained from the 12 identity-set pairs and the 12 expression-set pairs separately. These mean discrimination thresholds (± SEM) are plotted for the two human observers and the ideal observer.
Comment
For the stimuli used in this and our prior report (Fox & Barton, 2007), the differences between images of the same expression in different people were no greater than the differences between images of different expressions in the same individual, in either physical terms (as measured by the ideal observer) or perceptual terms (as measured in the two human subjects). These results therefore do not support the proposal that images differing in expression but not identity are perceptually closer in face space than images differing in identity but not expression, a proposal that might otherwise have offered a simple explanation of the difference in the degree of invariance of identity versus expression aftereffects.
General discussion
The results of Experiments 1 and 2 are consistent: Adaptation to identity transfers fully over changes in expression, regardless of the level of familiarity with the person depicted. This contrasts with our previous work, which showed larger expression aftereffects when adapting and test faces had congruent identities than when adapting and test faces had incongruent identities (Fox & Barton, 2007). While the results for expression adaptation suggested a possible hierarchical construction of expression representations, with identity-dependent representations feeding into more abstract identity-invariant representations of expression (Fox & Barton, 2007), the current results for identity aftereffects suggest that adaptation for identity occurs primarily if not solely in an expression-invariant representation. 
This suggests an asymmetric construction of identity and expression representations within the human visual system ( Figure 5). Such an asymmetry in the relationship between identity and expression has also been demonstrated through earlier work using Garner's interference task; irrelevant changes in facial identity strongly retard the speed of facial expression discriminations, while irrelevant changes in facial expression do not affect the speed of facial identity discriminations (Baudouin, Martin, Tiberghien, Verlut, & Franck, 2002; Schweinberger, Burton, & Kelly, 1999; Schweinberger & Soukup, 1998). Other experiments examining the interaction between face gender (another temporally invariant feature of faces) and face expression showed similar asymmetry; gender interfered with an expression discrimination task, but expression did not interfere with a gender discrimination task (Atkinson, Tipples, Burt, & Young, 2005). Furthermore, adaptation studies using functional magnetic resonance imaging have shown that the fusiform face area, postulated to be a key cortical region in the processing of identity, is sensitive to changes in facial identity but not expression, whereas the superior temporal sulcus, postulated to be a key cortical region in the processing of expression, is sensitive to changes in both facial identity and facial expression (Winston et al., 2004). 
Figure 5. A schematic summary of results for the identity and expression aftereffects (Fox & Barton, 2007). The different pattern of results found in these two studies, using very similar methodologies, suggests an asymmetric construction within neural representations associated with face perception. The results are consistent with neural representations of expression that show both identity-dependent and identity-invariant representations (Fox & Barton, 2007), while the current data provide evidence for only an expression-invariant neural representation of identity.
What might generate such an asymmetry in aftereffects? One potential explanation could be related to the degree of similarity between congruent and incongruent images. One might expect that two images of different people with the same expression would be more dissimilar than two images of the same person with different expressions. Since adaptation to one face reduces responses to other nearby representations in face space (Anderson & Wilson, 2005; Loffler, Yourganov, Wilkinson, & Wilson, 2005), it may be that identity aftereffects generalize more across expression changes, which may be closer together in face space, than expression aftereffects generalize across identity changes, which may be farther apart in face space. However, in Experiment 3, we found no support for this. Both human and ideal observers showed no difference in the contrast thresholds for discriminating between facial expressions in the same person and for discriminating between different facial identities with the same expression. Thus, there is no corresponding asymmetry in either the perceptual or physical similarity of faces differing in identity versus expression to account for the asymmetry in the dependency of aftereffects. 
Beyond perceptual and physical similarity, one may speculate upon other reasons for such an asymmetry in the relationship between expression and identity. Expressions and identity may differ in the range of representations involved. While the human visual system encodes thousands of facial identities, some have argued that the many subtle variations of expression can be reduced to a small, finite number of categories (Ekman & Friesen, 1971; Ekman, Sorenson, & Friesen, 1969). A dependent layer of representation may be more likely if the range of modulating inputs is large than if it is small. 
Behavioral reasons for this asymmetry can also be advanced. It is important that the perception of face identity is impervious to changes in facial expression, so that one can continue to recognize an individual regardless of their emotional state. However, accurate perception of emotional state may require modification of expression perception by the individual's identity. The "structural reference theory" of Ganel and Goshen-Gottstein (2004) proposes that certain faces have structural properties that bias towards certain facial expressions. Learning the structure of these faces leads to compensatory modifications of judgments about the individual's emotional state. In support of this theory, changes in facial configuration have been shown to influence the perception of facial expression (Martinez & Neth, 2007). Therefore, precise perception of facial expression may require referencing to identity-dependent representations of expression, in addition to generalizations made possible by identity-invariant representations.
A modulation of interactions between facial identity and expression by familiarity had been suggested by two earlier studies (Ganel & Goshen-Gottstein, 2004; Kaufmann & Schweinberger, 2004). Using Garner's interference task, one study replicated the finding that, with novel faces, irrelevant changes in expression had no effect on the speed of identity discriminations, while irrelevant changes in identity slowed expression discriminations (Ganel & Goshen-Gottstein, 2004). However, irrelevant changes in expression did increase reaction times for identity discrimination when famous faces were used (Ganel & Goshen-Gottstein, 2004). Inspection of their data, though, shows that the interference remained asymmetric, with smaller interference effects for expression changes during identity discrimination than for identity changes during expression discrimination (Ganel & Goshen-Gottstein, 2004). The second study measured reaction times during an identification task (Kaufmann & Schweinberger, 2004). Images of celebrities were identified more rapidly when they displayed a slightly happy expression, but this effect of expression was not observed with faces seen only in the context of the experiment (Kaufmann & Schweinberger, 2004). The authors suggested that representations of celebrity identities may have an attached stereotypical expression (Kaufmann & Schweinberger, 2004), in effect an expression-dependent representation of identity. Our studies fail to show any significant impact of familiarity on adaptation. It may be that the adaptation methods we employ probe slightly different physiologic events than those probed by interference or recognition paradigms. For example, while interference may stem from interactions between representations in the visual system, adaptation effects may probe the variance of those representations.
In summary, our experiments demonstrate expression invariance of the identity aftereffect, regardless of the level of the observer's familiarity with the faces used, and suggest that the neural representations underlying the perception of the identities of both novel and famous faces are expression independent. This contrasts with our earlier work using a similar adaptation paradigm, which provided evidence consistent with both identity-dependent and identity-independent representations of facial expression (Fox & Barton, 2007). Together, these data suggest an asymmetric construction of identity and expression representations. Expression-invariant representations of identity can be achieved in some perceptual models (Bronstein, Bronstein, & Kimmel, 2007), and our results may point to important ways in which the encoded representations of expression and identity differ in the human visual system. 
Acknowledgments
CJF was supported by a Canadian Institutes of Health Research Canada Graduate Scholarship Doctoral Research Award and a Michael Smith Foundation for Health Research Senior Graduate Studentship. JJSB was supported by a Canada Research Chair and a Senior Scholarship from the Michael Smith Foundation for Health Research. This work was supported by operating grants from the Canadian Institutes of Health Research (MOP-77615) and the National Institute of Mental Health (1R01 MH069898).
Portions of this work were first presented at the Annual Meeting of the Vision Sciences Society, Sarasota, Florida, May 2007.
Commercial relationships: none. 
Corresponding author: Christopher J. Fox. 
Email: cjfox@interchange.ubc.ca. 
Address: 2550 Willow St., Ophthalmology Research, 3rd Floor, Vancouver, BC V5Z 3N9 Canada. 
References
Anderson, N. D., & Wilson, H. R. (2005). The nature of synthetic face adaptation. Vision Research, 45, 1815–1828.
Atkinson, A. P., Tipples, J., Burt, D. M., & Young, A. W. (2005). Asymmetric interference between sex and emotion in face perception. Perception & Psychophysics, 67, 1199–1213.
Baudouin, J. Y., Martin, F., Tiberghien, G., Verlut, I., & Franck, N. (2002). Selective attention to facial emotion and identity in schizophrenia. Neuropsychologia, 40, 503–511.
Blakemore, C., & Sutton, P. (1969). Size adaptation: A new aftereffect. Science, 166, 245–247.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Bronstein, A. M., Bronstein, M. M., & Kimmel, R. (2007). Expression-invariant representations of faces. IEEE Transactions on Image Processing, 16, 188–197.
Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327.
Butler, A., Oruc, I., Fox, C., & Barton, J. (2008). Factors contributing to the adaptation aftereffects of facial expression. Brain Research, 1191, 116–126.
Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6, 641–651.
de Gelder, B., Frissen, I., Barton, J., & Hadjikhani, N. (2003). A modulatory role for facial expressions in prosopagnosia. Proceedings of the National Academy of Sciences of the United States of America, 100, 13105–13110.
Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17, 124–129.
Ekman, P., Sorenson, E. R., & Friesen, W. V. (1969). Pan-cultural elements in facial displays of emotion. Science, 164, 86–88.
Fox, C. J., & Barton, J. J. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Research, 1127, 80–89.
Ganel, T., & Goshen-Gottstein, Y. (2004). Effects of familiarity on the perceptual integrality of the identity and expression of faces: The parallel-route hypothesis revisited. Journal of Experimental Psychology: Human Perception and Performance, 30, 583–597.
Ganel, T., Valyear, K. F., Goshen-Gottstein, Y., & Goodale, M. A. (2005). The involvement of the “fusiform face area” in processing facial expression. Neuropsychologia, 43, 1645–1654.
Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233.
Herzmann, G., Schweinberger, S. R., Sommer, W., & Jentzsch, I. (2004). What's special about personally familiar faces? A multimodal approach. Psychophysiology, 41, 688–701.
Humphreys, K., Avidan, G., & Behrmann, M. (2007). A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia. Experimental Brain Research, 176, 356–373.
Jiang, F., Blanz, V., & O'Toole, A. J. (2007). The role of familiarity in three-dimensional view-transferability of face identity adaptation. Vision Research, 47, 525–531.
Kaufmann, J. M., & Schweinberger, S. R. (2004). Expression influences the recognition of familiar faces. Perception, 33, 399–408.
Kloth, N., Dobel, C., Schweinberger, S. R., Zwitserlood, P., Bölte, J., & Junghöfer, M. (2006). Effects of personal familiarity on early neuromagnetic correlates of face perception. European Journal of Neuroscience, 24, 3317–3321.
Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94.
Leopold, D. A., Rhodes, G., Muller, K. M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society of London B: Biological Sciences, 272, 897–904.
Loffler, G., Yourganov, G., Wilkinson, F., & Wilson, H. R. (2005). fMRI evidence for the neural representation of faces. Nature Neuroscience, 8, 1386–1390.
Lundqvist, D., & Litton, J. E. (1998). The Averaged Karolinska Directed Emotional Faces—AKDEF.
Martinez, A., & Neth, D. (2007). Face configuration biases the perception of facial expressions [Abstract]. Journal of Vision, 7(9):943.
Palermo, R., & Rhodes, G. (2007). Are you always on my mind? A review of how face perception and attention interact. Neuropsychologia, 45, 75–92.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Rosen, E. (2003). Face representation in cortex: Studies using a simple and not so special model.
Schweinberger, S. R., Burton, A. M., & Kelly, S. W. (1999). Asymmetric dependencies in perceiving identity and emotion: Experiments with morphed faces. Perception & Psychophysics, 61, 1102–1115.
Schweinberger, S. R., & Soukup, G. R. (1998). Asymmetric relationships among perceptions of facial identity, emotion, and facial speech. Journal of Experimental Psychology: Human Perception and Performance, 24, 1748–1765.
Stephan, B. C., Breen, N., & Caine, D. (2006). The recognition of emotional expression in prosopagnosia: Decoding whole and part faces. Journal of the International Neuropsychological Society, 12, 884–895.
Tjan, B. S., Braje, W. L., Legge, G. E., & Kersten, D. (1995). Human efficiency for recognizing 3-D objects in luminance noise. Vision Research, 35, 3053–3069.
Watson, A. B., & Pelli, D. G. (1983). QUEST: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33, 113–120.
Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004). Adaptation to natural facial categories. Nature, 428, 557–561.
Winston, J. S., Henson, R. N., Fine-Goulden, M. R., & Dolan, R. J. (2004). fMRI-adaptation reveals dissociable neural representations of identity and expression in face perception. Journal of Neurophysiology, 92, 1830–1839.
Figure 1
 
An example of one experimental trial. The images shown are taken from the famous familiar congruent-expression condition in Experiment 2. Each trial began with a 5 s presentation of an adapting stimulus, followed by a brief mask (50 ms) to disrupt any apparent-motion effects. An identity-ambiguous test stimulus was then presented for 300 ms, followed by one of two possible choice screens, on which the subject chose the identity that most closely resembled the test stimulus just viewed. The different pairings of adapting and test stimuli created the various experimental conditions.
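For readers who wish to reproduce the paradigm, the following is a minimal sketch of this trial sequence in plain Python, with console stand-ins for the display and response routines. The experiments themselves were run with the MATLAB Psychophysics Toolbox (Brainard, 1997; Pelli, 1997); the event labels and helper functions below are illustrative placeholders, not the authors' code.

import time

# Trial events and durations (in seconds) as described in Figure 1;
# None marks an event that remains on screen until the subject responds.
TRIAL_EVENTS = [
    ("adapting face", 5.000),        # prolonged adaptation period
    ("mask", 0.050),                 # brief mask to disrupt apparent motion
    ("ambiguous test face", 0.300),  # identity-ambiguous probe
    ("choice screen", None),         # two-alternative identity judgment
]

def run_trial(present, collect_response):
    # `present` and `collect_response` are placeholders for real
    # stimulus-display and keyboard routines.
    for event, duration in TRIAL_EVENTS:
        present(event)
        if duration is None:
            return collect_response()
        time.sleep(duration)

# Dry run with console stand-ins for display and response collection:
response = run_trial(print, lambda: input("F01 or F22? "))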
Figure 2
 
Experiment 1. (A) Mean response scores (± SEM) are presented, with significant differences indicated by asterisks. Significant differences in response score following adaptation to F01 versus adaptation to F22 represent a significant identity aftereffect for that experimental condition. (B) The mean difference in response scores (a quantitative index of the aftereffect) is presented for each experimental condition. Identity aftereffects are found for all three experimental conditions: They are not affected by a change in the image used for the adapting stimulus, even if the expression in the adapting stimulus is no longer congruent with that of the test stimuli. This suggests that, for novel faces, the identity aftereffect is not image specific and is also invariant across changes in facial expression.
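As a concrete illustration of how the index in panel B derives from panel A: suppose each subject's response score is the proportion of trials on which the ambiguous test face was judged to be F22 (the exact scoring scheme is not restated in this excerpt, so this, like the numbers below, is an assumption for illustration). Adapting to F01 biases judgments toward F22 and vice versa, and the per-subject difference between the two adaptation conditions is the aftereffect magnitude.

import numpy as np

# Hypothetical per-subject response scores (proportion of "F22" judgments)
# after adapting to each face; real values appear in panel A.
score_after_F01 = np.array([0.71, 0.66, 0.74, 0.69])
score_after_F22 = np.array([0.32, 0.41, 0.35, 0.38])

aftereffect = score_after_F01 - score_after_F22  # per-subject index (panel B)
mean = aftereffect.mean()
sem = aftereffect.std(ddof=1) / np.sqrt(aftereffect.size)
print(f"identity aftereffect: {mean:.2f} +/- {sem:.2f} (SEM)")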
Figure 3
 
Experiment 2. (A) Mean response scores (± SEM) are presented, with significant differences indicated by asterisks. (B) The mean difference in response scores (a quantitative index of the aftereffect) is also presented for each experimental condition. Significant identity aftereffects are elicited in each experimental condition. The unnamed novel conditions, using different novel faces and different facial expressions, replicate the results reported for Experiment 1, showing that the identity aftereffect for novel faces is invariant to changes in facial expression. This invariance to facial expression is also demonstrated in the three other experimental conditions (named novel, famous familiar, and personally familiar) representing different levels of facial familiarity. The magnitude of the identity aftereffect is not modulated by the familiarity of the faces used.
Figure 4
 
Results from Experiment 3. Mean discrimination thresholds were calculated by averaging the thresholds obtained from the 12 identity-set pairs and 12 expression-set pairs separately. These mean discrimination thresholds (± SEM) are plotted for the two human observers and the ideal observer.
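The ideal observer for this kind of task (cf. Tjan, Braje, Legge, & Kersten, 1995) knows both face images exactly and, on each trial, chooses the template with the smaller summed squared difference from the noisy stimulus. The sketch below estimates its discrimination threshold for one image pair by Monte Carlo simulation; the image size, noise level, accuracy criterion, and grid search are illustrative assumptions, and an adaptive staircase such as QUEST (Watson & Pelli, 1983) would be the standard choice in practice.

import numpy as np

rng = np.random.default_rng(0)

def ideal_percent_correct(face_a, face_b, contrast, noise_sd, n_trials=2000):
    # One of the two known templates is shown at the given contrast in
    # Gaussian white luminance noise; the ideal decision rule is minimum
    # squared distance to a template.
    templates = [contrast * face_a, contrast * face_b]
    correct = 0
    for _ in range(n_trials):
        true_idx = rng.integers(2)
        stimulus = templates[true_idx] + rng.normal(0.0, noise_sd, face_a.shape)
        errors = [np.sum((stimulus - t) ** 2) for t in templates]
        correct += int(np.argmin(errors) == true_idx)
    return correct / n_trials

def contrast_threshold(face_a, face_b, noise_sd, criterion=0.75):
    # Coarse grid search for the lowest contrast reaching criterion accuracy.
    for contrast in np.linspace(0.01, 1.0, 100):
        if ideal_percent_correct(face_a, face_b, contrast, noise_sd) >= criterion:
            return contrast
    return float("nan")

# Illustrative stand-ins for one image pair; real stimuli would be the
# grayscale faces from the identity and expression sets.
face_a = rng.normal(size=(64, 64))
face_b = rng.normal(size=(64, 64))
print(contrast_threshold(face_a, face_b, noise_sd=5.0))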
Figure 5
 
A schematic summary of results for the identity and expression aftereffects (Fox & Barton, 2007). The different pattern of results found in these two studies, using very similar methodologies, suggests an asymmetric construction within neural representations associated with face perception. The results are consistent with neural representations of expression that show both identity-dependent and identity-invariant representations (Fox & Barton, 2007), while the current data provide evidence for only an expression-invariant neural representation of identity.