All images were drawn from the Karolinska Directed Emotional Faces (KDEF) database (Lundqvist, Flykt, & Öhman,
1998). Two male identities (labeled M1 and M2 here) and two female identities (labeled F1 and F2) were selected for use as the test (adapter and probe) identities, on the basis that the expressions they displayed were free from acute idiosyncrasies. To produce the prototypical expression transformations that were used to reduce idiosyncratic variations in expression production, 25 male and 25 female identities were selected from the KDEF database, each displaying expressions of the six basic emotions (happy, sad, fear, anger, surprise, and disgust) together with a neutral expression. For each expression, the 50 exemplars were averaged using the PsychoMorph application (Tiddeman, Burt, & Perrett,
2001) to produce a prototypical version of that expression.
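Conceptually, prototyping amounts to averaging shape (landmark positions) and, once the images are warped to a common shape, texture (pixel intensities) across exemplars. The sketch below is a minimal illustration of that idea; the function and variable names are ours, and the actual averaging was performed in PsychoMorph, whose pipeline also handles the shape normalization and texture blending.

```python
import numpy as np

def make_prototype(landmarks, shape_normalised_images):
    """Minimal sketch of expression prototyping (names illustrative).

    landmarks               : (n_faces, n_points, 2) fiducial coordinates
    shape_normalised_images : (n_faces, H, W) images already warped to a
                              common mean shape

    PsychoMorph (Tiddeman et al., 2001) performs the warping and a more
    sophisticated texture blend; this shows only the core averaging step.
    """
    mean_shape = landmarks.mean(axis=0)                  # average landmark positions
    mean_texture = shape_normalised_images.mean(axis=0)  # average pixel intensities
    return mean_shape, mean_texture
```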
To make an overall prototype, one approach would have been to average only neutral expressions. However, the use of neutral expressions in this context could reasonably be questioned, given evidence that neutral expressions contain affective content (Kesler-West et al.,
2001; Lee, Kang, Park, Kim, & An,
2008) and can form expressions in their own right (Shah & Lewis,
2003). More importantly, at least for present purposes, the overall prototype produced from neutral expressions has certain feature characteristics (e.g., a closed mouth) that introduce distortions during the morphing procedure used to construct anti-expressions. These distortions are visible in the anti-expressions at strengths far lower than those required here. We therefore used an alternative approach to make the overall prototype, averaging across all expressions. This produced an overall prototype without strong, readily identifiable affective content (see Figure S1 in the
Supplementary materials), with the benefit that it enabled us to construct anti-expression stimuli at the higher strengths required for the current study. Note that we have used this approach previously (Skinner & Benton,
2010) and a similar approach has also been used by Cook, Matei, and Johnston (
2011).
The difference between the overall prototype and a prototypical expression was essentially a transformation that described how the facial features were deformed to produce that expression. Using PsychoMorph, these transformations could be applied to any identity to produce an image of that identity displaying a prototypical version of that expression (see
Figure 1). To make images of the four test identities display prototypical expressions, we first averaged all of the expressions for each identity to make four identity prototypes. We then applied each of the six prototypical expression transformations to each identity prototype using PsychoMorph. The veridical and prototypical expressions are shown together in
Figure 2.
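In shape terms, the transformation can be thought of as the landmark displacement between a prototypical expression and the overall prototype, added to the landmarks of an identity prototype. The sketch below makes this explicit; the names are illustrative, and the real operation was carried out over both shape and texture in PsychoMorph.

```python
import numpy as np

def apply_expression_transform(identity_proto, expr_proto, overall_proto):
    """Transfer a prototypical expression onto an identity prototype.

    All arguments are (n_points, 2) landmark arrays (illustrative names).
    The transformation is the displacement of the prototypical expression
    from the overall prototype; adding it to an identity prototype yields
    that identity displaying a prototypical version of the expression.
    """
    transform = expr_proto - overall_proto  # how features deform for this expression
    return identity_proto + transform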
To produce anti-expressions, the shape and texture from a prototypical expression for an identity were morphed (Tiddeman et al.,
2001) along the trajectory connecting that face with the relevant identity prototype, continuing through the prototype and out the other side. The strength of an anti-expression was set by how far the morph continued along the trajectory beyond the identity prototype. Morphing the face beyond the identity prototype by a distance equal to that between the prototypical expression and the identity prototype produced an anti-expression with a designated strength of 100%; morphing to only half that distance produced an anti-expression at 50% strength (see
Figure 3).
The identity prototype, prototypical expressions, and anti-expressions for each test identity are shown in
Figure 4. All stimuli were converted to grayscale, and the edges of each face were blurred so that they faded into the mean-luminance background.
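As a rough illustration of this final step, the sketch below converts an image to grayscale and fades the face edges toward mean luminance using a softened mask. The function name, the Gaussian edge profile, and the choice of mask and sigma are all our assumptions, not details reported for the original stimuli.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def grey_with_faded_edges(rgb, face_mask, sigma_px=10):
    """Grayscale conversion with edges blended toward mean luminance.

    rgb       : (H, W, 3) float image in [0, 1]
    face_mask : (H, W) binary mask, 1 inside the face region
    sigma_px  : width of the edge fade (an assumed parameter)
    """
    grey = rgb @ np.array([0.299, 0.587, 0.114])               # standard luma weights
    soft = gaussian_filter(face_mask.astype(float), sigma_px)  # soften the mask edge
    mean_lum = grey[face_mask.astype(bool)].mean()             # level the edges fade to
    return soft * grey + (1.0 - soft) * mean_lum
```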
To ensure that the prototypical expressions produced in this manner effectively conveyed the expected signals of emotion, we used a categorization task to measure the perceived affect of each prototypical expression for each identity. The results of this task (included in the
Supplementary materials) show that participants categorized the prototypical expressions with the expected emotions at levels consistent with natural expressions (Calvo & Lundqvist,
2008). In the same task, participants also categorized the prototypical anti-expressions. The results for these are also included in the
Supplementary materials but are less indicative of the naturalistic quality of these faces, as there is no expectation that natural-looking anti-expressions will convey predictable, clear patterns of affect. Inspection of the prototypical anti-expressions in
Figure 4 reveals that some have a rather unnatural appearance: the anti-expressions of happy, for example, have narrow, down-turned mouth shapes. The less-than-natural appearance of certain prototypical anti-expressions does not, however, necessarily mean that using these stimuli in adaptation tasks limits the applicability of results to actual faces. Indeed, many of the adapting faces used in classic face adaptation studies have been far from natural in appearance; for example, consider those used in studies of the face distortion aftereffect (e.g., Webster & MacLin,
1999). Furthermore, the prototypical anti-expressions shown in
Figure 4 are the greatest strength anti-expressions used in this study (100%). As the strength of the anti-expressions is reduced to 50% and then 25%, their appearance becomes increasingly more natural (see
Figure 5 for examples), yet they are still effective at producing robust aftereffects (see
Results section below).