Relative faces: Encoding of family resemblance relative to gender means in face space
Journal of Vision October 2011, Vol.11, 8. doi:10.1167/11.12.8
Harry John Griffin, Peter William McOwan, Alan Johnston; Relative faces: Encoding of family resemblance relative to gender means in face space. Journal of Vision 2011;11(12):8. doi: 10.1167/11.12.8.

Neurophysiological (W. A. Freiwald, D. Y. Tsao, & M. S. Livingstone, 2009; D. A. Leopold, I. V. Bondar, & M. A. Giese, 2006) and psychophysical (D. A. Leopold, A. J. O'Toole, T. Vetter, & V. Blanz, 2001; G. Rhodes & L. Jeffery, 2006; R. Robbins, E. McKone, & M. Edwards, 2007) evidence suggests that faces are encoded as differences from a mean or prototypical face, consistent with the conceptual framework of a mean-centered face space (T. Valentine, 1991). However, it remains unclear how we encode facial similarity across classes such as gender, age, or race. We synthesized Caucasian male and female cross-gender “siblings” and “anti-siblings” by projecting vectors representing deviations of faces from one gender mean into another gender. Subjects perceived male and female pairings with similar vector deviations from their gender means as more similar, and those with opposite vector deviations as less similar, than randomly selected cross-gender pairings. Agreement in relative direction in a space describing how facial images differ from a mean can therefore provide a basis for perceived facial similarity. We further demonstrate, through the transfer of an identity aftereffect between a face and its cross-gender sibling, that relative coding for male and female faces is based on the activation of a shared neural population. These results imply that, whereas structural similarity may be reflected in the Euclidean distance between points in face space, configural similarity may be coded by direction in face space.

Introduction
Faces provide a key channel for social communication, conveying intrinsic information such as identity, sex, and age, as well as emotional and paralinguistic signals. Faces are thought to be encoded in a multidimensional face space (Valentine, 1991). According to this model, the neural encoding of faces is equivalent to representing faces as points or direction vectors in a multidimensional space, the dimensions of which characterize facial variation. Converging psychophysical (Leopold, O'Toole, Vetter, & Blanz, 2001; Rhodes & Jeffery, 2006; Robbins, McKone, & Edwards, 2007) and neurophysiological (Leopold, Bondar, & Giese, 2006) data support the view that face space is mean- or prototype-centered, with faces encoded as vector deviations from the central tendency. In face space, the identity trajectory of a face is the line that extends through that face from the mean. Faces on this trajectory have specific perceptual relationships to the original face. Faces between the mean (identity strength = 0) and the original face (identity strength = +1) are attenuated (anti-caricatured) versions of the original face. Faces beyond the original face (identity strength > +1) are exaggerations (caricatures) of the original face. Projecting the original face through the mean generates an anti-face (identity strength = −1), which is perceptually opposite to the original face. 
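The identity-trajectory arithmetic above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the toy 3-dimensional vectors stand in for the high-dimensional morph vectors used in the paper.

```python
import numpy as np

def identity_scaled(face, mean, strength):
    """Scale a face along its identity trajectory through the mean.

    strength = +1 returns the original face, 0 the mean,
    0 < strength < 1 an anti-caricature, > 1 a caricature,
    and -1 the anti-face.
    """
    return mean + strength * (face - mean)

# Toy 3-D "face space" (real morph vectors have ~60,000 dimensions).
mean = np.array([0.0, 0.0, 0.0])
face = np.array([2.0, -1.0, 0.5])

anti_face = identity_scaled(face, mean, -1.0)       # perceptual opposite
caricature = identity_scaled(face, mean, 1.5)       # exaggerated identity
anti_caricature = identity_scaled(face, mean, 0.5)  # attenuated identity
```

Every face on the trajectory is fully determined by the mean, the direction of the mean relative vector, and a single scalar identity strength.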
The face space model has provided explanations for why observers are faster at recognizing distinctive than typical faces but slower at classifying them in a face/non-face task (Johnston, Milne, & Williams, 1997). Groups of faces with different characteristics are thought to form distinct clusters in face space. This clustering has been used to explain the other-race effect (also known as the own-race bias or cross-race effect), that is, faces from the observer's own race are recognized more accurately than those from another race (Byatt & Rhodes, 2004; Chance, Turner, & Goldstein, 1982). This pattern of distribution also raises questions about how different groups of faces are encoded, namely, are faces encoded relative to their group mean or to a global mean of all faces? Previous studies have often manipulated face stimuli with reference to a mean, e.g., producing anti-faces or caricaturing, without explanation of how the mean face is chosen. Baudouin and Gallay (2006) raise this issue with regard to different sexes and demonstrate that male and female faces are bimodally distributed on dimensions thought to differentiate male and female faces and, crucially, on a generalized “gender” factor combining these dimensions. They also found that combined sex blended faces were more distinctive than single sex blends, indicating that they were encoded relative to a male or female, rather than a general, mean. The possibility of encoding relative to different group means challenges the concept of a face space with a single universal reference. 
Previous adaptation studies have shown that adapting to the anti-face of a particular identity biases the perceived identity of a neutral face toward the face from which it was derived (Leopold et al., 2001; Rhodes & Jeffery, 2006). Initial results indicated that perceived identity was only shifted by adaptation when the sex of the identity-transformed adapting face was the same as that of the test face (Little, DeBruine, & Jones, 2005). It was suggested that this reflects distinct populations of neurons for coding male and female faces. Identity is dissociable from underlying group visual characteristics in that it transfers between distinct groups within sex, e.g., female and hyper-female, whose structural differences are mathematically equivalent to the differences between male and female groups (Bestelmeyer et al., 2008). However, transfer of identity adaptation between sexes was not observed by Bestelmeyer et al. Adaptation to face distortion (contraction or expansion) has been shown to transfer between sexes, indicating that coding of male and female faces relies at least partially on a shared neural population (Jaquet & Rhodes, 2008), and viewing the exaggerated features of cartoons can adapt human face perception (Chen, Russell, Nakayama, & Livingstone, 2010), indicating that different representations of the face are processed by the same neural system. However, global transformations like distortion, caricaturing, and stylizing can be applied to all faces and other objects and may therefore be relatively independent of facial identity. Differences in identity involve more complex, detailed local changes and have only recently been shown to produce cross-sex aftereffects. Rhodes et al. (2011) showed that within-sex identity aftereffects are larger than cross-sex identity aftereffects, i.e., the perceptual shift is greater if the adapting face is an anti-face created by projection through the same-sex mean than if it is an anti-face created by projection through a generic global mean. This was taken as evidence that sex-specific means are used in coding identity. 
Here, we investigate how encoding relative to multiple means relates to the perceived similarity of faces. We exploit the everyday phenomenon that we are able to perceive a resemblance between faces from different groups whose characteristics are altered globally due to sexual dimorphism, e.g., male and female siblings, or due to age, e.g., parents and infants. For example, perceived similarity and judgments of kinship in sibling pairs are closely linked and are good predictors of whether a pair of individuals are siblings (DeBruine et al., 2009; Maloney & Dal Martello, 2006). This ability is difficult to account for using a model in which faces are represented only as deviations from a fixed, global mean (Valentine, 1991). When encoded relative to a global mean, faces of the same gender will be clustered together in face space according to the characteristics that they share due to sexual dimorphism. Unrelated faces of the same sex may therefore be closer together than related faces from separate groups, e.g., brother and sister. 
To generate our stimulus set, we used a facial morphing technique based on optic flow image registration (Berisha, Johnston, & McOwan, 2010; Johnston, McOwan, & Benton, 1999) to represent male and female faces as vectors that describe shape and texture deviations from a group mean face. Although it has become commonplace to think, in general terms, that individual faces are described within a multidimensional space, whose dimensions encode the way in which faces vary, there is no way, at present, of determining what modes of variation are encoded to form the basis of human perceptual face space. It is important to note therefore that the dimensions of our stimulus face space are determined by the samples chosen and the choices made about how the faces are represented. However, it is likely that perceptual representations should reflect to some degree the constraints and sources of variation in natural images of faces. 
We construct a morph vector by concatenating the vector field required to register a face from a selected frame onto a reference frame along with the image resulting from warping the texture onto the reference frame. After all frames have been processed, the reference is recalculated as the mean of the set of morph vectors (Berisha et al., 2010). We refer to this as the morphed image space and this vector encodes how a facial image differs in shape and texture from the group mean. All our face images can be represented in this space. We can also use principal component analysis (PCA) to establish a reduced dimension stimulus face space for male and female faces. Deriving face space dimensions using PCA is attractive in that it produces an efficient model that encodes maximum image variance using minimal dimensions (Burton, Bruce, & Hancock, 1999; Calder, Burton, Miller, Young, & Akamatsu, 2001). We will refer to the spaces derived from PCA as stimulus face spaces. By projecting the vector difference of faces from their group mean into the stimulus face space of the other gender, we synthesized cross-gender equivalent faces and anti-faces. Subjects judged the similarity of these to their source faces, i.e., the likelihood of them being cross-gender siblings. If similar deviations from group means in the morphed image space give rise to similar appearances, this would suggest that similar deviation from gender-specific means underlies the perception of family resemblance in human vision. In order to establish whether perceived similarity is based on face-specific mechanisms, these judgments were also carried out on inverted faces. We also tested whether perceived similarity could be explained by proximity in the morphed image space or by simple pixelwise similarity of the target images in order to evaluate low-level explanations for perceived family resemblance. 
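The construction of a reduced-dimension stimulus face space via PCA can be sketched as follows. This is an illustrative outline only, assuming the morph vectors are stacked as rows of an array; the function name `pca_basis` and the toy dimensions are ours, not from the paper.

```python
import numpy as np

def pca_basis(face_vectors, n_components=60):
    """Derive a group mean and an orthonormal PCA basis from morph vectors.

    face_vectors: (n_faces, n_dims) array; each row is one morph vector.
    Returns (mean, basis), with basis of shape (n_components, n_dims).
    """
    mean = face_vectors.mean(axis=0)
    centered = face_vectors - mean
    # SVD of the centered data avoids forming the enormous
    # n_dims x n_dims covariance matrix; rows of vt are the components,
    # ordered by the amount of variance they account for.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

rng = np.random.default_rng(0)
faces = rng.normal(size=(98, 500))  # toy stand-in for 98 x 60,000 vectors
mean, basis = pca_basis(faces, n_components=60)
```

Because the components are ordered by explained variance separately within each group, the male and female bases are not directly comparable dimension by dimension, which motivates the projection procedure described below for the stimuli.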
In a second experiment, we investigated cross-gender identity aftereffect transfer to reveal whether this relational coding is based on a shared population of neurons for different groups of faces. Previous studies have familiarized subjects not only with target faces but also with anti-caricatured faces that were drawn from the identity trajectories of the target faces that were subsequently used at test (Jiang, Blanz, & O'Toole, 2006; Leopold et al., 2001; Rhodes et al., 2011). We investigate a more generalized adaptation paradigm based on learned full identity strength targets and including faces that do not lie on the identity trajectories of the target faces. Bestelmeyer et al. (2008) showed that gender-contingent identity aftereffects transfer between structurally different groups of faces if they are of the same sex (female and hyper-female) but not if they are of different sex (female and male). Synthesized and averaged faces often have different perceptual characteristics to the faces from which they are derived due to smoothing of features resulting from loss of high spatial frequency information. A face's perceived gender can also be shifted by the omission of features such as hair. We therefore checked that any aftereffect was a genuine cross-gender effect by having subjects rate the sex of the target, adapting, and test faces. 
We discuss our findings with reference to a flexible, rapidly reconfigurable system that allows faces to be referenced to both global and group means. The ability to access different representations of a face within face space would allow greater discriminability in response to shifting task demands and offers a fuller role for the conceptual framework of face space in everyday life. 
Experiment 1—Facial similarity/family resemblance
Methods
Stimuli
Ninety-eight female and 98 male faces (100 (w) × 120 (h) pixels) were vectorized separately for each sex using the morph-vectorization technique based on the Multi-channel Gradient Model (McGM; Johnston et al., 1999). We used 2D images of faces derived from the face database provided by the Max-Planck Institute for Biological Cybernetics in Tübingen, Germany (Troje & Bülthoff, 1996). This gave a 60,000-dimensional vector for each face comprising RGB and shape deviations from the group mean (see Berisha et al., 2010 for further details of the procedure). 
A basis set of 60 vectors for each sex was derived using PCA. Cross-gender equivalent faces were created by translating the morphed image vector of each face, relative to its sex mean, into the basis set of the other sex and reconstructing as shown in Figure 1. Stepwise, this involved (using a female face F as an example): 
Figure 1
 
Example of generation of a cross-gender equivalent face (female to male). (a) Female (pink) and male (blue) faces are vectorized separately and are represented as morphed image vectors in the morphed image space (indicated by gray axes). (b) PCA for each group gives modes of variation and group means. (c) Projection of mean relative female vector (red) into male PCA stimulus face space gives a synthesized equivalent male face (blue). Reversing the mean relative vector (multiplying by −1) gives the male anti-face (blue dashed). The male mean is added to the synthesized mean relative vector to yield the final vector of the male face in the morphed image space. The male equivalent or anti-face is reconstructed from this morph vector.
  1. Subtracting the mean female morphed image vector (F̄) from the sample face morphed image vector (F) to give the female mean relative vector (F − F̄);
  2. Finding the dot product of (F − F̄) with each of the 60 vectors of the male basis set, giving a set of 60 PC coefficients;
  3. Multiplying the vectors of the male basis set by the PC coefficients and summing to give the male mean relative vector F_M;
  4. Adding the male mean vector (M̄) to F_M to give the vector of the synthesized male “sibling” of F, designated M;
  5. Reconstructing M from vector to image form.
Note that this process does not use the PCA-derived basis set for the female group (i.e., the source group) or require that the basis sets for male and female groups are parallel, since the female mean relative vector (F − F̄) is calculated in the morphed image space, not in the reduced dimensionality PCA space. However, the female mean relative vector (F − F̄) may not lie in the subspace describing male face variability once shifted to be relative to the male mean (M̄). To ensure the reconstruction looks male, without image artifacts, we need to project the female mean relative vector into the male PCA space and reconstruct the vector (F_M). This new male mean referenced vector will not necessarily be parallel to the original female mean relative vector (F − F̄) in the morphed image space. Since the bases of the male and female spaces are different and the ordering of the bases has been based, conventionally, on the amount of variance accounted for, the directions in the two spaces cannot be compared, although the directions of the mean relative vectors (F − F̄ and F_M) in the morphed image space can be compared. Ultimately, this disparate ordering of components does not matter for synthesis, as synthesis involves summing all the independent projections of (F − F̄) onto the PCA components of the male basis set to give the PC coefficients of F_M. 
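The five steps above can be sketched with numpy. This is a hedged illustration under stated assumptions: `male_basis` holds the male PCA components as orthonormal rows, and the variable names (`female_mean`, `male_mean`, `cross_gender_sibling`) are ours; the toy dimensions stand in for the 60,000-dimensional morph vectors.

```python
import numpy as np

def cross_gender_sibling(face, female_mean, male_mean, male_basis, anti=False):
    """Synthesize the cross-gender 'sibling' (or anti-face) of a female face.

    male_basis: (60, n_dims) orthonormal PCA basis for the male group.
    """
    rel = face - female_mean          # step 1: female mean relative vector
    coeffs = male_basis @ rel         # step 2: 60 PC coefficients (dot products)
    if anti:
        coeffs = -coeffs              # anti-face: reverse all PC coefficients
    rel_male = male_basis.T @ coeffs  # step 3: male mean relative vector
    return male_mean + rel_male       # step 4 (step 5 reshapes this to an image)

# Toy data: an orthonormal 60-vector basis in a 200-dimensional space.
rng = np.random.default_rng(1)
n_dims = 200
q, _ = np.linalg.qr(rng.normal(size=(n_dims, 60)))
male_basis = q.T
female_mean = rng.normal(size=n_dims)
male_mean = rng.normal(size=n_dims)
face = rng.normal(size=n_dims)

sibling = cross_gender_sibling(face, female_mean, male_mean, male_basis)
anti = cross_gender_sibling(face, female_mean, male_mean, male_basis, anti=True)
```

By construction, the sibling and anti-face are reflections of one another about the male mean, mirroring the relationship between a face and its anti-face within a single group.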
An equivalent group of cross-gender anti-faces was derived using the same process by multiplying all PC coefficients created in stage 3 by −1. All faces were converted to grayscale and histogram-normalized to the luminance of the androgynous mean face to standardize image luminance. Faces were presented on a black background 5.7° vertically and 4.9° horizontally from fixation and subtended approximately 4.7° vertically by 3.7° horizontally. Subjects were not informed that some faces were synthesized; they were told that the contrast and apparent lighting of faces would vary because of inconsistent photographic techniques and that these differences should be ignored. 
Subjects
Twelve subjects (10 females) took part in Experiment 1a (perceptual similarity of cross-sex equivalent faces), and 12 further subjects (8 females) took part in Experiment 1b (perceptual similarity of cross-sex anti-faces). 
Design
Experiments were based on a 2-alternative forced choice (2AFC) similarity judgment task. Two pairs of faces were presented per trial, each pair consisting of an original face and a cross-gender synthesized face. Within each trial, the original faces were of the same gender in each pair. In Experiment 1a, the target pair comprised an original face and the cross-gender face synthesized from it (sibling pair); the distractor pair comprised an original face and a randomly selected cross-gender synthesized face. In Experiment 1b, the distractor pair comprised an original face and the cross-gender anti-face synthesized from it (anti-face pair); the target pair comprised an original face and a randomly selected cross-gender synthesized face. The position (top/bottom) and gender of the original members of each pair and the side (left/right) of the target and distractor pairs were counterbalanced. Each subject ran 2 blocks of 40 trials of either Experiment 1a or 1b; in one block, the faces were inverted. 
Procedure
Trials started with a central fixation cross (1.5 s). Faces were then presented for a maximum of 10 s (see Figure 2). Subjects were required to indicate which of the 2 male–female pairs was more similar (more likely to be brother and sister) by pressing the left or right arrow keys. Responses were not speeded (faces were removed after 10 s) and there was a 2-s ITI. 
Figure 2
 
Experiments 1a and 1b sample trials with original male faces and synthesized female faces. The task was to choose the more similar cross-gender pair (the pair most likely to be siblings). Results: Number of trials in which the expected pair was chosen as more perceptually similar (error bars indicate 95% confidence interval). Performance on all conditions was above chance (20/40 trials correct) except Experiment 1a inverted faces (p < 0.06).
Statistics
The number of trials in which subjects identified the target pair as the kin pair was calculated. In Experiment 1a, the target pair was the pair in which the cross-gender synthesized face was derived from the original face (as these have similar deviations in the morphed image space from the male and female means); in Experiment 1b, the target pair was the randomly selected pair (as the anti-face pair have opposite deviations from the male and female means in the morphed image space). Subjects' performance was compared to chance (50%) using a one-sample t-test; performance on upright and inverted faces was compared using paired t-tests. 
Image similarity between faces within the target and distractor pairs was calculated in two ways: 
  1. Morphed image vector Euclidean distance: the square root of the sum of the squared differences between the equivalent units of the mean relative vectors representing each face in morphed image space.
  2. Image-based Euclidean distance: the square root of the sum of the squared differences between equivalent pixel intensity values in each face.
For each trial, relative similarity scores were calculated [similarity of faces within the target pair (sibling pair in Experiment 1a and random pair in Experiment 1b) minus similarity of faces within the distractor pair (random pair in Experiment 1a and anti-face pair in Experiment 1b)] for both measures of similarity. Correlations between relative similarity scores and accuracy were calculated for upright and inverted conditions in both Experiments 1a and 1b. 
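Both distance measures and the per-trial relative score can be expressed compactly. This is a minimal sketch, not the authors' analysis code; the function names and the toy two-dimensional vectors are ours, and in practice the inputs would be mean relative morph vectors or raveled grayscale images.

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two vectors (morph vectors or raveled images)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.sum((a - b) ** 2))

def relative_similarity(target_pair, distractor_pair):
    """Per-trial relative score: target-pair distance minus distractor-pair distance.

    Smaller distance means greater image similarity, so negative scores
    indicate the target pair is the more similar pair under that measure.
    """
    return euclidean(*target_pair) - euclidean(*distractor_pair)

# Toy trial: target pair closer together than distractor pair.
score = relative_similarity(([0, 0], [3, 4]), ([0, 0], [6, 8]))
```

Correlating such scores with trial-by-trial accuracy is what allows the low-level (proximity and pixelwise) accounts of the similarity judgments to be tested.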
Results and discussion
Subjects chose the target pair (sibling pair in Experiment 1a and random pair in Experiment 1b) significantly more often than chance in all conditions (one-sample t-tests, all p < 0.05) except inverted faces in Experiment 1a in which accuracy was marginally above chance (p < 0.06). As shown in Figure 2, subjects chose the target pair more often when the faces were upright than when they were inverted (paired t-tests, 2-tailed, df = 11, Experiment 1a, t = 5.83, p < 0.0001; Experiment 1b, t = 6.42, p < 0.00005). Judgments of similarity might tap a generic pattern matching mechanism; however, the highly significant inversion effects indicate that subjects' judgments were based on facial configurations, not simply on facial features. There were no differences between response latencies in any condition (all p > 0.3; paired t-test) except for inverted faces in Experiment 1a in which target responses were faster than distractor responses (p = 0.01). 
Crucially, the perception of similarity between faces cannot be explained by greater proximity of images in the morphed image vector space nor by relative pixelwise similarity of the image pairs. Image-based similarity of inverted faces in Experiment 1b was correlated with accuracy (r(78) = 0.284, p = 0.011, not corrected for multiple comparisons), but no other correlations were significant (all p > 0.1). The correlation between pixelwise image similarity and accuracy for inverted faces in Experiment 1b may have emerged because neither pair appeared similar (because they were an anti-face pairing and a random pairing). In the absence of face-specific mechanisms as used in upright face processing, subjects fell back on basic measures of image similarity. However, it is clear that in upright conditions, in which face-sensitive configural processing mechanisms are dominant, performance was not based on simple feature comparison or image similarity. 
The encoding of male and female faces referenced to separate group means raises new questions regarding the structure and neural underpinnings of face space. If all faces are encoded relative to a global mean, it would suggest that all face encoding occurs in a shared population of neurons regardless of face category. Does the encoding of faces relative to different group means indicate separate coding populations for different groups? Previous studies have suggested distinct or dissociable coding for male and female faces but have used gross image distortions (Jaquet & Rhodes, 2008) or tasks that do not require explicit identification of faces (Bestelmeyer et al., 2008; Little et al., 2005). The nature of neural coding for more complex identity discriminations in different groups remains relatively under-investigated (Rhodes et al., 2011). We therefore tested whether face transformations of the type used to create our stimuli are encoded by a shared neural population for male and female faces. 
Experiment 2—Cross-gender identity adaptation
Methods
Stimuli
A PCA space was created for each gender as for Experiment 1 except that faces were normalized for luminance before vectorization, removing the need for luminance correction of synthesized faces. Three existing female face vectors that were close to mutually orthogonal in the PCA space were selected. These were then made mutually orthogonal using the Gram–Schmidt technique and the length of these vectors was then standardized prior to reconstructing 3 target faces: “Angela” (A), “Barbara” (B), and “Carol” (C). Cross-gender (male) anti-faces of these target faces were constructed using the technique previously described. Test stimuli were faces lying on the three “axes” (A/B, A/C, B/C) between the 0.7 identity strength anti-caricatures of each identity. The vector forms of the target faces were combined such that the total identity strength of the test faces was always 0.7; 8 test faces were created on each axis with the contributing identity strength of each target face ranging from 0 to 0.7 in increments of 0.1. For example, on the A/B axis, test faces ranged from 0.7 × Angela + 0 × Barbara to 0 × Angela + 0.7 × Barbara (see Figure 3). Female target and test faces subtended a visual angle of approximately 10.2° × 8.2°. Male adapting faces subtended a visual angle of approximately 10.8° × 8.6°. Fixation point, training, and adapting faces were presented centrally; test faces were presented at 8 locations on a circle 8.2° from fixation. Faces were presented on a black background rectangle on a mid-gray screen. 
Figure 3
 
Target faces, adapting faces, and sample test faces for Experiment 2.
Subjects
Thirteen subjects (6 females) participated in this experiment. 
Design
Subjects performed 4 blocks. They were adapted either to the male mean (baseline condition) or to the male anti-face of one of the target female faces (adapted condition) in each block (A–B–B–A design). Each block contained 192 trials: 8 trials per level per target axis (A/B, A/C, B/C) giving 16 trials for each data point in both conditions. Each subject was adapted to only one anti-face (counterbalanced across subjects: 5 to male anti-Angela, 4 to male anti-Barbara, 4 to male anti-Carol). 
Procedure
Blocks were run at least 1 day apart. Before each block, subjects were shown each target face (identity strength = +1) 4 times for 3 s with its name. They were then tested on recognition of these faces by presenting faces centrally for 1 s in pseudorandom (miniblocked) order. Subjects identified the face by a key press and were given feedback. When they had correctly identified the face on 18 consecutive trials, the adaptation block began. 
Subjects underwent a 30-s adaptation at the beginning of each block and after a mid-block rest, plus a 5-s top-up adaptation on every trial. Subjects performed a cued 2AFC task on all possible face combinations (A/B, A/C, B/C). Subjects were shown the names of the two target faces between which the test face lay, e.g., “Angela or Barbara” for 2 s. The male mean or male adapting anti-face was shown for 5 s followed by a 500-ms central fixation and a test face for 200 ms (see Figure 4). The 500-ms fixation point between the central adapting and peripheral test faces was used to eliminate any motion transient. Subjects were free to visually explore the adapting faces but were told to return their gaze to the fixation point before the test face. 
Figure 4
 
Trial schematic for Experiment 2.
Statistics
Each subject returned 6 data sets: 3 axes (A/B, A/C, B/C) × 2 conditions (baseline: following exposure to the male mean and adapted: following exposure to a male anti-face). Cumulative Gaussian functions were fitted using psignifit (see http://bootstrap-software.org/psignifit/; Wichmann & Hill, 2001). For each axis, equal upper and lower slip rates were calculated using combined baseline and adapted data. Separate functions were then fitted for each condition using these slip rates and the point of subjective equality (PSE) extracted for each function. We predict that, compared to baseline, adaptation to the anti-face will shift the PSE on axes involving the adapting (anti-)face toward the female identity from which the male adapting anti-face was derived. For example, if the adapting face is male anti-A, the PSE on the relevant test axes (A/B and A/C) should be shifted toward A. For each subject, mean baseline and adapted PSEs for the relevant axes were defined, such that a perceptual shift in the predicted direction results in a reduction in PSE. The mean baseline and adapted PSEs on the relevant axes were calculated and compared with a paired t-test (2-tailed). 
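The fitting procedure can be sketched without psignifit using a cumulative Gaussian with equal, fixed upper and lower slip rates; the PSE is the fitted mean of the Gaussian. This is a simplified stand-in for the psignifit fit, assuming a fixed slip rate of 0.02 rather than one estimated from the combined data, and the function names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(x, mu, sigma, slip=0.02):
    """Cumulative Gaussian with equal upper and lower slip rates."""
    return slip + (1 - 2 * slip) * norm.cdf(x, mu, sigma)

def fit_pse(levels, proportions):
    """Fit a cumulative Gaussian and return the PSE (the fitted mu)."""
    (mu, sigma), _ = curve_fit(
        cumulative_gaussian, levels, proportions,
        p0=[np.median(levels), 0.1],
        bounds=([min(levels), 1e-3], [max(levels), 10.0]),
    )
    return mu

# Simulated psychometric data with a PSE near the reported baseline (0.37).
levels = np.arange(0.0, 0.8, 0.1)
props = cumulative_gaussian(levels, mu=0.37, sigma=0.12)
pse = fit_pse(levels, props)
```

A shift of the fitted PSE between the baseline and adapted conditions, in the direction predicted by the identity of the adapting anti-face, is the measure of the aftereffect.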
Perceived sex of testing faces
Stimuli
The experimental faces (3 female target faces, 3 male anti-faces, the male mean, and 6 sample mid-axis test faces) were shown. The remaining male and female faces from which the separate PCA spaces were derived (98 males and 94 females) and the female mean were also included as dummy stimuli but are not considered further. Female faces subtended a visual angle of approximately 10.2° × 8.2°. Male faces subtended a visual angle of approximately 10.8° × 8.6°. All faces were centrally presented. 
Subjects
Twenty-nine subjects (21 females) took part in this experiment; eight of them subsequently took part in Experiment 2. 
Design and procedure
Faces were presented to the subject in a random order for 1 s preceded by a 500-ms fixation point and followed by a 1500-ms ITI. Subjects responded at their own pace by pressing keys marked 1–7. For 14 subjects, 1 = definitely male, 7 = definitely female; for the remaining 15 subjects, this scale was reversed and standardized later for analysis. 
Results and discussion
All subjects showed a mean perceptual shift toward the female face from which the adapting male anti-face was derived (min = 0.003; max = 0.130). The mean PSE in the baseline condition was 0.370 (SD = 0.058). In the adapted condition, this dropped to 0.310 (SD = 0.058). This perceptual shift was highly significant (paired t-test, 2-tailed, df = 12; t = 5.73, p < 0.0001; see Figure 5). These identity aftereffects cannot be explained by low-level visual adaptation since adapting and test faces were presented in different retinotopic locations (see Methods section). Nor can they be explained by a generalized shift in PSE due to exposure to a face other than the mean; since the identity of the adapting male anti-face was counterbalanced across subjects, the predicted direction of perceptual shift due to adaptation is reversed between subjects. 
Figure 5
 
Mean PSEs for Experiment 2 (error bars show 95% confidence interval) after exposure to male mean (baseline) and male anti-face (adapted). Adaptation in the direction of the female face from which the male adapting anti-face was derived is reflected by a decrease in PSE. Objective midpoint of dependent variable is 0.35.
This identity aftereffect does not simply reflect within-gender identity adaptation since the male and female faces formed two non-overlapping groups of different perceived gender. On a 7-point scale (1 = definitely male, 7 = definitely female), the female target identities were rated 3.97, 4.62, and 5.97, male anti-faces were rated 1.52, 1.55, and 1.93, and the male mean was rated 2.17. Sample faces from around the midpoints on the female test face axes were rated from 5.17 to 6.07 (mean = 5.75, SD = 0.39). 
General discussion
The conceptual framework of a prototype-based face space has provided powerful explanations for many perceptual phenomena involving faces. Single-cell recordings in macaques have revealed cells representing the direction of faces from the mean, linked to opponent cells representing the anti-face (Leopold et al., 2006). The challenge for this approach is to account for how the mean is established, whether multiple means are used, and whether mean-based referencing is flexible and reconfigurable. Our results support the continuing refinement of the concept of face space, away from a rigid framework and toward a flexible, dynamic model. 
Here, we asked whether shifting relative information with respect to group means reflecting sexual dimorphism retains something of the quality of the initial face, even though the faces occupy different directions in morphed image space relative to a global mean. Faces whose mean-relative vectors in morphed image space have similar directions, but different locations, are judged to be similar. That faces are compared to a group mean when detecting a family resemblance is supported by our specific exclusion of the possibility that perceived facial similarity is based solely on the absolute proximity of faces in a single morphed image space. This is intuitively reasonable, as caricatures and anti-caricatures have the same direction relative to a mean and are perceived as different versions of the same identity, yet may be quite far apart in terms of Euclidean distance. One interpretation of this result is that whereas individual faces are mapped to positions in perceptual face space, i.e., they are represented as fixed vectors in the space, relative information in perceptual face space, reflecting family resemblance, is best characterized as a free vector representing a transformation of the facial image. We therefore expect this resemblance to be apparent for other global transformations; e.g., a Caucasian and an Asian face should look similar if they share the same direction in face space from their respective group means, even though they may be some distance apart in a universal face space. It would be of interest to apply the same methods to race, since experiments with gross distortions of faces (Jaquet, Rhodes, & Hayward, 2007, 2008) and more subtle configural changes (Little, DeBruine, Jones, & Waitt, 2008) have shown little or no transfer between images of different racial groups. 
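The direction-agreement account can be made concrete: on this view, perceived family resemblance should track the cosine of the angle between the two faces' mean-relative vectors, independently of where the faces sit in the space. A minimal sketch with toy vectors (illustrative only, not the stimulus space):

```python
import numpy as np

def direction_similarity(face_a, mean_a, face_b, mean_b):
    """Cosine of the angle between two faces' mean-relative vectors:
    +1 = same direction from their group means (predicted siblings),
    -1 = opposite directions (predicted anti-siblings)."""
    u, v = face_a - mean_a, face_b - mean_b
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy three-dimensional "face space" with hypothetical group means.
mean_f, mean_m = np.zeros(3), np.ones(3)
dev = np.array([0.5, -0.2, 0.1])          # a shared mean-relative deviation

sib  = direction_similarity(mean_f + dev, mean_f, mean_m + dev, mean_m)
anti = direction_similarity(mean_f + dev, mean_f, mean_m - dev, mean_m)
# sib is ~+1 (same direction); anti is ~-1 (opposite direction),
# even though all four faces are far apart in Euclidean terms.
```

Note that this free-vector measure is unchanged if both faces are translated with their means, which is exactly what distinguishes it from Euclidean proximity.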
Facial similarity appears to reflect judgments of whether two faces have been transformed in the same way relative to different group means. However, in Experiment 2, the identity of a task-irrelevant male adapting face clearly influenced the perceived identity of a subsequent female test face. This implies that male and female faces are encoded within the same space and is difficult to accommodate under an assumption of separate mean-based male and female face spaces. We suggest that the transfer of the identity aftereffect follows naturally from the proposal of a multidimensional face space in which each dimension is opponent coded and adapts independently. Two-pool opponent coding for each dimension, with separate populations of cells tuned to opposite ends of each dimension, has emerged as a strong possibility (Leopold et al., 2001, 2006; Robbins et al., 2007; Susilo, McKone, & Edwards, 2010). The mean is implicitly related to the relative activity of all the opponent mechanisms across the population of cells encoding the dimensions of the description. Faces are distinguished on the basis of their component values on these dimensions, with distinctive faces having overall larger absolute component values than typical faces. On this model, faces differ in the component values required to describe them; more distinctive faces have component values that differ more from the component means. Adapting to a face shifts the balance between the opponent systems coding each dimension. Adaptation of these components, which represent variation in facial shape, would persist wherever the test stimulus lies with respect to a sex axis and hence would carry over into the representation of that face relative to a task-relevant mean. 
Since any subsequent face is initially encoded using the shared opponent coding mechanism, the adaptation would remain even if the test face is later encoded relative to a group mean different from that of the adapting face, e.g., the female mean vs. the male mean in Experiment 2. 
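How a two-pool opponent code produces such transferable aftereffects can be illustrated with a toy model (our sketch, not a model taken from the cited papers): position on one dimension is read from the difference of two oppositely tuned pools, and reducing the gain of one pool, as adaptation might, shifts the balance point away from the adaptor for any face tested afterward:

```python
import numpy as np

def pool_pos(v):
    """Population tuned to the positive end of one face-space dimension."""
    return 1.0 / (1.0 + np.exp(-4.0 * v))

def pool_neg(v):
    """Mirror-image population tuned to the negative end."""
    return 1.0 / (1.0 + np.exp(4.0 * v))

def read_out(v, gain_pos=1.0, gain_neg=1.0):
    # Position on the dimension is read from the difference of the pools;
    # the perceived mean sits wherever the two responses balance.
    return gain_pos * pool_pos(v) - gain_neg * pool_neg(v)

# Unadapted, the pools balance at the mean (v = 0), so read_out(0.0) == 0.
# Adapting to a face at the negative end (e.g., an anti-face) lowers that
# pool's gain, so a physically mean face now reads as shifted positive --
# an aftereffect away from the adaptor, whatever face is tested next.
shift = read_out(0.0, gain_neg=0.8)
```

Because the same pools serve every face encoded on this dimension, the shift applies regardless of which group mean the test face is later referenced to.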
Operationally, shifting the encoding of a face from being relative to a global mean to being relative to a group mean is equivalent to subtracting the group mean's vector from the face's position vector relative to the single global mean. This comparison to the relevant mean reflects the "partialling out" of task-irrelevant facial information. Indeed, differences in age and sex within possible child sibling pairs appear to be largely ignored and account for negligible variance in facial similarity judgments, whereas relatedness accounts for the vast majority of the variance (Maloney & Dal Martello, 2006). However, this raises the question of how one identifies the group means against which relative information is compared. Face encoding may be shifted to be relative to the nearest available mean in a global face space in order to maximize encoding efficiency, or to a group mean that is explicitly demanded by the task. Alternatively, the process may proceed in an iterative fashion from general to specific information. Global information that underlies face categorization has been shown to precede finer information within single neurons in macaques (Sugase, Yamane, Ueno, & Kawano, 1999). Information from multiple stages of face processing, reflecting increasingly detailed localization in face space, may therefore be available to later perceptual and decision-making structures depending on task. 
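The re-referencing operation described above is plain vector arithmetic. The sketch below, with arbitrary made-up vectors, confirms that subtracting the group mean's global-mean-relative vector from a globally coded face is identical to encoding that face directly against the group mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vectors in a common morphed image space (toy dimensionality).
global_mean = rng.normal(size=5)
group_mean  = global_mean + rng.normal(size=5)   # e.g., the male mean
face        = group_mean + rng.normal(size=5)    # an individual male face

# Encode relative to the global mean, then subtract the group mean's own
# mean-relative vector ("partialling out" group-level information)...
rel_global = face - global_mean
rel_group  = rel_global - (group_mean - global_mean)

# ...which is identical to encoding directly against the group mean.
assert np.allclose(rel_group, face - group_mean)
```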
The long-term establishment of means available for later comparison is a major issue for face encoding. Previous adaptation studies have indicated that the mean of face space can be shifted by prolonged or repeated exposure to faces (Leopold et al., 2001; Rhodes & Jeffery, 2006). Long-term adaptation may act to place the global mean at the point of equal activation of the opponent mechanisms on all dimensions, maximizing encoding efficiency and tuning face space to be maximally sensitive for face discrimination (Rhodes, Watson, Jeffery, & Clifford, 2010). In the long term, group means may also be established through repeated exposure to in-group faces that cluster together in the global face space. For example, the reduction with experience of the other-race effect (Hancock & Rhodes, 2008; McKone, Brewer, MacPherson, Rhodes, & Hayward, 2007) may reflect the development of separate group means for different races. Individuation within a race group yields greater increases in other-race recognition performance than categorization between groups (Tanaka & Pierce, 2009). This may stem from more accurate localization of the group mean via analysis of dimensions orthogonal to the global mean–group mean axis, which is necessary because the component values of all faces within the group will be tightly distributed on that axis and therefore a poor basis for individuation. 
In conclusion, previous models of face space based exclusively on a single mean are inadequate to explain the results reported here. Faces can also be encoded relative to local means when judging similarity of faces from different groups. Encoding relative to a face's group mean is crucial for judging similarity and this relative encoding is based on a shared neural population for different groups of faces. This leads us to propose an opponent-coded component-based face space that can be used flexibly to recover task-relevant information. 
Acknowledgments
This work was supported by the EPSRC, grant number EP/F037384/1. 
Commercial relationships: none. 
Corresponding author: Harry John Griffin. 
Email: harry.griffin@ucl.ac.uk. 
Address: Cognitive, Perceptual and Brain Sciences, University College London, 26 Bedford Way, London, WC1H 0AP, UK. 
References
Baudouin, J. Y., & Gallay, M. (2006). Is face distinctiveness gender based? Journal of Experimental Psychology: Human Perception and Performance, 32, 789–798.
Berisha, F., Johnston, A., & McOwan, P. W. (2010). Identifying regions that carry the best information about global facial configurations. Journal of Vision, 10(11):27, 1–8, http://www.journalofvision.org/content/10/11/27, doi:10.1167/10.11.27.
Bestelmeyer, P. E. G., Jones, B. C., DeBruine, L. M., Little, A. C., Perrett, D. I., Schneider, A., et al. (2008). Sex-contingent face aftereffects depend on perceptual category rather than structural encoding. Cognition, 107, 353–365.
Burton, A. M., Bruce, V., & Hancock, P. J. B. (1999). From pixels to people: A model of familiar face recognition. Cognitive Science, 23, 1–31.
Byatt, G., & Rhodes, G. (2004). Identification of own-race and other-race faces: Implications for the representation of race in face space. Psychonomic Bulletin & Review, 11, 735–741.
Calder, A. J., Burton, A. M., Miller, P., Young, A. W., & Akamatsu, S. (2001). A principal component analysis of facial expressions. Vision Research, 41, 1179–1208.
Chance, J. E., Turner, A. L., & Goldstein, A. G. (1982). Development of differential recognition for own- and other-race faces. Journal of Psychology, 112, 29–37.
Chen, H. W., Russell, R., Nakayama, K., & Livingstone, M. (2010). Crossing the ‘uncanny valley’: Adaptation to cartoon faces can influence perception of human faces. Perception, 39, 378–386.
DeBruine, L. M., Smith, F. G., Jones, B. C., Roberts, S. C., Petrie, M., & Spector, T. D. (2009). Kin recognition signals in adult faces. Vision Research, 49, 38–43.
Freiwald, W. A., Tsao, D. Y., & Livingstone, M. S. (2009). A face feature space in the macaque temporal lobe. Nature Neuroscience, 12, 1187–1196.
Hancock, K. J., & Rhodes, G. (2008). Contact, configural coding and the other-race effect in face recognition. British Journal of Psychology, 99, 45–56.
Jaquet, E., & Rhodes, G. (2008). Face aftereffects indicate dissociable, but not distinct, coding of male and female faces. Journal of Experimental Psychology: Human Perception and Performance, 34, 101–112.
Jaquet, E., Rhodes, G., & Hayward, W. G. (2007). Opposite aftereffects for Chinese and Caucasian faces are selective for social category information and not just physical face differences. Quarterly Journal of Experimental Psychology, 60, 1457–1467.
Jaquet, E., Rhodes, G., & Hayward, W. G. (2008). Race-contingent aftereffects suggest distinct perceptual norms for different race faces. Visual Cognition, 16, 734–753.
Jiang, F., Blanz, V., & O'Toole, A. J. (2006). Probing the visual representation of faces with adaptation—A view from the other side of the mean. Psychological Science, 17, 493–500.
Johnston, A., McOwan, P. W., & Benton, C. P. (1999). Robust velocity computation from a biologically motivated model of motion perception. Proceedings of the Royal Society of London B: Biological Sciences, 266, 509–518.
Johnston, R. A., Milne, A. B., & Williams, C. (1997). Do distinctive faces come from outer space? An investigation of the status of multidimensional face-space. Visual Cognition, 4, 59–67.
Leopold, D. A., Bondar, I. V., & Giese, M. A. (2006). Norm-based face encoding by single neurons in the monkey inferotemporal cortex. Nature, 442, 572–575.
Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94.
Little, A. C., DeBruine, L. M., & Jones, B. C. (2005). Sex-contingent face after-effects suggest distinct neural populations code male and female faces. Proceedings of the Royal Society B: Biological Sciences, 272, 2283–2287.
Little, A. C., DeBruine, L. M., Jones, B. C., & Waitt, C. (2008). Category contingent aftereffects for faces of different races, ages and species. Cognition, 106, 1537–1547.
Maloney, L. T., & Dal Martello, M. F. (2006). Kin recognition and the perceived facial similarity of children. Journal of Vision, 6(10):4, 1047–1056, http://www.journalofvision.org/content/6/10/4, doi:10.1167/6.10.4.
McKone, E., Brewer, J. L., MacPherson, S., Rhodes, G., & Hayward, W. G. (2007). Familiar other-race faces show normal holistic processing and are robust to perceptual stress. Perception, 36, 224–248.
Rhodes, G., Jaquet, E., Jeffery, L., Evangelista, E., Keane, J., & Calder, A. J. (2011). Sex-specific norms code face identity. Journal of Vision, 11(1):1, 1–11, http://www.journalofvision.org/content/11/1/1, doi:10.1167/11.1.1.
Rhodes, G., & Jeffery, L. (2006). Adaptive norm-based coding of facial identity. Vision Research, 46, 2977–2987.
Rhodes, G., Watson, T. L., Jeffery, L., & Clifford, C. W. (2010). Perceptual adaptation helps us identify faces. Vision Research, 50, 963–968.
Robbins, R., McKone, E., & Edwards, M. (2007). Aftereffects for face attributes with different natural variability: Adapter position effects and neural models. Journal of Experimental Psychology: Human Perception and Performance, 33, 570–592.
Sugase, Y., Yamane, S., Ueno, S., & Kawano, K. (1999). Global and fine information coded by single neurons in the temporal visual cortex. Nature, 400, 869–873.
Susilo, T., McKone, E., & Edwards, M. (2010). What shape are the neural response functions underlying opponent coding in face space? A psychophysical investigation. Vision Research, 50, 300–314.
Tanaka, J. W., & Pierce, L. J. (2009). The neural plasticity of other-race face recognition. Cognitive, Affective, & Behavioral Neuroscience, 9, 122–131.
Troje, N. F., & Bülthoff, H. H. (1996). Face recognition under varying poses: The role of texture and shape. Vision Research, 36, 1761–1771.
Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion and race in face recognition. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 42, 161–204.
Wichmann, F. A., & Hill, N. J. (2001). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63, 1293–1313.
Figure 1
 
Example of generation of a cross-gender equivalent face (female to male). (a) Female (pink) and male (blue) faces are vectorized separately and are represented as morphed image vectors in the morphed image space (indicated by gray axes). (b) PCA for each group gives modes of variation and group means. (c) Projection of mean relative female vector (red) into male PCA stimulus face space gives a synthesized equivalent male face (blue). Reversing the mean relative vector (multiplying by −1) gives the male anti-face (blue dashed). The male mean is added to the synthesized mean relative vector to yield the final vector of the male face in the morphed image space. The male equivalent or anti-face is reconstructed from this morph vector.
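The construction in Figure 1 can be sketched in a few lines: PCA of each gender's vectorized faces gives group means and modes of variation, and a female face's mean-relative vector, projected onto the male modes with sign +1 or −1 and added back to the male mean, yields the cross-gender sibling or anti-sibling. The data below are random stand-ins, not the stimulus faces or the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random stand-ins for the vectorized face sets (rows = faces).
females = rng.normal(size=(94, 20))
males   = rng.normal(size=(98, 20)) + 0.5   # offset stands in for dimorphism

f_mean, m_mean = females.mean(axis=0), males.mean(axis=0)

# PCA of the mean-centred male set: rows of m_modes are the male modes.
_, _, m_modes = np.linalg.svd(males - m_mean, full_matrices=False)

def cross_gender(female_face, sign=1.0):
    """Project a female face's mean-relative vector onto the male modes;
    sign=+1 gives the male 'sibling', sign=-1 the 'anti-sibling'."""
    rel = female_face - f_mean          # deviation from the female mean
    coeffs = m_modes @ rel              # coordinates on the male modes
    return m_mean + sign * (coeffs @ m_modes)

sibling      = cross_gender(females[0])
anti_sibling = cross_gender(females[0], sign=-1.0)
# The two deviate from the male mean in exactly opposite directions.
```

By construction, sibling and anti-sibling share the female face's mean-relative direction with opposite sign, which is the property the similarity experiments exploit.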
Figure 2
 
Experiments 1a and 1b sample trials with original male faces and synthesized female faces. The task was to choose the more similar cross-gender pair (the pair most likely to be siblings). Results: Number of trials in which the expected pair was chosen as more perceptually similar (error bars indicate 95% confidence interval). Performance on all conditions was above chance (20/40 trials correct) except Experiment 1a inverted faces (p < 0.06).
Figure 3
 
Target faces, adapting faces, and sample test faces for Experiment 2.
Figure 4
 
Trial schematic for Experiment 2.
© 2011 ARVO