Theoretical and physical work by Parke (1974); Jones and Poggio (1996); Ezzat and Poggio (1996); DeCarlo, Metaxas, and Stone (1998); and Cootes, Edwards, and Taylor (1998) created the foundation for a face space. In 1999, Blanz and Vetter created a concrete, “physical” face space by laser scanning real human faces and creating smooth, three-dimensional (3D) models. In their face space, novel face exemplars could be generated from the sample exemplars by separately adjusting the 3D shape and texture parameters. Applying principal components analysis (PCA) to the resulting dimensions produced a set of
eigenfaces, candidate dimensions for face representations. Following Blanz and Vetter's (1999) work, other researchers (Wilson, Loffler, & Wilkinson, 2002; Davidenko, 2007; Chang & Tsao, 2017) have generated physical face space models based on the distribution of “real” face stimuli. Although the resulting dimensions vary from study to study, they tend to describe global and configural properties of faces (e.g., adiposity, protrusion of the forehead and chin, distance between the eyes), rather than individual features. Wilson et al. (2002) created a “synthetic face space” based on images of real faces. Each face was coded with 37 landmark points based on radial measurements at equally spaced angles around the head (including hairline and head shape). The low dimensionality of this space facilitated the process of coding faces and describing the underlying dimensions. However, two major limitations of the synthetic face space were the use of generic features (e.g., eyes and mouth) that did not differ across individual faces, and the small number and racial homogeneity of the faces included in the space. Given that the eyes and mouth contribute strongly to face identification (see Schyns, Bonnar, & Gosselin, 2002; Smith, Cottrell, Gosselin, & Schyns, 2005), this method is limited in its applicability to realistic face recognition behavior. Later, Davidenko (2007) created a similar landmark-based face space to describe the variability of profile face silhouettes. This model was based on the manual coding of 18 keypoint locations (36 XY coordinates) on a large number of profile face images. A PCA revealed the underlying dimensions of the space, and behavioral ratings confirmed that the space can be effectively described by its first 20 dimensions. Although the method did not rely on prespecified generic facial features, it was limited by the lack of texture information and feature details about the eyes, nose, and mouth.
2007) created a similar landmark-based face space to describe the variability of profile face silhouettes. This model was based on the manual coding of 18 keypoint locations (36 XY coordinates) on a large number of profile face images. A PCA revealed the underlying dimensions of the space, and behavioral ratings confirmed that the space can be effectively described by its first 20 dimensions. Although the method did not rely on using prespecified generic facial features, it was limited by the lack of texture information and feature details about the eyes, nose, and mouth.