Naphtali Abudarham, Galit Yovel; Reverse-engineering the Face-Space: Discovering the Crucial Features for Face Identification. Journal of Vision 2014;14(10):563. doi: 10.1167/14.10.563.
Despite extensive research, little is known about which facial features are critical for face identification. Here we present a "reverse engineering" approach to this problem: testing which features must be changed so that a modified face is no longer recognized as the original person. Based on the Face-Space theory (Valentine, 1989), we constructed a multidimensional feature space in which each dimension is a different feature and each face is represented as a feature vector. To do this, we defined a set of 20 features (e.g., eyebrow thickness, skin texture, lip thickness) and asked subjects to assign values to these features for each face (e.g., rate eyebrow thickness on a "very narrow - very thick" scale). To modify faces, the distinctive (i.e., far from average) features in each face were replaced by copying features with "opposite" values from other faces in the dataset. To assess whether distance in face space was correlated with perceptual similarity judgments, face identification was measured by presenting pairs of pictures, before and after modification, and asking subjects to rate the extent to which the two faces are the same or different people. Results show that distances between feature vectors of faces were correlated with perceptual similarity judgments between faces, validating the dimensions of the face space. This correlation increased when each feature was weighted according to its inter-rater reliability measure. Specifically, a subset of 7 features, which includes hair color and length, eye shape and color, eyebrow thickness, ear protrusion, and lip thickness, accounted for most of the variability in perceptual similarity scores between faces. We conclude that these features, to which we are most sensitive, may be critical for face identification.
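The core analysis described above — computing reliability-weighted distances between feature vectors and correlating them with perceptual similarity ratings — can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: the feature vectors, weights, and ratings below are invented stand-ins for the 20-feature ratings and inter-rater reliability weights the abstract describes.

```python
# Illustrative sketch (hypothetical data, not the authors' implementation):
# correlate weighted face-space distances with perceptual similarity ratings.
import math

def weighted_distance(a, b, w):
    """Weighted Euclidean distance between two feature vectors."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)))

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)

# Hypothetical data: each pair is (original, modified) over 4 example
# features rated on a 1-7 scale; weights are made-up reliability scores.
weights = [1.0, 0.8, 0.6, 0.9]
face_pairs = [
    ([4, 5, 3, 6], [4, 5, 3, 6]),   # unchanged face
    ([4, 5, 3, 6], [2, 5, 3, 6]),   # one distinctive feature replaced
    ([4, 5, 3, 6], [1, 2, 7, 3]),   # several features replaced
]
# Mock perceptual ratings: higher = "more clearly different people".
ratings = [1.0, 3.5, 6.5]

distances = [weighted_distance(a, b, weights) for a, b in face_pairs]
print(pearson(distances, ratings))  # expect a strong positive correlation
```

Up-weighting features by their inter-rater reliability, as the abstract reports, amounts to choosing larger `weights` entries for the features subjects rate most consistently, which increases the distance contribution of exactly those dimensions.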
Meeting abstract presented at VSS 2014