Abstract
Many researchers agree that faces are represented in a multidimensional face space with the “mean face” at the origin [Valentine, Q. J. Exp. Psych. 43(2), 1991]. It is often assumed that the component dimensions of this space involve appearance-based features, but there is relatively little empirical evidence to support that view. The research described here was designed to address this issue by systematically manipulating images of human faces using a variety of homeomorphic transformations, some of which resembled craniofacial changes that can occur in nature (e.g., growth), whereas others did not. Each type of transformation was applied at several different magnitudes, and the differences in image structure they produced were measured using several possible metrics involving either pixel intensities or wavelet outputs.
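The two families of image-difference metrics mentioned above can be illustrated with a brief sketch. This is not the authors' actual analysis code; the function names are hypothetical, and the wavelet metric is illustrated with a simple one-level Haar decomposition, assuming grayscale images with even dimensions represented as NumPy arrays.

```python
import numpy as np

def pixel_distance(a, b):
    """Root-mean-square difference of pixel intensities
    (one candidate metric of low-level image change)."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def haar_coeffs(img):
    """One-level 2-D Haar decomposition: a stand-in for the
    'wavelet outputs' family of metrics."""
    img = img.astype(float)
    # average / difference along columns, then rows
    lo = (img[:, ::2] + img[:, 1::2]) / 2.0
    hi = (img[:, ::2] - img[:, 1::2]) / 2.0
    ll = (lo[::2, :] + lo[1::2, :]) / 2.0   # coarse approximation
    lh = (lo[::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return np.concatenate([c.ravel() for c in (ll, lh, hl, hh)])

def wavelet_distance(a, b):
    """RMS difference between Haar coefficients of two images."""
    ca, cb = haar_coeffs(a), haar_coeffs(b)
    return float(np.sqrt(np.mean((ca - cb) ** 2)))
```

Under this sketch, a transformation's magnitude can be quantified by either `pixel_distance(original, transformed)` or `wavelet_distance(original, transformed)`, and the two metrics can disagree in how they rank transformations.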
Observers performed a sequential face matching task in which they viewed an image of a standard face, followed by a transformed version of either the same face or a different face, and they were required to judge whether the two successive images depicted the same individual. The results revealed that performance deteriorated with the magnitude of image change for every transformation type. However, there were also clear differences among the transformation types that could not be explained by simple differences in low-level image structure. These findings suggest that observers' judgments may have been based on configural relations among facial features that can remain relatively invariant over some types of transformations, but not others.
This research was supported by a grant from NSF (BCS-0546107).