Abstract
What is the structure of human face space? Despite extensive work on this topic, the main qualitative properties of this space remain a matter of debate and a quantitative description is still missing. Our work adopts a large-scale computational approach based on high-throughput modeling with the aim of providing a rigorous quantitative description of face space. To this end, first, we collect behavioral ratings of face similarity across a large number of stimuli and, second, we examine the structure of the space underlying human performance by means of computational modeling. Specifically, we evaluate thousands of model architectures and parameter instantiations with respect to their ability to account for the properties of human face space. Our results support three main conclusions: 1) an architecture based on independent component analysis (ICA) provides a better fit to empirical data than analogous ones based on principal component analysis (PCA) or linear discriminant analysis (LDA); 2) simple metrics (e.g., Euclidean) account better for the similarity structure of face space than complex ones (e.g., Mahalanobis); 3) color information is encoded in individual face representations along with luminance-based information. In addition, we find that an implementation of diagnostic component selection (e.g., selection of diagnostic principal components) improves the fits to empirical data and provides markedly smaller estimates of dimensionality. Thus, the present work fleshes out some of the main properties of human face space and, more generally, it lays out a rigorous and detailed computational approach to the study of human face recognition.
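The abstract does not report implementation details, but the evaluation logic it describes, projecting faces into a candidate space (PCA vs. ICA), computing pairwise dissimilarities under a candidate metric (Euclidean vs. Mahalanobis), and scoring each combination by its agreement with behavioral similarity ratings, can be sketched as follows. All data here are synthetic placeholders; component counts, rating format, and the use of a Spearman correlation as the fit statistic are assumptions, not details taken from the study.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-in for face stimuli: 100 "faces", each a flattened
# 20x20 image (the study used real face images and behavioral data)
faces = rng.normal(size=(100, 400))
n_pairs = 100 * 99 // 2

# Synthetic stand-in for pairwise human dissimilarity ratings; in the
# study these come from behavioral similarity judgments
human_dissim = rng.normal(size=n_pairs)

# Candidate face-space architectures: project faces onto a small
# number of components (10 is an arbitrary choice here)
n_comp = 10
codes_by_model = {
    "PCA": PCA(n_components=n_comp, random_state=0).fit_transform(faces),
    "ICA": FastICA(n_components=n_comp, random_state=0).fit_transform(faces),
}

# Score every architecture x metric combination by how well its
# pairwise dissimilarities correlate with the "behavioral" ratings
scores = {}
for name, codes in codes_by_model.items():
    for metric in ("euclidean", "mahalanobis"):
        d = pdist(codes, metric=metric)  # one value per face pair
        rho, _ = spearmanr(d, human_dissim)
        scores[(name, metric)] = rho
```

On real data, the abstract reports that ICA-based codes with a simple Euclidean metric yield the best fits; with the random data above no combination is expected to stand out.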
Meeting abstract presented at VSS 2012