Abstract
Although a preponderance of research in face perception has focused on front views, the visual system analyzes many individual faces over a significant range of left/right views. The concept of Face Space (Valentine, 1991) suggested that faces might be represented by a learned prototype plus a distance and direction of individual variation from the prototype. This proposal tacitly dealt with front views only and avoided the problem of generalization across views. One way of rectifying this would be to construct a complete face space for each of N distinct views and then somehow link views of the same individual across view spaces. However, storing N views of every face imposes a heavy memory load. Here I propose a hybrid model that retains the appeal of face space and generalizes it to multiple views with a minimum of additional memory storage. Furthermore, it specifies a transformation for comparing face images across views. Finally, it accounts for errors in human matching across face views.
The computation is based on the face space concept that individual faces in front view are encoded as deviations from a learned front view prototype. To this are added learned prototypes for other views. To compare a side view with a front view, the viewing angle is first estimated from the image, and the corresponding view prototype is then subtracted from the image information. Matching occurs by computing the distance between the residual image information and the stored information in the front view space. Simulations using a database of 80 Caucasian faces produce an accuracy rate of 92%, in good agreement with human data. This approach also suggests explanations for face deficits in the elderly and in prosopagnosia.
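A minimal sketch of this matching procedure is given below, under the assumption that face images are reduced to numeric feature vectors; the function and variable names (encode_front_view, match_across_views, view_prototypes, gallery) are illustrative and not the author's implementation.

    import numpy as np

    def encode_front_view(front_image_vec, view_prototypes):
        # A known individual is stored as the deviation of their front-view
        # image vector from the learned front-view prototype.
        return front_image_vec - view_prototypes["front"]

    def match_across_views(probe_image_vec, estimated_view, view_prototypes, gallery):
        # probe_image_vec: feature vector from the probe image (assumed representation)
        # estimated_view: view label estimated from the image, e.g. "three_quarter"
        # view_prototypes: dict mapping view label -> learned prototype vector
        # gallery: dict mapping identity -> stored front-view deviation vector

        # Subtract the prototype for the estimated view, leaving the residual
        # individual variation at that view.
        residual = probe_image_vec - view_prototypes[estimated_view]

        # Match by computing the distance between the residual and each stored
        # front-view code; the nearest stored identity wins.
        best_id, best_dist = None, np.inf
        for identity, front_code in gallery.items():
            dist = np.linalg.norm(residual - front_code)
            if dist < best_dist:
                best_id, best_dist = identity, dist
        return best_id, best_dist

On this reading, the only storage required beyond the front-view codes is one prototype vector per additional view, which is what keeps the added memory cost minimal.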