August 2009
Volume 9, Issue 8
Vision Sciences Society Annual Meeting Abstract  |   August 2009
View transformations in face space: A computational approach
Author Affiliations
  • Hugh R. Wilson
    Centre for Vision Research, York University, Toronto
Journal of Vision August 2009, Vol.9, 540. doi:https://doi.org/10.1167/9.8.540
Abstract

Although a preponderance of research in face perception has focused on front views, the visual system analyzes many individual faces over a significant range of left/right views. The concept of Face Space (Valentine, 1991) suggested that faces might be represented by a learned prototype plus a distance and direction of individual variation from the prototype. This proposal tacitly dealt with front views only and avoided the problem of generalization across views. One way of rectifying this would be to construct a complete face space for each of N distinctive views and then somehow link views of the same individual across view spaces. However, storage of N views of every face produces a heavy memory storage load. Here I propose a hybrid model that retains the appeal of face space and generalizes it to multiple views with a minimum of additional memory storage. Furthermore, it specifies a transformation for comparison of face images across views. Finally, it accounts for errors in human matching across face views.

The computation is based on the face space concept that individual faces in front view are encoded as deviations from a learned front view prototype. To this are added learned prototypes for other views. To compare a side view to a front view, the view is first estimated from the image, and the relevant view prototype is then subtracted from the image information. Matching occurs by computing the distance between the residual image information and stored information in the front view space. Simulations using a database of 80 Caucasian faces produced an accuracy rate of 92%, in good agreement with human data. This approach also suggests explanations for face deficits in the elderly and in prosopagnosia.
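The matching scheme described above (subtract the estimated view's prototype, then find the nearest stored individual deviation in the front-view space) can be sketched as follows. This is a minimal illustration, not the author's implementation: the dimensionality of the face codes, the Euclidean nearest-neighbor rule, and the synthetic prototypes and deviations are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_dims = 80, 40          # assumed: 80 faces, 40-D face-space codes

# Front-view space: each face = front-view prototype + individual deviation.
front_prototype = rng.normal(size=n_dims)
deviations = rng.normal(scale=0.5, size=(n_faces, n_dims))
front_codes = front_prototype + deviations   # stored front-view face codes

# A learned prototype for one non-front view (e.g., a side view).
side_prototype = rng.normal(size=n_dims)

def match_side_view(side_image_code, side_proto, front_space, front_proto):
    """Subtract the relevant view prototype from the image code, then
    return the index of the nearest individual deviation stored in the
    front-view space."""
    residual = side_image_code - side_proto          # individual deviation
    stored_deviations = front_space - front_proto
    dists = np.linalg.norm(stored_deviations - residual, axis=1)
    return int(np.argmin(dists))

# Simulate a slightly noisy side view of face 7: same individual
# deviation as in front view, but added to the side-view prototype.
probe = side_prototype + deviations[7] + rng.normal(scale=0.05, size=n_dims)
print(match_side_view(probe, side_prototype, front_codes, front_prototype))
```

Note that only one extra prototype per view is stored, rather than a full copy of every face at every view, which is the memory saving the hybrid model claims over storing N complete view spaces.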

Wilson, H. R. (2009). View transformations in face space: A computational approach [Abstract]. Journal of Vision, 9(8):540, 540a, http://journalofvision.org/9/8/540/, doi:10.1167/9.8.540.
Footnotes
 CIHR grant #172103