Vision Sciences Society Annual Meeting Abstract | June 2007
Human face matching performance is robust to task-irrelevant image changes
Author Affiliations
  • Danelle Wilbraham
    Department of Psychology, Ohio State University
  • Aleix Martinez
    Department of Electrical and Computer Engineering, Ohio State University
  • James Christensen
    Department of Psychology, Ohio State University
  • James Todd
    Department of Psychology, Ohio State University
Journal of Vision June 2007, Vol. 7, 890. doi: https://doi.org/10.1167/7.9.890
Abstract

A popular conception of face representation is that faces are encoded in a multidimensional face space with the “mean face” at the origin [Valentine, Q. J. Exp. Psych. 43(2), 1991]. Several studies have hypothesized that appearance-based features may account for many of the dimensions of this face space. We examined the relationship between appearance-based dimensions and judgments made by human observers in two experiments. In the first experiment, a match-to-sample paradigm was used in which observers saw a sample face followed by two alternative faces that varied from the sample in expression or illumination. Observers indicated which of the alternatives shared the same identity as the sample. The second experiment employed a sequential matching paradigm in which the standard face was partially occluded by a checkerboard grid of small black squares. The comparison face was occluded by the same checkerboard as the standard, a reversed checkerboard, a checkerboard shifted in phase by 90 degrees, or no checkerboard. Observers indicated whether the two faces were of the same or different identities. Presentations in both experiments were masked by a full-screen pattern.
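
For concreteness, checkerboard occluders of the kind described above could be generated as in the Python sketch below. The block size, image dimensions, and the mapping of a 90-degree phase shift to half a block are illustrative assumptions, not parameters reported in the abstract.

```python
import numpy as np

def checkerboard_mask(shape, block=8, invert=False, shift=0):
    """Binary occlusion mask: True where a black square covers the face.

    block  : side length of each square in pixels (assumed value)
    invert : reversed checkerboard (occluded and visible squares swapped)
    shift  : offset in pixels applied to both axes; block // 2 stands in for
             a 90-degree phase shift of the checkerboard (period = 2 * block)
    """
    rows, cols = np.indices(shape)
    parity = ((rows + shift) // block + (cols + shift) // block) % 2
    mask = parity.astype(bool)
    return ~mask if invert else mask

def occlude(face, mask):
    """Overlay the checkerboard: occluded pixels are set to black (0)."""
    out = face.copy()
    out[mask] = 0
    return out

# Hypothetical 128x128 grayscale face image as a stand-in for a real stimulus
face = np.random.rand(128, 128)
same      = occlude(face, checkerboard_mask(face.shape))
reversed_ = occlude(face, checkerboard_mask(face.shape, invert=True))
shifted   = occlude(face, checkerboard_mask(face.shape, shift=8 // 2))
```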

Several computational appearance-based algorithms were investigated. Both the pixel intensities and the outputs of Gabor filters were used as input. Euclidean distances were calculated between the inputs for the pairs of images in the experiment. In addition, both intensities and Gabor filter outputs were subjected to several variations of Principal Components Analysis. The results revealed that, although these techniques produced reasonably accurate performance in many of the conditions tested, the correspondence between the appearance-based algorithms and the observers was poor. This was especially true in the checkerboard experiment, and remained true even after a filling-in algorithm was applied to remove the occlusions. These results demonstrate that appearance-based dimensions alone cannot adequately parameterize the face space used by human observers.
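
The appearance-based comparisons described above (Euclidean distances over raw pixel intensities, over Gabor filter outputs, and over their principal-component projections) can be sketched roughly as follows. The filter bank, number of components, and image size are illustrative assumptions; this is not the authors' actual analysis code.

```python
import numpy as np
from skimage.filters import gabor          # standard scikit-image Gabor filter
from sklearn.decomposition import PCA

def pixel_features(img):
    """Raw pixel intensities flattened into a feature vector."""
    return img.ravel().astype(float)

def gabor_features(img,
                   frequencies=(0.1, 0.2),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Concatenated Gabor magnitudes over a small filter bank
    (frequencies and orientations are illustrative choices)."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(img, frequency=f, theta=t)
            feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return np.linalg.norm(a - b)

# Hypothetical stimulus set: N grayscale face images of equal size
faces = [np.random.rand(64, 64) for _ in range(20)]   # stand-ins for real stimuli

# Distance between an image pair in raw pixel space
d_pix = euclidean(pixel_features(faces[0]), pixel_features(faces[1]))

# Distance in Gabor-filter space
d_gab = euclidean(gabor_features(faces[0]), gabor_features(faces[1]))

# Distance after projecting pixel features onto the leading principal components
X = np.stack([pixel_features(f) for f in faces])
pcs = PCA(n_components=10).fit_transform(X)
d_pca = euclidean(pcs[0], pcs[1])
```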

Wilbraham, D., Martinez, A., Christensen, J., & Todd, J. (2007). Human face matching performance is robust to task-irrelevant image changes [Abstract]. Journal of Vision, 7(9):890, 890a, http://journalofvision.org/7/9/890/, doi:10.1167/7.9.890.
Footnotes
 Supported in part by a grant from NIH.