Vision Sciences Society Annual Meeting Abstract | July 2013
A Neurocomputational Basis for Face Configural Effects
Author Affiliations
  • Irving Biederman
    Psychology, University of Southern California
    Neuroscience, University of Southern California
  • Xiaokun Xu
    Psychology, University of Southern California
Journal of Vision July 2013, Vol.13, 1113. doi:https://doi.org/10.1167/13.9.1113
Abstract

The representation of faces is said to be configural. But what could "configural" possibly mean in neurocomputational terms? If the coding of faces retains aspects of the original multiscale, multiorientation tuning characteristic of early visual stages (with allowance for translation and size invariance), then configural effects could be explained merely by coding with medium- and large-scale V1-type kernels whose receptive fields are well modeled as Gabor filters. These kernels would cover large regions of the face, independent of whether the variation in contrast derived from part shapes, part distances, or the subtle sculpting of the smooth surfaces. Because the receptive fields of these filters overlap, altering the shape of a single part would affect not only a column of Gabors varying in scale and orientation (termed a Gabor "jet") whose receptive fields are centered on that part, but also Gabors centered at distant parts of the face. We used the von der Malsburg Gabor-jet system (Lades et al., 1993), which captures essential aspects of V1 hypercolumn coding, to model the paradigmatic configural effect in faces (Tanaka & Farah, 1993): after learning a set of faces (generated from IdentiKit parts), recognition of a whole face, against a distractor differing only in the shape of a single part (e.g., the nose), was more accurate than recognition of that part in isolation. The Gabor-jet model yielded the same ordering, producing greater dissimilarity for whole faces than for the single parts that distinguished those faces. The psychophysical discriminability of pairs of faces is almost perfectly predictable from a Gabor-jet similarity metric, but the same capacity that allows for the coding of fine metric differences renders face individuation susceptible to inversion or contrast reversal.
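
A minimal sketch of the kind of Gabor-jet dissimilarity computation described above is given below. It is not the Lades et al. (1993) implementation: the filter parameters (five scales, eight orientations), the uniform grid of jet locations, and the cosine-based dissimilarity measure are assumptions made here for concreteness.

    import numpy as np

    def gabor_kernel(size, wavelength, theta, sigma):
        """Complex Gabor: an oriented sinusoidal carrier under a Gaussian envelope."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        x_theta = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        carrier = np.exp(1j * 2.0 * np.pi * x_theta / wavelength)
        return envelope * carrier

    def gabor_jet(image, row, col, n_scales=5, n_orients=8):
        """Magnitudes of multiscale, multiorientation Gabor responses at one
        location (a 'jet'), analogous to the outputs of a V1 hypercolumn."""
        responses = []
        for s in range(n_scales):
            wavelength = 4.0 * (2.0 ** (0.5 * s))         # half-octave spacing (assumed)
            sigma = 0.5 * wavelength
            size = int(6 * sigma) | 1                     # odd width covering the envelope
            half = size // 2
            patch = image[row - half:row + half + 1, col - half:col + half + 1]
            for o in range(n_orients):
                k = gabor_kernel(size, wavelength, np.pi * o / n_orients, sigma)
                responses.append(np.abs(np.sum(patch * k)))
        return np.asarray(responses)

    def face_dissimilarity(img_a, img_b, grid=10, margin=40):
        """Mean (1 - cosine similarity) between corresponding jets on a grid of
        points; larger values mean more discriminable image pairs under the model."""
        h, w = img_a.shape
        rows = np.linspace(margin, h - margin - 1, grid).astype(int)
        cols = np.linspace(margin, w - margin - 1, grid).astype(int)
        dissims = []
        for r in rows:
            for c in cols:
                ja = gabor_jet(img_a, r, c)
                jb = gabor_jet(img_b, r, c)
                cos = ja @ jb / (np.linalg.norm(ja) * np.linalg.norm(jb) + 1e-12)
                dissims.append(1.0 - cos)
        return float(np.mean(dissims))

Because the largest kernels span much of the face, replacing one part (e.g., the nose) changes jets centered far from that part; under the abstract's account, this overlap is what lets whole-face pairs yield greater model dissimilarity than pairs of the isolated parts that distinguish them. Inputs here are assumed to be grayscale face images as 2-D float arrays large enough (e.g., 256 x 256) to contain the grid and kernel margins.
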

Meeting abstract presented at VSS 2013
