Vision Sciences Society Annual Meeting Abstract  |  September 2015
Volume 15, Issue 12
Using structural and semantic voxel-wise encoding models to investigate face representation in human cortex
Author Affiliations
  • Alan Cowen
    Psychology, University of California, Berkeley
  • Samy Abdel-Ghaffar
    Psychology, University of California, Berkeley
  • Sonia Bishop
    Psychology, University of California, Berkeley; Helen Wills Neuroscience Institute, University of California, Berkeley
Journal of Vision September 2015, Vol.15, 422.

Alan Cowen, Samy Abdel-Ghaffar, Sonia Bishop; Using structural and semantic voxel-wise encoding models to investigate face representation in human cortex. Journal of Vision 2015;15(12):422.


Face perception plays a vital role in human social interaction. Psychologists have theorized that a hierarchy of brain regions processes low- to high-level visual information about faces. Gallant and colleagues have demonstrated that large-scale stimulus sets and extended data collection, combined with multi-feature encoding models and regularized regression, can be used to investigate the neural representation of natural images [1]. Here we applied this approach to conduct a detailed investigation of the voxel-wise representation of natural face stimuli within brain regions across all stages of the visual processing stream. Low- and high-level structural (Gabor wavelet pyramid, fiducial point, etc.) and semantic encoding models were fit to estimation data (27 runs, each of 4.5 min duration; 864 face stimuli from the LFW database, each shown twice) and used to predict voxel-wise BOLD responses to novel faces at validation (9 runs of 8 min; 99 stimuli, each repeated 11 times). In line with previous findings, the Gabor wavelet pyramid model showed good prediction in early visual cortex. In contrast, semantic face features were primarily encoded in and around face-selective regions (OFA, FFA, pSTS). Structural face features (e.g. location of mouth and eyes) predicted voxel-wise responses in early visual regions, in particular near-foveal early visual cortex, and, to some extent, in high-level face regions. Selectivity differences between classical face-selective regions were more nuanced than previously suggested. This work illustrates how the use of naturalistic face images, extensive data acquisition, and advanced modeling techniques makes it possible to interrogate the extent to which the representation of facial features changes across the cortical visual processing stream.

[1] T. Naselaris, R. J. Prenger, K. N. Kay, M. Oliver, J. L. Gallant. Bayesian reconstruction of natural images from human brain activity. Neuron 63:902-915 (2009).
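The voxel-wise encoding-model procedure described above can be sketched in a few lines: fit one regularized (ridge) regression per voxel, mapping stimulus features (e.g. Gabor wavelet pyramid outputs) to BOLD responses on the estimation data, then score prediction accuracy on held-out validation stimuli. This is a minimal illustrative sketch with synthetic data and an arbitrary ridge penalty, not the authors' actual analysis pipeline; the matrix sizes merely echo the stimulus counts reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real data: feature matrices (stimuli x features)
# and BOLD responses (stimuli x voxels). In the study, the features would come
# from e.g. a Gabor wavelet pyramid applied to each face image.
n_train, n_val, n_feat, n_vox = 864, 99, 128, 50
true_w = rng.standard_normal((n_feat, n_vox))          # hypothetical ground truth
X_train = rng.standard_normal((n_train, n_feat))
X_val = rng.standard_normal((n_val, n_feat))
Y_train = X_train @ true_w + rng.standard_normal((n_train, n_vox))
Y_val = X_val @ true_w + rng.standard_normal((n_val, n_vox))

def fit_ridge(X, Y, alpha):
    """Closed-form ridge regression: one weight vector per voxel (column of Y)."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ Y)

def voxelwise_prediction_r(X_tr, Y_tr, X_va, Y_va, alpha=10.0):
    """Fit on estimation data, predict validation responses, and return
    the per-voxel Pearson correlation between predicted and observed BOLD."""
    W = fit_ridge(X_tr, Y_tr, alpha)
    Y_hat = X_va @ W
    Yh = (Y_hat - Y_hat.mean(0)) / Y_hat.std(0)
    Yo = (Y_va - Y_va.mean(0)) / Y_va.std(0)
    return (Yh * Yo).mean(0)

r = voxelwise_prediction_r(X_train, Y_train, X_val, Y_val)
print(r.shape)  # one prediction accuracy per voxel
```

In practice the ridge penalty would be chosen per voxel by cross-validation within the estimation data, and competing feature spaces (structural vs. semantic) would be compared by their validation-set prediction accuracy in each voxel.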

Meeting abstract presented at VSS 2015

