Vision Sciences Society Annual Meeting Abstract | August 2014
Retinotopic priors for eyes and mouth in face perception and face sensitive cortex
Author Affiliations
  • Benjamin de Haas
    Institute of Cognitive Neuroscience, University College London
  • D. Samuel Schwarzkopf
    Division of Psychology and Language Sciences, University College London
  • Ivan Alvarez
    Institute of Child Health, University College London
  • Linda Henriksson
    MRC Cognition and Brain Sciences Unit, Cambridge
  • Nikolaus Kriegeskorte
    MRC Cognition and Brain Sciences Unit, Cambridge
  • Geraint Rees
    Institute of Cognitive Neuroscience, University College London
Journal of Vision August 2014, Vol. 14, 207. doi: https://doi.org/10.1167/14.10.207
Abstract

Gaze patterns towards faces typically concentrate in a region bounded by the eyes above and the mouth below (e.g. van Belle et al., 2010). This implies a natural retinotopic bias: eyes will appear more often in the upper than the lower visual field, and vice versa for mouths. We asked whether this bias is reflected in perceptual sensitivity and in cortical processing of face features. In a behavioral experiment we tested whether recognition performance for eyes and mouths varied with retinotopic location. On each trial, healthy human participants (n=18) saw a brief (200 ms) image of a single eye or mouth, accompanied by a noise mask. Recognition performance was tested in a match-to-sample task. In a canonical condition, eye and mouth stimuli were presented at their typical upper and lower visual field locations; in a second condition these locations were reversed. We found strong evidence for the predicted feature by location interaction (F=21.87, P<0.001). Recognition of eyes was significantly better at upper than lower visual field locations (t=3.34, P<0.01), while the reverse was true for mouth recognition (t=3.40, P<0.01). We speculated that this might reflect a correlation between the spatial and feature preferences of neural populations in face-sensitive cortex. Based on this hypothesis we performed an fMRI experiment (n=21) using identical stimuli. Preliminary results indicate that patterns evoked by eyes vs. mouths could be separated significantly better than chance in the inferior occipital gyrus (IOG) and fusiform face area (FFA) of either hemisphere. Crucially, pattern separability was significantly better for the canonical condition in right IOG (t=2.20, P<0.05), and a similar trend was observed for right FFA (t=1.92, P=0.07). These results indicate that sensitivity to face features is spatially heterogeneous across the visual field and in human face-sensitive cortex. Face feature sensitivity thus likely reflects input statistics.
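
The two analyses described above can be illustrated with short Python sketches. Both use simulated data and hypothetical variable names; they are not the authors' code, and the stimulus, preprocessing, and analysis details are not specified in the abstract.

First, a minimal sketch of the 2x2 within-subject feature-by-location analysis behind the behavioral results. For a 2x2 repeated-measures design, the interaction can be tested as a one-sample t-test on the per-participant difference of differences (the interaction F equals that t squared), followed by paired comparisons within each feature.

```python
# Sketch of the behavioural 2x2 interaction analysis (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 18  # participants in the behavioural experiment

# Simulated recognition accuracies (proportion correct) per participant
# for each feature (eye / mouth) at each location (upper / lower field).
eye_upper   = rng.normal(0.80, 0.08, n)
eye_lower   = rng.normal(0.72, 0.08, n)
mouth_upper = rng.normal(0.70, 0.08, n)
mouth_lower = rng.normal(0.78, 0.08, n)

# Interaction: one-sample t-test on the difference of differences;
# the repeated-measures interaction F equals t squared.
interaction = (eye_upper - eye_lower) - (mouth_upper - mouth_lower)
t_int, p_int = stats.ttest_1samp(interaction, 0.0)
print(f"feature x location interaction: F = {t_int**2:.2f}, p = {p_int:.4f}")

# Follow-up paired comparisons within each feature.
t_eye, p_eye = stats.ttest_rel(eye_upper, eye_lower)
t_mouth, p_mouth = stats.ttest_rel(mouth_lower, mouth_upper)
print(f"eyes, upper vs. lower:   t = {t_eye:.2f}, p = {p_eye:.4f}")
print(f"mouths, lower vs. upper: t = {t_mouth:.2f}, p = {p_mouth:.4f}")
```

Second, a sketch of the pattern-separability (decoding) analysis for a single face-sensitive ROI, assuming trial-wise voxel patterns have already been extracted. Per-subject cross-validated accuracies for the canonical and reversed conditions would then be compared with a paired t-test, mirroring the right IOG comparison.

```python
# Sketch of cross-validated eye-vs.-mouth decoding within one ROI
# (hypothetical data and dimensions).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)

def decoding_accuracy(patterns, labels):
    """Mean cross-validated accuracy for classifying eye vs. mouth trials."""
    clf = SVC(kernel="linear")
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, patterns, labels, cv=cv).mean()

# Simulated trial-wise voxel patterns for one ROI: 80 trials x 200 voxels,
# labels 0 = eye, 1 = mouth; the canonical condition carries a stronger signal.
labels = np.repeat([0, 1], 40)
canonical = rng.normal(size=(80, 200)) + 0.3 * labels[:, None]
reversed_ = rng.normal(size=(80, 200)) + 0.1 * labels[:, None]

print(f"canonical: {decoding_accuracy(canonical, labels):.2f}")
print(f"reversed:  {decoding_accuracy(reversed_, labels):.2f}")
# Across participants, the two per-subject accuracies would be compared
# with a paired t-test (scipy.stats.ttest_rel).
```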

Meeting abstract presented at VSS 2014
