Vision Sciences Society Annual Meeting Abstract | September 2015
The neural basis of context-driven object perception
Author Affiliations
  • Talia Brandman
    CIMeC - Center for Mind/Brain Sciences, University of Trento
  • Marius Peelen
    CIMeC - Center for Mind/Brain Sciences, University of Trento
Journal of Vision September 2015, Vol. 15, 608. https://doi.org/10.1167/15.12.608

Citation: Talia Brandman, Marius Peelen; The neural basis of context-driven object perception. Journal of Vision 2015;15(12):608. https://doi.org/10.1167/15.12.608.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Considerable evidence points to a division of scene and object processing into two distinct neural pathways that rely on different types of visual cues. However, scene and object perception may also interact, as demonstrated by contextual effects of background on object perception. At present, the neural underpinnings of scene-object interaction remain unknown. Here we asked how visual context shapes the neural representation of objects in real-world scenes. We presented subjects with contextually defined objects, created by degrading each object such that it was identifiable only within its original scene context. Using fMRI and MEG in two studies, we examined the neural representation of object animacy for contextually defined objects relative to degraded objects alone and scenes alone. An animacy localizer included animate and inanimate intact objects with no background. A linear classifier was trained to discriminate animate from inanimate intact objects and then tested on animacy discrimination for contextually defined objects, degraded objects alone, and scenes alone. In the fMRI study, applying this multivariate approach in a searchlight analysis revealed above-chance decoding of object animacy for contextually defined objects, significantly stronger than for degraded objects alone or scenes alone, in an extrastriate visual area of the right occipitotemporal cortex. In addition, we present the results of a connectivity analysis correlating the response pattern in this region with the fMRI signal across the brain. In the MEG study, a similar cross-decoding analysis of sensor patterns revealed above-chance decoding of object animacy for contextually defined objects, significantly stronger than for degraded objects alone or scenes alone, peaking at 300 ms from stimulus onset. In sum, our results provide the first evidence that scene context shapes the neural representation of objects, suggesting that the category of an unidentifiable object is disambiguated by contextual scene cues after 300 ms and is represented in extrastriate visual cortex.

Meeting abstract presented at VSS 2015
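
The cross-decoding logic at the core of both studies can be illustrated with a minimal sketch: train a classifier to discriminate animacy in one condition (intact objects) and test it on another (contextually defined objects), so that above-chance accuracy indicates an animacy representation that generalizes across conditions. The sketch below uses scikit-learn with synthetic data; the abstract specifies only "a linear classifier", so the linear SVM, array shapes, and variable names here are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the cross-decoding analysis, assuming a linear SVM.
# All data here are synthetic stand-ins for fMRI voxel patterns (per
# searchlight sphere) or MEG sensor patterns (per time point).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_features = 100, 50  # hypothetical trial and feature counts

# Localizer condition: intact animate vs. inanimate objects, no background.
X_intact = rng.normal(size=(n_trials, n_features))
y_intact = rng.integers(0, 2, size=n_trials)  # 0 = inanimate, 1 = animate

# Test condition: contextually defined objects (degraded object in scene).
X_context = rng.normal(size=(n_trials, n_features))
y_context = rng.integers(0, 2, size=n_trials)

# Train on intact objects, test on the other condition (cross-decoding):
# above-chance accuracy implies a shared animacy representation.
clf = LinearSVC().fit(X_intact, y_intact)
accuracy = clf.score(X_context, y_context)
print(f"Cross-decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```

In the fMRI searchlight variant of this analysis, the train/test step would be repeated within a small sphere of voxels centered on each brain location; in the MEG variant, at each time point across sensors, yielding a decoding time course such as the one peaking at 300 ms reported above.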
