Vision Sciences Society Annual Meeting Abstract | July 2013
Complex object representations in the medial temporal lobe: Feature conjunctions and view invariance
Author Affiliations
  • Jonathan Erez
    Department of Psychology, University of Toronto, Toronto, ON, Canada
  • Rhodri Cusack
    Department of Psychology, University of Western Ontario, London, ON, Canada
    The Brain and Mind Institute, London, ON, Canada
  • Will Kendall
    Department of Psychology, University of Toronto, Toronto, ON, Canada
  • Morgan Barense
    Department of Psychology, University of Toronto, Toronto, ON, Canada
    Rotman Research Institute, Toronto, ON, Canada
Journal of Vision July 2013, Vol.13, 783. doi:https://doi.org/10.1167/13.9.783
Abstract

The medial temporal lobe (MTL) is known to be vital for memory function. However, recent studies have shown that a specific set of brain structures within the MTL is also important for perception. For example, studies of amnesic patients with damage to MTL structures showed that these patients performed poorly on perceptual tasks, specifically when discriminating between items that shared overlapping features. It has been suggested that one MTL structure in particular, the perirhinal cortex (PRC), should be considered part of the representational hierarchy in the ventral visual stream (VVS) and is responsible for representing the complex conjunctions of features that compose objects, perhaps at a view-invariant level. In this study we investigated how the different features composing complex objects are represented throughout the VVS, up to and including the MTL, and at what stage these representations become view-invariant. To address these questions, we used multi-voxel pattern analysis (MVPA) of fMRI data, a technique that has gained prominence for its ability to probe the underlying neural representations of visual information. Participants completed a one-back task involving novel objects composed of either one (e.g., A, B, or C), two (e.g., AB, AC, or BC), or three (e.g., ABC) features and presented from one of two possible viewpoints. This allowed us to examine the degree to which the neural representation of a pair of objects depended only on the sum of their parts (i.e., A+BC = AB+C), or whether the specific feature conjunctions within objects were encoded. A searchlight analysis using this method indicated that anterior regions of the VVS, including the PRC, coded the complex conjunctions of features that composed the objects, over and above the individual features themselves. Moreover, we found evidence to suggest that the conjunctive representations in the PRC were view-invariant.
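
The sum-of-the-parts logic can be made concrete with a small simulation. The Python sketch below is illustrative only and is not the authors' analysis pipeline: it assumes per-condition voxel patterns are available as 1-D arrays, uses simulated data, and uses a simple correlation measure to show why A+BC = AB+C holds under purely feature-based (additive) coding but breaks down when a region carries conjunction-specific signal. All names and parameters here are hypothetical.

```python
# Hypothetical sketch of the sum-of-the-parts test described above.
# Per-condition voxel patterns are simulated; names, shapes, and the
# similarity metric are illustrative assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Simulated single-feature patterns A, B, C for one region of interest.
A, B, C = (rng.standard_normal(n_voxels) for _ in range(3))

def feature_based(*features):
    """Pattern for a multi-feature object under purely additive (feature) coding."""
    return np.sum(features, axis=0)

def conjunctive(*features, weight=1.0):
    """Additive pattern plus a conjunction-specific component (nonlinear coding)."""
    return feature_based(*features) + weight * rng.standard_normal(n_voxels)

def sum_similarity(p1a, p1b, p2a, p2b):
    """Correlation between the summed patterns of two object pairs that
    contain the same features overall (e.g., A+BC vs. AB+C)."""
    return np.corrcoef(p1a + p1b, p2a + p2b)[0, 1]

# Purely feature-based region: A+BC and AB+C contain identical feature sums,
# so the two summed patterns are essentially identical.
r_feature = sum_similarity(A, feature_based(B, C),
                           feature_based(A, B), C)

# Conjunctive region: AB and BC carry extra conjunction-specific signal,
# so the equality A+BC = AB+C is broken and similarity drops.
r_conj = sum_similarity(A, conjunctive(B, C),
                        conjunctive(A, B), C)

print(f"feature-based coding: r = {r_feature:.2f}")  # ~1.0
print(f"conjunctive coding:   r = {r_conj:.2f}")     # < 1.0
```

In a searchlight analysis of this kind, a statistic like this would be computed within a small sphere centred on each voxel in turn, yielding a whole-brain map of where conjunctive coding exceeds feature-based coding.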

Meeting abstract presented at VSS 2013
