Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
Intersubject similarity of multivoxel codes in perirhinal cortex reflects the typicality of visual objects
Author Affiliations
  • Amy Price
    Center for Cognitive Neuroscience, University of Pennsylvania
  • Michael Bonner
    Center for Cognitive Neuroscience, University of Pennsylvania
  • Jonathan Peelle
    Department of Otolaryngology, Washington University in St. Louis
  • Murray Grossman
    Center for Cognitive Neuroscience, University of Pennsylvania
Journal of Vision September 2016, Vol. 16(12):1430. https://doi.org/10.1167/16.12.1430
Abstract

The ventral visual pathway transforms perceptual inputs of objects into increasingly complex representations, and its highest stages are thought to contain abstract semantic codes. A key function of these semantic codes is to provide a common understanding of visual objects across individuals. For example, my stored knowledge of the familiar object "red apple" should be similar to yours if we are to communicate and coordinate our behaviors. This predicts a specific functional architecture: neural codes of visual-semantic regions are structured to provide a common ground between observers of the visual world. Here we tested for a key signature of this proposed architecture by (1) identifying regions encoding high-level object meaning and (2) testing whether inter-subject similarity in these regions tracks object meaning. During fMRI, subjects viewed objects created from combinations of shapes (apples, leaves, roses) and colors (red, green, pink, yellow, blue) while performing an unrelated target-detection task. For each object set, we created a semantic-similarity model based on the co-occurrence frequencies of color-object combinations (e.g., "yellow apple") from a large lexical corpus (Fig. 1A). These models were orthogonal to perceptual models for shape or color alone. Using representational similarity analysis, we found that perirhinal cortex was the only region that significantly correlated with the semantic-similarity model (p < 0.01; Fig. 1B). Next, we hyper-aligned each subject's data to a common, high-dimensional space in a series of anatomic regions. We predicted that in visual-semantic regions, inter-subject similarity would be related to the semantic typicality of the objects. Indeed, we found that perirhinal cortex was unique in containing population codes for which inter-subject similarity increased with object typicality (Fig. 1C and 1D). Our results suggest that high-level regions at the interface of vision and memory encode combinatorial information that underlies real-world knowledge of visual objects and may instantiate a neural "common ground" for object meaning across individuals.
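For readers unfamiliar with these methods, the Python sketch below illustrates the logic of the two analyses described in the abstract on synthetic data: (1) a representational similarity analysis (RSA) correlating a neural representational dissimilarity matrix (RDM) with a semantic-model RDM, and (2) a correlation between per-object intersubject pattern similarity and object typicality. All array sizes, variable names, and the choice of Spearman rank correlation are illustrative assumptions, not the authors' analysis pipeline; hyperalignment itself is omitted and the patterns are treated as already projected into a shared space.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Hypothetical sizes: 20 subjects, 15 objects (3 shapes x 5 colors), 200 voxels.
    n_subjects, n_objects, n_voxels = 20, 15, 200

    # Synthetic condition-averaged multivoxel patterns for one anatomical ROI,
    # assumed already aligned across subjects (hyperalignment step omitted).
    patterns = rng.standard_normal((n_subjects, n_objects, n_voxels))

    # --- (1) RSA for a single subject ---
    # Neural RDM: correlation distance between every pair of object patterns,
    # returned as a condensed (upper-triangle) vector.
    neural_rdm = pdist(patterns[0], metric="correlation")

    # Stand-in semantic-model RDM; in the study this was derived from lexical
    # co-occurrence frequencies of color-object combinations.
    semantic_rdm = pdist(rng.standard_normal((n_objects, 5)), metric="correlation")

    rho, p = spearmanr(neural_rdm, semantic_rdm)
    print(f"RSA model fit: rho = {rho:.3f}, p = {p:.3f}")

    # --- (2) Intersubject similarity vs. object typicality ---
    typicality = rng.random(n_objects)   # stand-in typicality scores

    def mean_intersubject_similarity(obj_patterns):
        """Mean pairwise correlation of one object's patterns across subjects."""
        c = np.corrcoef(obj_patterns)            # subjects x subjects matrix
        return c[np.triu_indices_from(c, k=1)].mean()

    isim = np.array([mean_intersubject_similarity(patterns[:, i, :])
                     for i in range(n_objects)])

    rho, p = spearmanr(isim, typicality)
    print(f"Intersubject similarity vs. typicality: rho = {rho:.3f}, p = {p:.3f}")

With real data, the finding reported above would correspond to a reliably positive correlation in step (2) within perirhinal cortex but not in other regions.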

Meeting abstract presented at VSS 2016
