Vision Sciences Society Annual Meeting Abstract  |   September 2016
Visual and Semantic Neural Representations For Animate and Inanimate Objects
Author Affiliations
  • Manoj Kumar
    Neuroscience Program, University of Illinois
  • Kara Federmeier
    Neuroscience Program, University of Illinois
  • Li Fei-Fei
    Department of Computer Science, Stanford University
  • Diane Beck
    Neuroscience Program, University of Illinois
Journal of Vision September 2016, Vol. 16, 503. doi: https://doi.org/10.1167/16.12.503
Abstract

When we view a picture or read a word, we evoke its meaning (semantics) based on prior knowledge of the concept. We previously showed that pictures and words describing scene categories evoke similar representations in the inferior frontal gyrus, precuneus, angular gyrus, and ventral visual cortex. Although we know that the neural representations of pictures of tools and animals differ, do similar differences arise when these concepts are evoked through words? To characterize the differences and similarities between these concepts, we examined neural activity for pictures and words using multivariate pattern analysis (MVPA), training a classifier to decode the BOLD signal both within and across modalities. Picture stimuli were animate and inanimate exemplars from twenty-four categories drawn from six superordinate classes (big cats, insects, birds, vehicles, tools, and fruits). Word stimuli were approximate "captions" of the kinds of pictures used for the same category (e.g., 'a panther hunting at night'). In the fMRI experiment, subjects first passively viewed all the word stimuli for the twenty-four categories and then the picture stimuli for the same categories. A whole-brain MVPA searchlight was performed with a six-way classification over the superordinate classes and a finer-grained four-way sub-class classification. Decoding from the word stimuli alone revealed a distributed set of left-hemisphere regions, including the angular gyrus, precuneus, inferior frontal gyrus, and putative visual areas in lateral and ventral occipito-temporal cortex. Picture decoding was more extensive but included these same regions. As with natural scenes, we could cross-decode between words and pictures of objects in the inferior frontal gyrus, precuneus, and angular gyrus, consistent with these regions serving as category-general semantic hubs, whereas ventral visual regions showed differential specificity among the concepts for animals, tools, and natural scenes.
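The cross-modal decoding step described above can be illustrated with a small sketch: train a linear classifier on picture-evoked response patterns and test it on word-evoked patterns, and the reverse. This is a minimal illustration on simulated data with a linear SVM; the abstract does not specify the classifier, the pattern-estimation pipeline, or the searchlight parameters, and every name and size below is a hypothetical stand-in.

```python
# A minimal, hypothetical sketch of cross-modal MVPA decoding: train a linear
# classifier on picture-evoked patterns, test on word-evoked patterns, and vice
# versa. Simulated data stands in for real GLM pattern estimates; class,
# trial, and voxel counts are illustrative assumptions only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_classes = 6        # six superordinate classes (big cats, insects, birds, ...)
n_trials = 20        # assumed pattern estimates per class and modality
n_voxels = 200       # assumed voxels in one searchlight sphere or ROI

# Shared class "prototypes" let the simulation mimic a region whose category
# code is common to both modalities, which is what cross-decoding tests.
prototypes = rng.normal(size=(n_classes, n_voxels))

def simulate_patterns(noise_scale=2.0):
    """Return (patterns, labels) for one modality: prototype + trial noise."""
    X = np.vstack([
        prototypes[c] + rng.normal(scale=noise_scale, size=(n_trials, n_voxels))
        for c in range(n_classes)
    ])
    y = np.repeat(np.arange(n_classes), n_trials)
    return X, y

X_pic, y_pic = simulate_patterns()
X_word, y_word = simulate_patterns()

clf = make_pipeline(StandardScaler(), LinearSVC())

# Cross-decoding in both directions; chance level is 1/6 for six classes.
clf.fit(X_pic, y_pic)
print(f"picture -> word accuracy: {clf.score(X_word, y_word):.2f}")

clf.fit(X_word, y_word)
print(f"word -> picture accuracy: {clf.score(X_pic, y_pic):.2f}")
```

In an actual searchlight analysis, this fit-and-score step would be repeated for every sphere across the brain, yielding a whole-brain map of cross-decoding accuracy.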

Meeting abstract presented at VSS 2016

×
×

This PDF is available to Subscribers Only

Sign in or purchase a subscription to access this content. ×

You must be signed into an individual account to use this feature.

×