Vision Sciences Society Annual Meeting Abstract | August 2014
Visual And Semantic Representations Of Scenes
Author Affiliations
  • Manoj Kumar
    Neuroscience Program, University of Illinois
  • Kara D. Federmeier
    Neuroscience Program, University of Illinois
  • Li Fei-Fei
    Department of Computer Science, Stanford University
  • Diane M. Beck
    Neuroscience Program, University of Illinois
Journal of Vision August 2014, Vol.14, 1126. doi:https://doi.org/10.1167/14.10.1126
Abstract

A long-standing, unanswered question in cognitive science is: Do different modalities (pictures, words, sounds, smells, tastes, and touch) access a common store of semantic information? Although different modalities have been shown to activate a shared network of brain regions, this does not imply a common representation, as the neurons in these regions could process the different modalities in completely different ways. A truer measure of a "common code" across modalities would be a strong similarity of the neural activity evoked by the different modalities. Using multi-voxel pattern analysis (MVPA), we examined the similarity of neural activity across pictures and words. Specifically, we asked whether scenes (e.g., a picture of a beach) and related phrases (e.g., "sandy beach") evoke similar patterns of neural activity. In an fMRI experiment, subjects passively viewed blocks of either phrases describing scenes or pictures of scenes, drawn from four categories: beaches, cities, highways, and mountains. To determine whether the phrases and pictures share a common code, we trained a classifier on one stimulus type (e.g., phrase stimuli) and then tested it on the other stimulus type (e.g., picture stimuli). A whole-brain MVPA searchlight revealed multiple brain regions in the occipitotemporal, posterior parietal, and frontal cortices that showed transfer from pictures to phrases and from phrases to pictures. This similarity of neural activity patterns across the two input types provides strong evidence of a common semantic code for pictures and words in the brain.
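
The cross-decoding logic described in the abstract can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it assumes, purely for illustration, synthetic matrices X_phrases and X_pictures standing in for block-averaged voxel patterns from a single searchlight, and uses a linear SVM from scikit-learn to train on one modality and test on the other.

```python
# Minimal cross-decoding sketch (illustrative only, not the authors' code):
# train a linear classifier on patterns evoked by one stimulus type and test
# it on the other. Synthetic data stand in for searchlight voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_blocks, n_voxels = 40, 100                          # blocks per modality, voxels per searchlight
categories = np.repeat([0, 1, 2, 3], n_blocks // 4)   # beach, city, highway, mountain

# Hypothetical patterns: a shared category signal plus modality-specific noise.
signal = rng.normal(size=(4, n_voxels))
X_phrases = signal[categories] + rng.normal(scale=2.0, size=(n_blocks, n_voxels))
X_pictures = signal[categories] + rng.normal(scale=2.0, size=(n_blocks, n_voxels))

clf = make_pipeline(StandardScaler(), LinearSVC())

# Train on phrases, test on pictures (and the reverse) to probe a shared code.
clf.fit(X_phrases, categories)
print("phrases -> pictures accuracy:", clf.score(X_pictures, categories))

clf.fit(X_pictures, categories)
print("pictures -> phrases accuracy:", clf.score(X_phrases, categories))
```

Above-chance transfer accuracy in both directions, as in this sketch, is the signature of a representation shared across the two input modalities.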

Meeting abstract presented at VSS 2014
