Manoj Kumar, Kara D. Federmeier, Li Fei-Fei, Diane M. Beck; Visual And Semantic Representations Of Scenes. Journal of Vision 2014;14(10):1126. doi: 10.1167/14.10.1126.
A long-standing, unanswered question in cognitive science is: do different modalities (pictures, words, sounds, smells, tastes, and touch) access a common store of semantic information? Although different modalities have been shown to activate a shared network of brain regions, this does not imply a common representation, because the neurons in these regions could process the different modalities in completely different ways. A stronger measure of a "common code" across modalities is similarity of the neural activity evoked by the different modalities. Using multi-voxel pattern analysis (MVPA), we examined the similarity of neural activity evoked by pictures and words. Specifically, we asked whether scenes (e.g., a picture of a beach) and related phrases (e.g., "sandy beach") evoke similar patterns of neural activity. In an fMRI experiment, subjects passively viewed blocks of either phrases describing scenes or pictures of scenes, drawn from four categories: beaches, cities, highways, and mountains. To determine whether the phrases and pictures share a common code, we trained a classifier on one stimulus type (e.g., phrase stimuli) and then tested it on the other stimulus type (e.g., picture stimuli). A whole-brain MVPA searchlight revealed multiple regions in occipitotemporal, posterior parietal, and frontal cortices that showed transfer from pictures to phrases and from phrases to pictures. This similarity of neural activity patterns across the two input types provides strong evidence of a common semantic code for pictures and words in the brain.
Meeting abstract presented at VSS 2014