Amy Price, Michael Bonner, Jonathan Peelle, Murray Grossman; Intersubject similarity of multivoxel codes in perirhinal cortex reflects the typicality of visual objects. Journal of Vision 2016;16(12):1430. doi: 10.1167/16.12.1430.
© 2017 Association for Research in Vision and Ophthalmology.
The ventral visual pathway transforms perceptual inputs of objects into increasingly complex representations, and its highest stages are thought to contain abstract semantic codes. A key function of these semantic codes is to provide a common understanding of visual objects across individuals. For example, my stored knowledge of the familiar object "red apple" should be similar to yours if we are to communicate and coordinate our behaviors. This predicts a specific functional architecture: neural codes of visual-semantic regions are structured to provide a common ground between observers of the visual world. Here we tested for a key signature of this proposed architecture by: 1) identifying regions encoding high-level object meaning and 2) testing whether inter-subject similarity in these regions tracks object meaning. During fMRI, subjects viewed objects created from combinations of shapes (apples, leaves, roses) and colors (red, green, pink, yellow, blue) while performing an unrelated target-detection task. For each object set, we created a semantic-similarity model based on the co-occurrence frequencies of color-object combinations (e.g., "yellow apple") from a large lexical corpus (Fig-1A). These models were orthogonal to perceptual models for shape or color alone. Using representational similarity analysis, we found that perirhinal cortex was the only region that significantly correlated with the semantic-similarity model (p < 0.01; Fig-1B). Next, we hyper-aligned each subject's data to a common, high-dimensional space in a series of anatomical regions. We predicted that in visual-semantic regions, inter-subject similarity would be related to the semantic typicality of the objects. Indeed, we found that perirhinal cortex was unique in containing population codes for which inter-subject similarity increased with object typicality (Fig-1C & D).
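The representational similarity analysis described above can be illustrated with a minimal sketch. The idea is to build a representational dissimilarity matrix (RDM) from the multivoxel response patterns in a region, build a model RDM (here, a placeholder standing in for the corpus-derived semantic-similarity model), and correlate their off-diagonal entries. All function names, array shapes, and data below are hypothetical; this is not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(patterns):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between condition patterns (rows = conditions, cols = voxels).
    return 1 - np.corrcoef(patterns)

def rsa_correlation(neural_patterns, model_rdm):
    # Compare the neural RDM to a model RDM via Spearman rank
    # correlation over the upper triangle (off-diagonal entries only),
    # the standard second-order comparison in RSA.
    neural_rdm = rdm(neural_patterns)
    iu = np.triu_indices_from(neural_rdm, k=1)
    rho, p = spearmanr(neural_rdm[iu], model_rdm[iu])
    return rho, p

# Toy example with synthetic data: 15 "objects" x 50 "voxels".
rng = np.random.default_rng(0)
patterns = rng.standard_normal((15, 50))
# Placeholder model RDM; in the study this would come from
# color-object co-occurrence frequencies in a lexical corpus.
model = rdm(rng.standard_normal((15, 50)))
rho, p = rsa_correlation(patterns, model)
```

In the study, the same comparison would be run per region, with the semantic model's significance assessed against models of shape or color alone.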
Our results suggest that high-level regions at the interface of vision and memory encode combinatorial information that underlies real-world knowledge of visual objects and may instantiate a neural "common ground" for object meaning across individuals.
Meeting abstract presented at VSS 2016