Daniel Leeds, David Shutov; Semantic object grouping in the visual cortex seen through MVPA. Journal of Vision 2016;16(12):504. doi: https://doi.org/10.1167/16.12.504.
© ARVO (1962-2015); The Authors (2016-present)
Visual object perception recruits a network of cortical regions to extract diverse semantic properties from layers of visual information. Regions selective for a few specific object classes, such as faces, places, and handwriting, are well established. However, the cortical encoding of broader semantic properties remains a subject of ongoing study. Here we use an fMRI voxel searchlight method to compare local cortical responses to 60 visual objects with 218 semantic groupings of the same 60 objects. The semantic groupings are drawn from Palatucci et al. (2009) and capture information about object action, identity, typical location, tactile feel, etc. Cortical data are drawn from an earlier study by Leeds et al. (2013). Using representational similarity analysis, we identify a division of labor in semantic representation among mid-level stages of the ventral object perception pathway, particularly involving lateral occipital, fusiform, and inferior temporal cortex. We find that each region is associated with a subset of multiple semantic properties. Identity properties such as "is it a mammal?" or "is it a vehicle?" show particularly strong cortical matches, above other properties such as emotion ("is it friendly?"). Our observed semantic-neural matches partially overlap with those reported earlier by Sudre (2012) in MEG, which explored the same set of semantic properties. Differences between the two sets of findings may stem from alternate neural coding strategies at different spatial scales, providing complementary perspectives on semantic object groupings in mid-level visual regions of the cortex.
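The representational similarity analysis described above can be sketched in outline: within each searchlight, pairwise dissimilarities among the 60 object responses form a neural representational dissimilarity matrix (RDM), which is then compared against an RDM built from the 218 binary semantic groupings. The sketch below uses random placeholder arrays (`voxel_patterns`, `semantic_features` are hypothetical names, not from the study) and standard SciPy distance and rank-correlation routines; it is a minimal illustration of the technique, not the authors' actual pipeline.

```python
# Minimal RSA sketch with placeholder data, assuming:
#   voxel_patterns:    60 objects x N voxels within one searchlight
#   semantic_features: 60 objects x 218 binary semantic groupings
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
voxel_patterns = rng.standard_normal((60, 100))    # placeholder fMRI patterns
semantic_features = rng.integers(0, 2, (60, 218))  # placeholder groupings

# Condensed pairwise dissimilarities: 60*59/2 = 1770 object pairs each
neural_rdm = pdist(voxel_patterns, metric="correlation")
semantic_rdm = pdist(semantic_features, metric="hamming")

# Rank correlation between the two dissimilarity structures measures
# how well this searchlight's geometry matches the semantic grouping
rho, p = spearmanr(neural_rdm, semantic_rdm)
```

Repeating this comparison at every searchlight location yields a cortical map of semantic-neural match strength, and restricting `semantic_features` to one property family (e.g., identity vs. emotion) lets the families be compared region by region.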
Meeting abstract presented at VSS 2016