Ariana Familiar, Heath Matheson, Sharon Thompson-Schill; Representation of visual and motor object features in human cortex. Journal of Vision 2017;17(10):285. doi: https://doi.org/10.1167/17.10.285.
© ARVO (1962-2015); The Authors (2016-present)
To accomplish object recognition, we must remember the shared sensorimotor features of thousands of objects, as well as each object's unique combination of features. While theories differ on exactly how the brain does this, many agree that featural information is integrated in at least one cortical region, or "convergence zone", which acts as a semantic representation area linking object features of different information types. Moreover, it has been posited that the anterior temporal lobe (ATL) acts as a "hub" that associates object features across sensory and motor modalities: it is reciprocally connected to early modality-specific cortical regions, and patients with ATL damage show deficits in processing and remembering object information across input modalities (Patterson et al., 2007). Our lab recently found evidence that the left ATL encodes integrated shape and color information for objects uniquely defined by these features (fruits/vegetables; Coutanche & Thompson-Schill, 2014), suggesting that the ATL acts as a convergence zone for these visual object features. However, whether the ATL encodes integrated object features from different modalities had not been established. We used functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA) to examine whether the ATL acts as an area of convergence for object features across visual and motor modalities. In a whole-brain searchlight analysis, activity patterns during a memory retrieval task in a region within the left ATL successfully classified objects defined by unique combinations of visual (material) and motor (grip) features, but could not classify either constituent feature while generalizing over object identity. These results suggest that, in addition to being a convergence zone for visual object features, the left ATL also acts as an area of convergence for object information across visual and motor modalities.
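To give a concrete sense of the searchlight logic described above, the sketch below implements a minimal whole-brain searchlight classification in plain NumPy. This is not the authors' pipeline: the data are synthetic, the cube radius and nearest-centroid classifier with leave-one-out cross-validation are illustrative stand-ins (real analyses typically use spherical searchlights and toolboxes such as nilearn or PyMVPA), and all names and parameters here are assumptions.

```python
import numpy as np

def searchlight_accuracy(data, labels, radius=1):
    """Toy whole-brain searchlight (illustrative, not the published method).

    For each voxel, take the activity patterns from a small cube around it,
    classify samples with a nearest-centroid rule under leave-one-out
    cross-validation, and store the accuracy at that voxel.

    data   : array (nx, ny, nz, n_samples) of voxel activations
    labels : array (n_samples,) of integer class labels
    """
    nx, ny, nz, n = data.shape
    classes = np.unique(labels)
    acc = np.zeros((nx, ny, nz))
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                xs = slice(max(x - radius, 0), x + radius + 1)
                ys = slice(max(y - radius, 0), y + radius + 1)
                zs = slice(max(z - radius, 0), z + radius + 1)
                # one feature vector per sample from the local neighborhood
                patch = data[xs, ys, zs].reshape(-1, n).T  # (n_samples, n_voxels)
                correct = 0
                for i in range(n):  # leave-one-out cross-validation
                    train = np.arange(n) != i
                    cents = np.stack([patch[train & (labels == c)].mean(axis=0)
                                      for c in classes])
                    pred = classes[np.argmin(
                        np.linalg.norm(cents - patch[i], axis=1))]
                    correct += pred == labels[i]
                acc[x, y, z] = correct / n
    return acc

# toy demo: 6x6x6 volume, 12 samples, 2 classes, with a class effect
# confined to one sub-region (hypothetical data, fixed seed)
rng = np.random.default_rng(0)
labels = np.array([0, 1] * 6)
data = rng.normal(size=(6, 6, 6, 12))
data[:3, :3, :3, labels == 1] += 2.0
acc = searchlight_accuracy(data, labels)
```

Only searchlights overlapping the signal-bearing corner should classify well; elsewhere accuracy hovers around chance, which is how the published analysis localizes informative regions such as the left ATL.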
Meeting abstract presented at VSS 2017