September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Lateral occipitotemporal cortex's selectivity to small artifacts reflects multi-modal representation of shape-grasp mapping elements
Author Affiliations
  • Wei Wu
    State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University
  • Xiaoying Wang
    State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University
  • Chenxi He
    State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University
  • Yanchao Bi
    State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University
Journal of Vision August 2017, Vol.17, 279. doi:https://doi.org/10.1167/17.10.279
Citation: Wei Wu, Xiaoying Wang, Chenxi He, Yanchao Bi; Lateral occipitotemporal cortex's selectivity to small artifacts reflects multi-modal representation of shape-grasp mapping elements. Journal of Vision 2017;17(10):279. https://doi.org/10.1167/17.10.279.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Recent studies have reported intriguingly similar activation preferences for small artifacts relative to other object categories in the left lateral occipitotemporal cortex (lLOTC) across various modalities and populations (see reviews in Ricciardi et al., 2014; Bi et al., 2016). What drives this multimodal tool selectivity is unclear. Our study investigated the properties potentially underlying the multimodal small-artifact selectivity in the lLOTC using representational similarity analysis (RSA). BOLD-fMRI responses to 33 small artifacts were collected from both sighted and congenitally blind individuals while they performed size judgment tasks on auditory object names or pictures. Similarity ratings on the overall shape, the shape of the object parts people typically interact with (i.e., the parts grasped for typical use), the manner of manipulation, and the manner of grasping were collected to build four behavioral representational similarity matrices (RSMs). RSA identified significant correlations between the neural RSM of the functionally defined lLOTC and the grasping-manner and grasp-part-shape RSMs across all experiments (rs > 0.109; ps < 0.012). Furthermore, the shared variance of these two variables, derived from principal component analysis, correlated significantly with the lLOTC neural RSM across all experiments (sighted auditory: r = 0.129, p < 0.01; sighted visual: r = 0.215, p < 10^-6; blind: r = 0.124, p < 0.01). The unique effects of each of these two variables, as well as the effects of overall shape and overall manipulation manner, were observed in the sighted visual experiment but not in the blind auditory experiment (rs < 0.07; ps > 0.127), i.e., they did not exhibit multimodal patterns. These results indicate that the representation of the shape element indicative of the manner of grasping best explains the multimodal representation of small artifacts in the lLOTC, highlighting the critical role of the interaction between visual and nonvisual object properties in the functional organization of higher-order visual cortex (Bi et al., 2016).
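For illustration, the following is a minimal Python sketch of the core RSA step described above: building a neural RSM from condition-by-voxel response patterns, vectorizing its lower triangle, and correlating it with a behavioral RSM. The variable names, array sizes, and random placeholder data are assumptions made for illustration only, not the authors' actual stimuli, region definitions, or analysis pipeline.

    # Illustrative sketch of a basic RSA correlation (not the authors' pipeline).
    # Inputs assumed: a 33 x V matrix of BOLD response patterns (one row per
    # small artifact) and a 33 x 33 behavioral similarity matrix, e.g., built
    # from grasp-part-shape ratings.
    import numpy as np
    from scipy.stats import spearmanr

    def neural_rsm(patterns: np.ndarray) -> np.ndarray:
        """Condition-by-condition similarity matrix (Pearson r across voxels)."""
        return np.corrcoef(patterns)

    def lower_triangle(rsm: np.ndarray) -> np.ndarray:
        """Vectorize the off-diagonal lower triangle of a similarity matrix."""
        i, j = np.tril_indices_from(rsm, k=-1)
        return rsm[i, j]

    def rsa_correlation(patterns: np.ndarray, behavioral_rsm: np.ndarray):
        """Spearman correlation between neural and behavioral RSM vectors."""
        return spearmanr(lower_triangle(neural_rsm(patterns)),
                         lower_triangle(behavioral_rsm))

    # Random placeholder data standing in for real measurements:
    rng = np.random.default_rng(0)
    patterns = rng.standard_normal((33, 500))                 # 33 artifacts x 500 voxels
    behavioral = np.corrcoef(rng.standard_normal((33, 10)))   # placeholder ratings RSM
    rho, p = rsa_correlation(patterns, behavioral)
    print(f"rho = {rho:.3f}, p = {p:.3f}")

The shared-variance analysis mentioned in the abstract would go one step further, for example by extracting the first principal component of the grasping-manner and grasp-part-shape RSM vectors and correlating that component with the neural RSM vector.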

Meeting abstract presented at VSS 2017
