Vision Sciences Society Annual Meeting Abstract  |  September 2018
Journal of Vision, Volume 18, Issue 10 (Open Access)
Decoding identity and action properties of tools for viewing and pantomiming
Author Affiliations
  • Stephanie Rossit
    School of Psychology, University of East Anglia, Norwich, UK
  • Diana Tonin
    School of Psychology, University of East Anglia, Norwich, UK
  • Fraser Smith
    School of Psychology, University of East Anglia, Norwich, UK
Journal of Vision September 2018, Vol.18, 426. doi:10.1167/18.10.426
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

In everyday life we frequently encounter, manipulate and use many different tools. Several neuroimaging studies have identified a network of fronto-parietal and occipito-temporal regions that is consistently activated when viewing, imagining and pantomiming tool actions. However, it remains unclear which properties are represented within each region and how these representations overlap or change with the task. Here we used multivoxel pattern analysis (MVPA) to investigate the representation of tool identity and action properties during tool viewing and tool-use pantomiming. In separate runs, participants (N = 18) viewed pictures of tools (while performing a 1-back repetition-detection task) and pantomimed tool-use actions in response to tool names. We used familiar tool categories that varied along two action properties: hand grip (power vs. precision) and hand movement (squeeze vs. rotation). In addition, separate localizer runs were used to define regions of interest for each participant. For both viewing and pantomiming, we found reliable tool-identity decoding in lateral occipitotemporal cortex (LOTC), posterior middle temporal gyrus (pMTG), supramarginal gyrus (SMG) and intraparietal sulcus (IPS). Grip type was significantly decoded in LOTC, tool-selective IPS and dorsal premotor (PMd) cortex for both tasks. Movement type was significantly decoded for both tasks in LOTC, pMTG, IPS, SMG, ventral and dorsal premotor cortices, and, strikingly, even in primary motor and somatosensory cortices. These results suggest that areas of both visual streams (LOTC, IPS) encode information about the identity and action properties of tools, in line with claims that viewing tools automatically evokes motor-related representations associated with their use. Finally, cross-task decoding was found in SMG for tool identity and in PMd for grip type, suggesting that these regions contain abstract action representations that are independent of task.
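To illustrate the general logic of ROI-based MVPA decoding with leave-one-run-out cross-validation, the sketch below trains a linear classifier to decode a binary action property (here, grip type) from voxel patterns. All of it is hypothetical: the data are synthetic stand-ins for fMRI response patterns, and the grip labels, signal model, and parameters are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal MVPA decoding sketch (illustrative only; synthetic data, not fMRI).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_runs, n_tools, n_voxels = 8, 4, 100
# Hypothetical grip label per tool: 0 = power grip, 1 = precision grip.
grip = np.array([0, 0, 1, 1])

# Simulate one ROI pattern per tool per run: a grip-specific template
# plus independent noise (a crude stand-in for single-run beta estimates).
templates = rng.normal(size=(2, n_voxels))       # one template per grip type
X = np.vstack([
    templates[grip[t]] + rng.normal(scale=2.0, size=n_voxels)
    for _ in range(n_runs) for t in range(n_tools)
])
y = np.tile(grip, n_runs)                        # grip label for each pattern
runs = np.repeat(np.arange(n_runs), n_tools)     # run label for each pattern

# Leave-one-run-out cross-validation: train on 7 runs, test on the held-out
# run, so train and test patterns never come from the same run.
clf = LinearSVC(C=1.0)
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Cross-task decoding follows the same idea, except the classifier is trained on patterns from one task (e.g. viewing) and tested on patterns from the other (e.g. pantomiming); above-chance transfer implies a representation shared across tasks.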

Meeting abstract presented at VSS 2018
