September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Decoding observed actions at the subordinate, basic and superordinate level
Author Affiliations & Notes
  • Tonghe Zhuang
    Chair of Cognitive Neuroscience, Institute of Psychology, University of Regensburg, Germany
  • Angelika Lingnau
    Chair of Cognitive Neuroscience, Institute of Psychology, University of Regensburg, Germany
  • Footnotes
    Acknowledgements  This project was supported by the German Research Foundation. Tonghe Zhuang was funded by a PhD stipend from the Chinese Scholarship Council.
Journal of Vision September 2021, Vol. 21, 2043. doi: https://doi.org/10.1167/jov.21.9.2043
Citation: Tonghe Zhuang, Angelika Lingnau; Decoding observed actions at the subordinate, basic and superordinate level. Journal of Vision 2021;21(9):2043. https://doi.org/10.1167/jov.21.9.2043.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Similar to objects, actions can be described at different hierarchical levels, ranging from very broad (e.g., locomotion) to very specific (e.g., breaststroke) information. Here we aimed to determine distinct representations of observed actions at three different levels of abstraction (superordinate, basic, and subordinate) in the human brain. To address this question, we conducted an fMRI study (3T; voxel resolution 2.5 × 2.5 × 2.5 mm³, TR = 2 s, multiband sequence, acceleration factor 3) in which we presented N = 23 participants with static images of twelve different actions (six exemplars each) that were divided into three superordinate, six basic, and twelve subordinate action categories. Participants were instructed to view the images and to perform a category verification task during occasional catch trials, with an equal proportion of questions for each of the three taxonomic levels. Multivariate pattern analysis was carried out on t-values resulting from a general linear model (GLM) analysis, using a linear discriminant analysis (LDA) classifier and independent-exemplar cross-validation. To allow comparisons between the three taxonomic levels, decoding accuracy was normalized to account for the differences in chance level. An ROI-based analysis revealed that normalized decoding accuracy for the distinction between observed actions was higher at the subordinate than at the superordinate level in V1, the right superior parietal lobule (SPL), and right premotor cortex. By contrast, decoding accuracy in the right lateral occipitotemporal cortex (LOTC) and the left SPL was higher at the basic than at the superordinate level. Furthermore, a whole-brain searchlight analysis revealed peaks in the right inferior lateral occipital cortex (LOC), the left temporal occipital fusiform cortex, and the right superior LOC for the subordinate, basic, and superordinate levels, respectively. Together, our results are in line with the view that observed actions can be decoded at all three taxonomic levels in high-level visual cortex.
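The decoding pipeline described above (an LDA classifier applied to GLM t-value patterns, independent-exemplar cross-validation, and accuracy normalized for differing chance levels) can be illustrated with a minimal Python sketch using scikit-learn. The simulated data, the leave-one-exemplar-out split, and the chance-correction formula (accuracy − chance) / (1 − chance) are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch of the decoding analysis: LDA on simulated "t-value"
# patterns, leave-one-exemplar-out cross-validation, and accuracy
# rescaled so that 0 = chance and 1 = perfect. All names and the
# simulated data are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_actions, n_exemplars, n_voxels = 12, 6, 200  # 12 actions x 6 exemplars

# One pattern per action x exemplar, e.g. GLM t-values within an ROI.
X = rng.standard_normal((n_actions * n_exemplars, n_voxels))
y_subordinate = np.repeat(np.arange(n_actions), n_exemplars)  # 12 classes
y_basic = y_subordinate // 2                                  #  6 classes
y_superordinate = y_subordinate // 4                          #  3 classes
exemplar = np.tile(np.arange(n_exemplars), n_actions)         # CV groups

def normalized_accuracy(X, y, groups):
    # Train on five exemplars per action, test on the held-out one,
    # then rescale accuracy to make the three taxonomic levels comparable.
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                          groups=groups, cv=LeaveOneGroupOut()).mean()
    chance = 1.0 / len(np.unique(y))
    return (acc - chance) / (1.0 - chance)

for name, y in [("subordinate", y_subordinate), ("basic", y_basic),
                ("superordinate", y_superordinate)]:
    print(f"{name}: normalized accuracy = {normalized_accuracy(X, y, exemplar):.3f}")

On real data, X would hold one t-pattern per action and exemplar, and the grouping variable ensures that no exemplar contributes to both the training and the test set within a fold, which is the point of independent-exemplar cross-validation.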
