September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Representational Similarity of Actions in the Human Brain
Author Affiliations
  • Ayse Saygin
    Department of Cognitive Science, University of California, San Diego, CA, USA
    Neurosciences Program, University of California, San Diego, CA, USA
  • Burcu Urgen
    Department of Cognitive Science, University of California, San Diego, CA, USA
    Department of Neuroscience, Universita degli Studi di Parma, Italy
  • Selen Pehlivan
    Department of Computer Science, TED University, Ankara, Turkey
Journal of Vision August 2017, Vol.17, 1268. doi:
Ayse Saygin, Burcu Urgen, Selen Pehlivan; Representational Similarity of Actions in the Human Brain. Journal of Vision 2017;17(10):1268.



In the primate brain, visual perception of actions is supported by a distributed network of regions in occipital, temporal, parietal, and premotor areas. The representational properties of each of the regions involved in visual action processing remain to be specified. Here, we investigated the representational content of these regions using fMRI with representational similarity analysis (RSA), along with computer-vision-based modeling of the stimuli. Participants viewed 2-second video clips of three agents performing eight different actions during fMRI scanning. We computed the representational dissimilarity matrices (RDMs) for each brain region of interest, and compared these with two different sets of computational model representations constructed from visual and semantic attributes. We found that different nodes of the action processing network have different representational properties. Posterior STS (pSTS), known to be a key visual area for processing body movements and actions, appears to represent high-level visual features such as movement kinematics. As expected from prior research and theory on mirror neurons, as well as computational models of biological motion perception and action recognition, representations became more abstract higher in the hierarchy; e.g., our results suggest inferior parietal cortex represents aspects such as action category, intention, and target of the action. Taken together with prior theory, empirical work, and computational modeling, we conclude that during visual processing of actions, pSTS pools information from earlier visual areas to compute and represent movement kinematics, which are then passed on to nodes of the action processing network in parietal and frontal regions coding higher-order, semantic aspects.
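The RSA pipeline described above — building a condition-by-condition dissimilarity matrix per region of interest and correlating it with model RDMs — can be sketched as follows. This is a generic illustration on synthetic data, not the authors' analysis code; the distance metric, comparison statistic, and all names are assumptions for the example.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def compute_rdm(patterns):
    """Build an RDM from condition-wise response patterns.

    patterns: array of shape (n_conditions, n_voxels), one row per
    condition (e.g., each agent-action video). Returns the condensed
    upper triangle of the dissimilarity matrix (1 - Pearson r here,
    a common though not the only choice).
    """
    return pdist(patterns, metric="correlation")

def compare_rdms(brain_rdm, model_rdm):
    """Rank-correlate a brain RDM with a model RDM (Spearman's rho),
    the usual statistic since RDMs need not be related linearly."""
    rho, _ = spearmanr(brain_rdm, model_rdm)
    return rho

# Toy example: 24 conditions (3 agents x 8 actions), 100 voxels.
rng = np.random.default_rng(0)
roi_patterns = rng.standard_normal((24, 100))  # synthetic ROI data
brain_rdm = compute_rdm(roi_patterns)          # 24*23/2 = 276 pairs

# A hypothetical model RDM, e.g. derived from kinematic or
# semantic features of the stimuli.
model_rdm = rng.random(brain_rdm.shape)
rho = compare_rdms(brain_rdm, model_rdm)
```

In a study like this one, a visual-feature model RDM and a semantic-attribute model RDM would each be compared against every ROI's brain RDM, and the pattern of fits across regions is what distinguishes, say, pSTS from inferior parietal cortex.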

Meeting abstract presented at VSS 2017

