September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Low and high level features explain neural response tuning during action observation
Author Affiliations
  • Leyla Tarhan
    Department of Psychology, Harvard University
  • Talia Konkle
    Department of Psychology, Harvard University
Journal of Vision August 2017, Vol.17, 989. doi:https://doi.org/10.1167/17.10.989
      Leyla Tarhan, Talia Konkle; Low and high level features explain neural response tuning during action observation. Journal of Vision 2017;17(10):989. https://doi.org/10.1167/17.10.989.

Abstract

Among humans' cognitive faculties, the ability to process others' actions is essential. We can recognize the meaning behind running, eating, and finer movements like tool use. How does the visual system process and transform information about actions? To explore this question, we collected 120 action videos spanning a range of everyday activities sampled from the American Time Use Survey. Next, we used behavioral ratings and computational approaches to measure how these videos vary within three distinct feature spaces: visual shape features ("gist"), kinematic features (e.g., body parts involved), and intentional features (e.g., used to communicate). Finally, using fMRI, we obtained neural responses to each of these 2.5-s action clips in 9 participants. To analyze the structure in these neural responses, we used an encoding-model approach (Mitchell et al., 2008) to fit tuning models for each voxel along each feature space and assess how well each model predicts responses to individual actions. We found that a large proportion of cortex along the intraparietal sulcus and occipitotemporal surface was moderately well fit by all three models (median r = 0.23-0.31). In a leave-two-out validation procedure, all three models accurately discriminated between pairs of held-out action videos in ventral and dorsal stream sectors (65-80% accuracy, SEM = 1.1%-2.6%). In addition, we observed a significant shift in classification accuracy between early visual cortex (EVC) and higher-level visual cortex: the gist model performed best in EVC, whereas the high-level models outperformed gist in occipitotemporal and parietal regions. These results demonstrate that action representations can be successfully predicted using an encoding-model approach. More broadly, the pattern of fits for the different feature models reveals that visual information is transformed from low- to high-level representational spaces in the course of action processing. These findings begin to formalize the progression of the kinds of action information processed along the visual stream.

Meeting abstract presented at VSS 2017
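
For readers who want a concrete picture of the analysis, below is a minimal sketch of a voxel-wise encoding model with leave-two-out classification, in the spirit of Mitchell et al. (2008) as cited in the abstract. The variable names, array sizes, random data, and the use of ridge regression are illustrative assumptions, not the authors' pipeline or code.

```python
# Minimal sketch: fit per-voxel linear tuning weights from one feature space,
# then run leave-two-out classification by matching predicted to actual
# voxel patterns. All names, sizes, and data here are hypothetical.
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_videos, n_features, n_voxels = 40, 20, 500      # small hypothetical sizes for a quick demo
X = rng.normal(size=(n_videos, n_features))       # feature values per video (e.g., gist features)
Y = rng.normal(size=(n_videos, n_voxels))         # voxel responses per video

def leave_two_out_accuracy(X, Y, alpha=1.0):
    """Hold out two videos, fit voxel-wise tuning weights on the rest,
    predict the held-out patterns, and score whether the correct pairing
    of predicted and actual patterns beats the swapped pairing."""
    pairs = list(combinations(range(len(X)), 2))
    correct = 0
    for i, j in pairs:
        train = np.setdiff1d(np.arange(len(X)), [i, j])
        model = Ridge(alpha=alpha).fit(X[train], Y[train])   # weights: features -> voxels
        pred_i, pred_j = model.predict(X[[i, j]])
        r = lambda a, b: np.corrcoef(a, b)[0, 1]             # pattern similarity
        matched    = r(pred_i, Y[i]) + r(pred_j, Y[j])
        mismatched = r(pred_i, Y[j]) + r(pred_j, Y[i])
        correct += matched > mismatched
    return correct / len(pairs)

print(f"leave-two-out accuracy: {leave_two_out_accuracy(X, Y):.2f}")
```

With random data this hovers around chance (50%); the abstract's 65-80% accuracies reflect real structure shared between the feature spaces and the measured neural responses.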
