Vision Sciences Society Annual Meeting Abstract | August 2014, Volume 14, Issue 10
Neural coding of point-light dynamic objects
Author Affiliations
  • John A. Pyles
    Center for the Neural Basis of Cognition, Carnegie Mellon University
  • Michael J. Tarr
    Center for the Neural Basis of Cognition, Carnegie Mellon University
Journal of Vision August 2014, Vol. 14, 1303.
We investigated the role of form information in the neural representation of dynamic objects. Previously, we demonstrated that a large area of higher-level visual cortex (including LOC and hMT+) is recruited during the perception of dynamic objects (Pyles & Tarr, 2013). Moreover, multi-voxel pattern analysis (MVPA) revealed that many regions within higher-level and retinotopic visual cortex encode information about dynamic objects that is invariant across changes in viewpoint, articulation, and size. Our present work extends these findings by investigating the role of form information in the coding of dynamic objects. In two fMRI sessions, subjects passively viewed short animations of novel, articulating, dynamic objects: in one session the form of the objects was clearly visible; in the other, form information was reduced using point-light animations in which only the object's joints were visible. In both conditions subjects saw 80 different example animations (once each) of three dynamic objects that varied across viewpoint, size, and motion path. We used an SVM pattern classifier to identify the objects across the 80 animations, both in independently identified regions of interest and in whole-brain searchlights. Despite sparse form information, the point-light condition showed above-chance classification for object identity across multiple regions of both higher-level and retinotopic visual cortex. We also examined whether training on the point-light data was sufficient to support identity classification in the form-visible condition, and vice versa. For both analyses, we observed above-chance classification across similar regions of visual cortex. Thus, viewing form-visible animations and form-reduced point-light animations of the same objects yields similar patterns of BOLD responses.
The ability to decode dynamic object identity from point-light animations, and to classify across point-light and form-visible stimuli, suggests that invariant kinematic information about object identity is encoded within a surprisingly wide set of regions within visual cortex.
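The cross-condition decoding analysis described above can be sketched in scikit-learn. This is a minimal illustration, not the authors' pipeline: the voxel patterns are simulated (a shared identity-specific pattern per object plus condition-specific noise), and all names, array sizes, and noise levels are assumptions chosen only to mirror the design (3 objects, 80 animations per object, two sessions).

```python
# Hypothetical sketch of cross-condition SVM decoding (train on the
# form-visible session, test on the point-light session) using
# SIMULATED voxel patterns -- not real fMRI data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_objects, n_animations, n_voxels = 3, 80, 200  # assumed voxel count

# One identity-specific response pattern per object, shared across
# conditions; each animation adds independent noise.
identity_patterns = rng.normal(0.0, 1.0, (n_objects, n_voxels))

def simulate_session(noise_sd):
    """Return (patterns, labels) for 80 animations of each object."""
    X, y = [], []
    for obj in range(n_objects):
        for _ in range(n_animations):
            X.append(identity_patterns[obj] + rng.normal(0.0, noise_sd, n_voxels))
            y.append(obj)
    return np.array(X), np.array(y)

X_form, y_form = simulate_session(noise_sd=2.0)  # form-visible session
X_pl, y_pl = simulate_session(noise_sd=3.0)      # point-light session

# Fit the scaler and linear SVM on form-visible data only, then
# cross-decode object identity from the point-light patterns.
scaler = StandardScaler().fit(X_form)
clf = LinearSVC(max_iter=5000).fit(scaler.transform(X_form), y_form)
acc = clf.score(scaler.transform(X_pl), y_pl)
print(f"cross-condition accuracy: {acc:.2f} (chance = {1/3:.2f})")
```

In this sketch, above-chance cross-condition accuracy arises because the identity-specific pattern is shared between sessions, which is the logic the abstract uses to argue that form-visible and point-light animations evoke similar identity codes.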

Meeting abstract presented at VSS 2014

