Vision Sciences Society Annual Meeting Abstract | June 2006
Predicting point-light actions in real-time
Author Affiliations
  • Markus Graf
    Max Planck Institute for Human Cognitive and Brain Sciences
  • Bianca Reitzner
    Max Planck Institute for Human Cognitive and Brain Sciences
  • Martin Giese
    University Clinic Tübingen, Laboratory for Action Representation and Learning
  • Antonino Casile
    University Clinic Tübingen, Laboratory for Action Representation and Learning
  • Wolfgang Prinz
Max Planck Institute for Human Cognitive and Brain Sciences
Journal of Vision June 2006, Vol.6, 793. doi:https://doi.org/10.1167/6.6.793
Abstract

Evidence has accumulated for a mirror system in humans which simulates actions of conspecifics (Wilson & Knoblich, 2005). One likely purpose of such a simulation system is to support action prediction. We focused on the time-course of action prediction, investigating whether the prediction of actions involves a real-time simulation process.

We motion-captured a number of human actions and rendered them as point-light action sequences. In the experiments, we presented brief videos of human actions, followed by an occluder and a static test stimulus. Both the occluder duration (SOA of 100, 400, or 700 ms) and the distance of the test stimulus from the endpoint of the action sequence (corresponding to 100, 400, or 700 ms) were varied independently. Subjects had to judge whether the test stimulus depicted a continuation of the action in the same orientation, or whether it was presented in a different orientation in depth from the preceding action sequence.
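
For illustration only, a minimal sketch of how the fully crossed design described above (occluder SOA x test-stimulus distance x test orientation) could be generated; the variable names and repetition count are hypothetical and not taken from the study.

    # Minimal sketch (not the authors' code) of the 3 x 3 x 2 factorial design:
    # occluder duration (SOA) x test-frame distance x test orientation.
    # All names and the repetition count are hypothetical.
    import itertools
    import random

    SOAS_MS = [100, 400, 700]             # occluder durations
    DISTANCES_MS = [100, 400, 700]        # distance of the test frame from the end of the sequence
    ORIENTATIONS = ["same", "different"]  # orientation in depth of the test stimulus

    def make_trials(n_repeats=10, seed=0):
        """Return a randomized trial list of (soa_ms, distance_ms, orientation) tuples."""
        conditions = list(itertools.product(SOAS_MS, DISTANCES_MS, ORIENTATIONS))
        trials = conditions * n_repeats
        random.Random(seed).shuffle(trials)
        return trials

    if __name__ == "__main__":
        trials = make_trials()
        print(len(trials), "trials; first trial:", trials[0])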

Prediction accuracy was best when SOA and distance to the endpoint corresponded, i.e., when the test image continued the sequence by an amount matching the occluder duration. This pattern of results was abolished when the sequences and test images were inverted (flipped around the horizontal axis); in that case, performance simply deteriorated with increasing distance from the end of the sequence. Overall, our findings suggest that action prediction involves a real-time simulation process. This process can break down when actions are presented under viewing conditions with which we have little experience.

Graf, M., Reitzner, B., Giese, M., Casile, A., & Prinz, W. (2006). Predicting point-light actions in real-time [Abstract]. Journal of Vision, 6(6):793, 793a, http://journalofvision.org/6/6/793/, doi:10.1167/6.6.793.