Markus Graf, Bianca Reitzner, Martin Giese, Antonino Casile, Wolfgang Prinz; Predicting point-light actions in real-time. Journal of Vision 2006;6(6):793. doi: https://doi.org/10.1167/6.6.793.
© ARVO (1962-2015); The Authors (2016-present)
Evidence has accumulated for a mirror system in humans that simulates the actions of conspecifics (Wilson & Knoblich, 2005). One likely purpose of such a simulation system is to support action prediction. We focused on the time course of action prediction, investigating whether predicting actions involves a real-time simulation process.
We motion-captured a number of human actions and rendered them as point-light action sequences. In the experiments, we presented brief videos of human actions, followed by an occluder and a static test stimulus. Both the occluder duration (SOA of 100, 400, or 700 ms) and the distance of the test stimulus from the endpoint of the action sequence (corresponding to 100, 400, or 700 ms) were varied independently. Subjects had to judge whether the test stimulus depicted a continuation of the action in the same orientation, or whether it was presented in a different orientation in depth than the preceding action sequence.
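The factorial structure of this design can be sketched in a few lines; the timings and the same/different judgment come from the description above, while the variable names, trial structure, and cell counts are purely illustrative:

```python
import itertools

# Sketch of the design described above: occluder duration (SOA) and
# test-stimulus distance from the action endpoint are crossed
# independently; each cell contains same- and different-orientation
# test stimuli. Names and structure are hypothetical.
SOAS_MS = (100, 400, 700)        # occluder durations
DISTANCES_MS = (100, 400, 700)   # distance of test stimulus from endpoint

def build_conditions():
    """Cross SOA with endpoint distance and orientation condition."""
    return [
        {"soa_ms": soa, "distance_ms": dist, "orientation": ori}
        for soa, dist in itertools.product(SOAS_MS, DISTANCES_MS)
        for ori in ("same", "different")
    ]

conditions = build_conditions()
print(len(conditions))  # 3 SOAs x 3 distances x 2 orientations = 18 cells
```

Crossing the two timing factors independently is what lets the analysis separate occluder duration from continuation distance.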
Prediction accuracy was best when SOA and distance to the endpoint corresponded, i.e., when the test image was the continuation of the sequence that matched the occluder duration. This pattern of results was abolished when the sequences and test images were inverted (flipped around the horizontal axis): in this case, performance simply deteriorated with increasing distance from the end of the sequence. Overall, our findings suggest that action prediction involves a real-time simulation process. This process can break down when actions are presented under viewing conditions with which we have little experience.
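The logic of the real-time simulation account can be made explicit with a toy illustration (this is not the authors' model, just an assumed sketch): if an internal simulation advances in lockstep with real time during occlusion, the simulated action state drifts from the test stimulus by the difference between continuation distance and occluder duration, so accuracy should peak where the two coincide:

```python
# Toy sketch of the real-time simulation prediction (hypothetical, not
# the authors' model): during occlusion the internal simulation advances
# by the SOA, so the temporal mismatch with the test stimulus grows with
# |distance - SOA|, and performance should be best when it is zero.
def predicted_mismatch_ms(soa_ms: int, distance_ms: int) -> int:
    """Temporal mismatch between simulated state and test stimulus."""
    return abs(distance_ms - soa_ms)

for soa in (100, 400, 700):
    best = min((100, 400, 700), key=lambda d: predicted_mismatch_ms(soa, d))
    print(soa, best)  # mismatch is minimal when distance equals SOA
```

On this sketch, inverted displays would correspond to the simulation failing to engage, leaving only a monotonic decline with distance, as observed.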