August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Statistical learning of movement
Author Affiliations
  • Joan Ongchoco
    Division of Social Science, Yale-NUS College
  • Stefan Uddenberg
    Department of Psychology, Yale University
  • Marvin Chun
    Department of Psychology, Yale University
Journal of Vision September 2016, Vol.16, 1079. doi:
Joan Ongchoco, Stefan Uddenberg, Marvin Chun; Statistical learning of movement. Journal of Vision 2016;16(12):1079.

© ARVO (1962-2015); The Authors (2016-present)

The environment is dynamic, but objects move in predictable and characteristic ways, whether a dancer in motion or a bee buzzing around in flight. Complicated sequences of movement are composed of simpler motion trajectory elements chained together. But how do we know where one trajectory ends and another begins, much as we parse words from continuous streams of speech? As a novel test of statistical learning (Fiser & Aslin, 2002), going beyond prior work that focused on gesture processing and biological motion (e.g., Roseberry et al., 2011), we explored the ability to parse meaningless movement sequences into simpler element trajectories. We developed an "alphabet" of basic movement elements that can be seamlessly strung together into more complicated motion sequences. Across three experiments—in which the continuity of motion was steadily increased—observers viewed a single dot as it moved along simple sequences of paths, and were later able to discriminate these sequences from novel ones shown at test. In Experiment 1, a single disc followed a trajectory away from the center and then disappeared; it then reappeared at the center of the video frame and traced another trajectory. In Experiment 2, the disc first traveled away from, and then back toward, the center of the video frame for each trajectory in the sequence, producing a percept of continuous motion around the center. In Experiment 3, the disc traveled around the screen without returning to any given point. With at least 12 participants in each experiment, mean recognition performance for repeated trajectories was significantly above chance, at 60%, 69%, and 58% for the three experiments, respectively. These results suggest that observers can automatically extract regularities from continuous movement—an ability that may underpin our capacity to learn more complex biological motions, as in sport or dance.

Meeting abstract presented at VSS 2016
