September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2019
Hierarchical motion structure is employed by humans during visual perception
Author Affiliations & Notes
  • Johannes Bill
    Harvard University
  • Hrag Pailian
    Harvard University
  • Samuel J Gershman
    Harvard University
  • Jan Drugowitsch
    Harvard University
Journal of Vision September 2019, Vol.19, 282. doi:https://doi.org/10.1167/19.10.282

      Johannes Bill, Hrag Pailian, Samuel J Gershman, Jan Drugowitsch; Hierarchical motion structure is employed by humans during visual perception. Journal of Vision 2019;19(10):282. https://doi.org/10.1167/19.10.282.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Making sense of the hierarchical arrangement of form and motion is central to visual scene perception. For example, while driving, other vehicles’ locations must be anticipated from the traffic flow even when they are temporarily occluded. Despite its ubiquity in everyday reasoning, surprisingly little is known about how exactly humans and animals employ motion structure knowledge when perceiving dynamic scenes. To investigate this question, we propose a formal framework for characterizing structured motion and generating structured motion stimuli, which supports a wide range of hierarchically arranged real-world motion relations among stimulus features. A key benefit is that the joint distribution of generated stimulus trajectories is analytically tractable, which allowed us to compare human performance to that of ideal observers. To do so, we first introduced structured motion into the well-established multiple object tracking task. We found that humans performed better in conditions with structured object motion than with independent object motion, indicating that they benefited from the motion structure. A Bayesian observer model furthermore revealed that the observed performance gain is not due to the stimulus itself becoming simpler, but due to active use of motion structure knowledge during inference. A second experiment, in which trajectories of occluded objects had to be predicted from the remaining visible objects, provided fine-grained insight into which structure human predictions relied on in the face of uncertainty: Bayesian model comparison suggests that humans employed the correct or close-to-correct motion structure, even for deep motion hierarchies. Overall, we demonstrated, to our knowledge for the first time, that humans can make use of hierarchical motion structure when perceiving dynamic scenes, and that they flexibly employ close-to-optimal motion priors.
Our proposed formal framework is compatible with existing neural network models of visual tracking, and can thus facilitate the theory-driven design of electrophysiology experiments on motion representation along the visual pathway.
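The generative idea described above, object motion composed hierarchically from shared and individual motion sources, can be sketched in a few lines. The following toy simulation is illustrative only and is not the authors' published framework or parameterization: it assumes a minimal two-level hierarchy in one dimension, with one group-level velocity source shared by all objects plus an individual source per object, each modeled as an Ornstein-Uhlenbeck process; all function and parameter names are invented for this sketch.

```python
import math
import random

def ou_step(v, dt, tau, sigma, rng):
    """One Euler-Maruyama step of an Ornstein-Uhlenbeck process:
    dv = -(v / tau) dt + sigma dW."""
    return v - (v / tau) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)

def simulate(n_objects=3, n_steps=200, dt=0.01, tau=0.5,
             sigma_group=2.0, sigma_ind=0.5, seed=0):
    """Two-level motion hierarchy: each object's velocity is the sum of
    a shared group source and its own individual source (both OU).
    Returns a list of per-step object position lists."""
    rng = random.Random(seed)
    v_group = 0.0                    # shared motion source
    v_ind = [0.0] * n_objects        # per-object motion sources
    x = [0.0] * n_objects            # object positions
    traj = []
    for _ in range(n_steps):
        v_group = ou_step(v_group, dt, tau, sigma_group, rng)
        v_ind = [ou_step(v, dt, tau, sigma_ind, rng) for v in v_ind]
        # Each object's velocity is the hierarchical sum of its sources.
        x = [xi + (v_group + vi) * dt for xi, vi in zip(x, v_ind)]
        traj.append(list(x))
    return traj

trajectories = simulate()
```

Because the dynamics are linear-Gaussian, the joint distribution over trajectories generated this way stays analytically tractable, which is what permits comparison against an ideal (Kalman-filter-style) observer; with `sigma_group` large relative to `sigma_ind`, the objects move in a visibly correlated, group-like fashion.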
