August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract
Motion from structure
Author Affiliations
  • Benjamin Backus
    Graduate Center for Vision Research, SUNY College of Optometry
  • Baptiste Caziot
    Graduate Center for Vision Research, SUNY College of Optometry
Journal of Vision August 2012, Vol. 12, 774.
      Benjamin Backus, Baptiste Caziot; Motion from structure. Journal of Vision 2012;12(9):774.


      © ARVO (1962-2015); The Authors (2016-present)


Surfaces that have different disparities in a static stereogram appear to move relative to one another when the observer moves relative to the stereogram. How does a stimulus with no moving parts give rise to apparent motion? At one level of explanation, a "motion from structure" (MFS) inference occurs because, in a real scene, the absence of relative motion (e.g., dynamic occlusion) in the proximal stimulus requires that surfaces move relative to one another. What mechanism(s) are responsible for this inference? MFS looks smooth and is visible for minute head movements, suggesting that it may be supported by a dedicated mechanism that combines 2D image motion (including zero-velocity motion) with represented depth structure to estimate 3D object motion per se. Extra-retinal signals might play a role. We conducted experiments in which observers translated their heads (45 cm side-to-side, 0.5 Hz oscillation) and adjusted the speed (gain) of a position-yoked figure that had crossed disparity relative to a stationary background. Stimuli were dense random-dot stereograms (RDS) projected onto a screen at 200 cm (60 Hz per eye, field-sequential). The square was 46 cm wide at eye height, with the observer standing. For both of two observers, and across four disparities (8, 16, 24, and 32 arcmin, corresponding to 14, 26, 37, and 46 cm in front of the screen, respectively), motion gain settings (on-screen motion/head motion) were consistently close to 50% of the prediction from the geometry specified by binocular disparity. However, apparent depths averaged 83% of the depth specified by disparity, so gain settings were also less than predicted from apparent depth. Accordingly, when real stationary objects were positioned in front of the screen, they appeared to move against the head. Additional experiments presented stimuli against a blank background, or moving relative to a stationary head. No single model fitted all of the data, but several lawful principles emerged.
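The geometric prediction that the gain settings are compared against can be sketched as follows. This is a small-angle illustration only, not the authors' analysis code; the interocular distance (taken here as 6.3 cm) is not stated in the abstract and is an assumption chosen so that the computed depths approximate the reported 14, 26, 37, and 46 cm values.

```python
import math

def depth_from_disparity(disparity_arcmin, screen_cm=200.0, interocular_cm=6.3):
    """Depth in front of the screen (cm) specified by a crossed disparity.

    Small-angle stereo geometry: disparity (rad) ~= I * d / (D * (D - d)),
    solved for d. The interocular distance I = 6.3 cm is an assumption.
    """
    delta = math.radians(disparity_arcmin / 60.0)  # disparity in radians
    D = screen_cm
    return delta * D**2 / (interocular_cm + delta * D)

def predicted_gain(depth_cm, screen_cm=200.0):
    """Geometric motion gain (on-screen motion / head motion).

    A simulated point at distance D - d appears fixed in space only if its
    screen image translates opposite the head with gain d / (D - d).
    """
    return depth_cm / (screen_cm - depth_cm)

# The four disparities used in the experiment:
for arcmin in (8, 16, 24, 32):
    d = depth_from_disparity(arcmin)
    print(f"{arcmin:2d} arcmin -> depth {d:4.1f} cm, gain {predicted_gain(d):.3f}")
```

With these assumptions, the predicted gains grow with disparity; the abstract's result is that observers' settings fell near half of these geometric values.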

Meeting abstract presented at VSS 2012

