December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Detecting Object Motion During Self Motion
Author Affiliations
  • Hope Lutwak
    New York University
  • Kathryn Bonnen
    New York University
    Indiana University
  • Eero Simoncelli
    New York University
    Flatiron Institute
Journal of Vision December 2022, Vol.22, 3235. doi:
Hope Lutwak, Kathryn Bonnen, Eero Simoncelli; Detecting Object Motion During Self Motion. Journal of Vision 2022;22(14):3235.

© ARVO (1962–2015); The Authors (2016–present)

As we move through the world, the changing pattern of light projected on our eyes is complex and varied, yet somehow we are able to distinguish between moving objects and stationary scenery. When an eye translates and rotates within a rigid 3D world, the velocity at each location on the retina is constrained to a line segment in the space of all 2D velocities (Longuet-Higgins & Prazdny, 1980). The slope and intercept of this segment are determined by the eye’s translational and rotational movement, and the position along the segment is determined by the depth of the scene at that location. Since this line segment describes velocities for a rigid world, velocities off the segment must correspond to independently moving objects. We tested the hypothesis that humans make use of these constraints: partially inferring their 3D self-motion from the global pattern of retinal velocities, and using deviations of local velocities from the resulting constraint lines to detect independently moving objects. Stimuli consisted of a jittered array of drifting plaids (two 4 cycle/degree gratings within a 0.5 degree diameter aperture) on a 35×20 degree screen. We simulated the retinal velocities consistent with an observer moving forward and upward while fixating ahead, for two depth maps: a flat ground plane, and a radar-measured outdoor scene (Burge & Geisler, 2011). Participants had to detect a perturbation of the velocity field at one of two predetermined locations. We compared sensitivities to changes in velocity against those for a control stimulus, a reduced version of the full stimulus containing only 8 surrounding patches. We found that sensitivities did not significantly differ between the full stimulus and the control. These results suggest that participants’ sensitivities to velocity deviations in our task were shaped not by rigid 3D world constraints but by local retinal velocities.
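The constraint-line geometry the abstract invokes can be made concrete. In the standard motion-field formulation of Longuet-Higgins & Prazdny (1980), the retinal velocity at an image point is the sum of a rotation-driven term (independent of depth) and a translation-driven term scaled by inverse depth, so varying the depth at a fixed retinal location sweeps the velocity along a line. The sketch below illustrates this; the function name, sign conventions, and parameter values are illustrative assumptions, not the authors' stimulus code.

```python
import numpy as np

def motion_field(x, y, T, Omega, Z, f=1.0):
    """Retinal (image) velocity at point (x, y) for an eye translating with
    velocity T and rotating with angular velocity Omega, viewing a rigid
    point at depth Z, with focal length f.

    v = (1/Z) * A(x, y) @ T  +  B(x, y) @ Omega
    The translational term scales with inverse depth; the rotational term
    does not. Sign conventions vary across texts; this follows one common
    choice and is an illustrative assumption.
    """
    A = np.array([[-f, 0.0, x],
                  [0.0, -f, y]])               # translational flow basis
    B = np.array([[x * y / f, -(f + x**2 / f), y],
                  [f + y**2 / f, -x * y / f, -x]])  # rotational flow basis
    return A @ np.asarray(T, float) / Z + B @ np.asarray(Omega, float)

# At a fixed retinal location, varying only the depth Z moves the velocity
# along a line segment (direction A @ T, offset B @ Omega). Velocities off
# that segment cannot arise from the rigid world and therefore signal an
# independently moving object.
T = (0.0, 0.2, 1.0)          # hypothetical forward-and-upward translation
Omega = (0.0, 0.01, 0.0)     # hypothetical small rotation (e.g., fixation)
vels = [motion_field(0.3, -0.2, T, Omega, Z) for Z in (2.0, 5.0, 50.0)]
d1, d2 = vels[1] - vels[0], vels[2] - vels[0]
collinear = abs(d1[0] * d2[1] - d1[1] * d2[0]) < 1e-12  # 2D cross product
```

Because all depth-dependent variation lies along the single direction `A @ T`, the cross product of any two difference vectors is zero, confirming the collinearity that defines the constraint segment.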

