July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract | July 2013
Combining depth and motion to detect moving objects in an optic flow field
Author Affiliations
  • Constance Royden
    Department of Mathematics and Computer Science, College of the Holy Cross
  • Laura Webber
    Department of Mathematics and Computer Science, College of the Holy Cross
  • Sean Sannicandro
    Department of Mathematics and Computer Science, College of the Holy Cross
Journal of Vision July 2013, Vol.13, 704. doi:10.1167/13.9.704
Abstract

When an observer translates through a stationary scene, the images of objects in the scene move in a radial pattern. In principle, one can identify moving objects by finding regions in which the image velocities differ in direction or speed from the radial flow field. A computational model that performs motion subtraction, similar to the neural responses in cortical area MT, can identify moving objects using these direction and speed cues (Royden & Holloway, 2009). However, speed is an ambiguous cue, because an increase (or decrease) in image speed could arise from motion of an object or from a decrease (or increase) in the object's distance from the observer, causing the model to identify stationary objects as moving. Adding stereo information allows one to distinguish between these possibilities. We added stereo tuning, similar to that seen in MT cells, to the model and tested its ability to distinguish between stationary and moving objects for a moving observer. We simulated observer motion toward a frontoparallel plane of random dots positioned 1000 cm from the observer. The scene contained two square objects, one static and one moving laterally at 12 deg/sec. Observer speed was 200 cm/sec with a heading of (0,0). We tested simulated fixation distances of 250, 550 and 850 cm, and object distances of 250, 300, 350, 550 and 850 cm. For object distances of 300 cm and above, the model identified on average 0.67 of 12 border positions on the static object and 10.5 of 12 on the moving object. For an object distance of 250 cm, the model incorrectly identified the stationary object as moving. Thus, except for the condition with the fastest image speed for the object, the addition of stereo tuning enabled the model to distinguish between moving and stationary objects for a moving observer.
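
The abstract states the detection rule in words but gives no equations or code, so the following is only a minimal sketch of that idea, not the authors' MT-style motion-subtraction model. The sketch assumes pinhole projection, pure forward translation with heading (0,0), and a known depth for each image point (the role stereo plays in the model); a point is flagged as moving when its measured image velocity departs from the depth-predicted radial flow in either direction or speed. Apart from the 200 cm/sec observer speed taken from the simulation, all names and threshold values below are illustrative assumptions.

    import numpy as np

    # Illustrative parameters; none of these thresholds come from the
    # Royden & Holloway model -- they are assumptions for this sketch.
    OBSERVER_SPEED = 200.0  # cm/sec forward translation (Tz), as in the simulation
    DIR_THRESH_DEG = 10.0   # allowed deviation in flow direction (assumed)
    SPEED_RATIO = 1.3       # allowed measured/predicted speed ratio (assumed)
    EPS = 1e-9

    def predicted_flow(x, y, depth, tz=OBSERVER_SPEED):
        """Image velocity of a static point at image position (x, y) and depth Z
        for pure forward translation with heading (0,0).  The general flow
        vx = (x*Tz - f*Tx)/Z reduces to v = (x, y) * Tz / Z: a radial field
        expanding from the focus of expansion at the image origin."""
        return np.array([x, y]) * tz / depth

    def is_moving(x, y, depth, measured_v):
        """Flag a point as moving if its measured flow deviates from the
        depth-predicted radial flow in direction or in speed."""
        pred = predicted_flow(x, y, depth)
        pred_speed = np.linalg.norm(pred)
        meas_speed = np.linalg.norm(np.asarray(measured_v, dtype=float))
        if pred_speed < EPS or meas_speed < EPS:
            # At the focus of expansion, or for a point with no measured
            # motion, only a speed mismatch can signal object motion.
            return abs(pred_speed - meas_speed) > EPS
        # Direction cue: angle between measured and predicted flow vectors.
        cos_a = np.dot(pred, measured_v) / (pred_speed * meas_speed)
        angle_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        # Speed cue: with depth known (e.g. from disparity), the predicted
        # speed is no longer ambiguous, so a large mismatch also signals motion.
        ratio = max(meas_speed / pred_speed, pred_speed / meas_speed)
        return angle_deg > DIR_THRESH_DEG or ratio > SPEED_RATIO

    # A static dot at 550 cm follows the radial prediction; a laterally
    # moving object adds an extra horizontal image-velocity component.
    v_static = predicted_flow(0.1, 0.05, 550.0)
    print(is_moving(0.1, 0.05, 550.0, v_static))                          # False
    print(is_moving(0.1, 0.05, 550.0, v_static + np.array([0.05, 0.0])))  # True

The point of supplying depth is the disambiguation the abstract attributes to stereo tuning: once Z is known, the expected flow speed at each image position is fully determined, so a nearby static object that simply moves fast in the image is no longer mistaken for an independently moving one.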

Meeting abstract presented at VSS 2013
