August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract
Dynamic perspective cues enhance depth from motion parallax
Author Affiliations
  • Athena Buckthought
    Department of Ophthalmology, McGill Vision Research, McGill University, Canada
  • Ahmad Yoonessi
    Department of Ophthalmology, McGill Vision Research, McGill University, Canada
  • Curtis L. Baker
    Department of Ophthalmology, McGill Vision Research, McGill University, Canada
Journal of Vision August 2014, Vol.14, 734.
Previous studies of depth from motion parallax have employed orthographic rendering of moving random dot textures. Here we examine the effects of more naturalistic motion parallax stimuli, using textures with a 1/f spectrum and dynamic perspective rendering. We compared depth perception for orthographic and perspective rendering with two types of textures: random dot patterns and 1/f Gabor micropatterns. Relative texture motion (shearing) was synchronized to the observer's horizontal head movements and modulated by a low spatial frequency (0.1 cpd), horizontal square-wave envelope. The stimulus was presented in a circular window 36 degrees in diameter, at a 57 cm viewing distance. Four observers performed a two-alternative forced-choice depth-ordering task, reporting which modulation half-cycle of the texture appeared in front of the other. In addition, noise thresholds for depth ordering at a criterion level were obtained using a coherence noise task. Furthermore, we examined the effects of removing each of the three cues that distinguish dynamic perspective from orthographic rendering: (1) small vertical displacements, (2) lateral gradients of speed across the extent of the square-wave modulations, and (3) speed differences between rendered near and far surfaces. For both textures, depth perception was better with perspective rendering than with orthographic projection. Depth perception systematically declined as rendered depth increased, with greater differences between the two types of rendering at larger depths. The naturalistic 1/f textures yielded a similar pattern of results, but performance was somewhat lower than with random dots. Removal of any of the three cues impaired performance, though to different degrees in individual subjects. In conclusion, depth-ordering performance is enhanced by all of the dynamic perspective cues, but is diminished with 1/f textures.
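The square-wave shearing modulation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' stimulus code: the function name, the displacement gain, and the sampled positions are assumptions; only the 0.1 cpd envelope frequency and the 36-degree window come from the abstract.

```python
import numpy as np

def shear_displacement(y_deg, head_disp_deg, gain, sf_cpd=0.1):
    """Horizontal texture displacement (deg) for texture elements at
    vertical position y_deg, given the observer's horizontal head
    displacement. A square-wave envelope alternates the sign of the
    shear, so adjacent half-cycles move in opposite directions,
    simulating rendered near vs. far surfaces."""
    envelope = np.sign(np.sin(2 * np.pi * sf_cpd * y_deg))  # +1 / -1 half-cycles
    return gain * head_disp_deg * envelope

# Sample vertical positions spanning the 36-deg circular window;
# head_disp_deg and gain values here are arbitrary examples.
y = np.linspace(-18.0, 18.0, 7)
dx = shear_displacement(y, head_disp_deg=1.0, gain=0.5)
```

With a 0.1 cpd envelope the half-cycle width is 5 degrees, so elements 5 degrees apart vertically shear in opposite directions as the head moves, producing the corrugated depth surface whose ordering the observers judged.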


Meeting abstract presented at VSS 2014

