Vision Sciences Society Annual Meeting Abstract  |   September 2016
Depth discrimination from occlusions in 3D clutter scenes
Author Affiliations
  • Michael Langer
    School of Computer Science, McGill University
  • Haomin Zheng
    School of Computer Science, McGill University
  • Shayan Rezvankhah
    School of Computer Science, McGill University
Journal of Vision September 2016, Vol. 16, 198.

      Michael Langer, Haomin Zheng, Shayan Rezvankhah; Depth discrimination from occlusions in 3D clutter scenes. Journal of Vision 2016;16(12):198.


      © ARVO (1962-2015); The Authors (2016-present)


Objects such as trees, shrubs, and tall grass typically consist of thousands of small surfaces distributed randomly over a 3D volume. Despite the common occurrence of such 3D clutter in natural scenes, relatively little is known about how well humans can perceive the depths of surfaces within it. Previous studies have concentrated on motion parallax and binocular disparity cues and have asked questions such as how many discrete depth planes can be perceived and what the depth-to-width ratio is. However, these studies are incomplete because they have ignored occlusions, which are omnipresent in such scenes. Here we present a depth discrimination experiment that examines occlusion cues directly. The task is to discriminate the depths of two red target surfaces in a 3D field of random gray distractors. The experiment uses an Oculus Rift DK2 display, which allows us to control motion parallax and binocular disparity cues. The clutter itself provides two occlusion-based depth cues. The first is a 'visibility cue': the target that is less visible is more likely to be deeper within the clutter [Langer and Mannan, JOSA 2012]. The second is a 'context cue': the target with the deepest occluder is itself more likely to be deeper. We define scene classes with all four combinations of the visibility and context cues and use staircases to measure depth discrimination thresholds. Our results show that observers use both visibility and context cues to perform the task, even when stereo and motion parallax cues are also present. To our knowledge, this is the first experiment to examine depth from occlusions in 3D clutter and to identify the important role played by visibility and context cues in solving this natural problem.
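The two computational ingredients named in the abstract can be sketched in a few lines. Below is a minimal, illustrative Python sketch, not the authors' implementation: it assumes an independent-occluder model of homogeneous clutter (in the spirit of Langer and Mannan, JOSA 2012), under which a target's expected visible fraction decays exponentially with its depth into the clutter, and it pairs this with a standard 2-down/1-up staircase of the kind used to estimate discrimination thresholds. The density parameter, step sizes, and function names are all assumptions made for the example.

```python
import math

def visible_fraction(depth, density=0.5):
    """Expected fraction of a target left unoccluded at a given depth
    into homogeneous clutter. Under an independent-occluder model
    (an illustrative assumption), visibility decays exponentially
    with depth, so deeper targets are less visible on average."""
    return math.exp(-density * depth)

def staircase_2down1up(respond, start=1.0, step=0.1, reversals_needed=8):
    """Minimal 2-down/1-up staircase: the depth difference between the
    two targets shrinks after two consecutive correct responses and
    grows after each error, converging near the 70.7%-correct point.
    `respond(delta)` returns True when the observer discriminates the
    depth difference `delta` correctly. Returns the mean of the
    difference values at the reversal points as a threshold estimate."""
    delta = start
    correct_in_row = 0
    reversals = []
    direction = 0  # -1 while stepping down, +1 while stepping up
    while len(reversals) < reversals_needed:
        if respond(delta):
            correct_in_row += 1
            if correct_in_row == 2:          # two in a row: step down
                correct_in_row = 0
                if direction == +1:          # up -> down is a reversal
                    reversals.append(delta)
                direction = -1
                delta = max(delta - step, step)
        else:                                # any error: step up
            correct_in_row = 0
            if direction == -1:              # down -> up is a reversal
                reversals.append(delta)
            direction = +1
            delta += step
    return sum(reversals) / len(reversals)
```

For example, a deterministic stand-in observer that is correct whenever the depth difference exceeds 0.45 drives the staircase to a threshold estimate near 0.45; in a real session `respond` would present a trial and record the observer's choice.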

Meeting abstract presented at VSS 2016

