Vision Sciences Society Annual Meeting Abstract  |   June 2006
Constructing depth information in briefly presented scenes
Author Affiliations
  • Talia Konkle
    Brain and Cognitive Sciences, MIT
  • Elisa McDaniel
    Brain and Cognitive Sciences, MIT, and Neuroscience, Wellesley College
  • Michelle R. Greene
    Brain and Cognitive Sciences, MIT
  • Aude Oliva
    Brain and Cognitive Sciences, MIT
Journal of Vision June 2006, Vol. 6, 466. https://doi.org/10.1167/6.6.466
Abstract

With only a glance at a novel scene, we can recognize its meaning and estimate its mean volume. Here, we studied how the perception of the depth layout of natural scenes unfolds within this glance: how does three-dimensional content emerge from the two-dimensional input of the visual image? One hypothesis is that depth layout is constructed locally: points close together in the two-dimensional image will be more easily distinguished in depth than points separated by a larger pixel distance. An alternative hypothesis is that depth layout is constructed over the global scene: points lying on the foreground and background surfaces will be distinguishable in depth earlier than surfaces at intermediate distances, independent of their proximity in the two-dimensional image. The method consisted of superimposing two colored target dots on gray-level pictures of natural scenes; participants reported which dot lay on the shallower surface. The locations of the two dots were pre-cued, and the scene image was displayed for durations ranging from 40 to 240 ms and then masked. Results suggest that depth information is available in a coarse-to-fine, scene-based representation: when the two targets had the greatest depth disparity in the scene (irrespective of their pixel distance), participants accurately selected the closer surface at shorter presentation times than when the surfaces were nearer to each other in depth. These data support the hypothesis that the representation of depth available at a glance is based on rapidly computed global depth information, rather than on local, image-based information.
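To make the paradigm concrete, the sketch below shows one possible implementation of a single trial in Python with PsychoPy. This is a minimal illustration, not the authors' actual experiment code: the stimulus files, dot colors and positions, response keys, and the sampled durations are all assumptions; only the timeline (pre-cue, brief scene presentation with superimposed dots, mask, forced-choice depth judgment) follows the abstract.

    import random
    from psychopy import core, event, visual

    win = visual.Window(size=(800, 600), color='gray', units='pix')

    DURATIONS_MS = [40, 80, 120, 160, 200, 240]  # assumed sampling of the 40-240 ms range

    def run_trial(scene_path, mask_path, pos_a, pos_b):
        """Run one depth-comparison trial; all arguments are placeholders."""
        scene = visual.ImageStim(win, image=scene_path)
        mask = visual.ImageStim(win, image=mask_path)
        dot_a = visual.Circle(win, radius=6, fillColor='red', lineColor=None, pos=pos_a)
        dot_b = visual.Circle(win, radius=6, fillColor='green', lineColor=None, pos=pos_b)

        # Pre-cue the two target locations on a blank screen.
        dot_a.draw()
        dot_b.draw()
        win.flip()
        core.wait(0.5)

        # Briefly present the scene with the superimposed dots, then mask it.
        duration_s = random.choice(DURATIONS_MS) / 1000.0
        scene.draw()
        dot_a.draw()
        dot_b.draw()
        win.flip()
        core.wait(duration_s)
        mask.draw()
        win.flip()

        # Forced choice: which dot lies on the shallower (closer) surface?
        keys = event.waitKeys(keyList=['r', 'g'])  # 'r' = red dot, 'g' = green dot
        return keys[0], duration_s

In a real experiment, presentation durations this short would be synchronized to the display's refresh rate by counting frames rather than by calling core.wait.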

Konkle, T., McDaniel, E., Greene, M. R., & Oliva, A. (2006). Constructing depth information in briefly presented scenes [Abstract]. Journal of Vision, 6(6):466, 466a, http://journalofvision.org/6/6/466/, doi:10.1167/6.6.466.