Vision Sciences Society Annual Meeting Abstract  |   August 2014
Can 3D Shape be Estimated from Focus Cues Alone?
Author Affiliations
  • Rachel A. Albert
    Vision Science Graduate Group, UC Berkeley, Berkeley, CA 94720
  • Abdullah Bulbul
    Vision Science Graduate Group, UC Berkeley, Berkeley, CA 94720
  • Rahul Narain
    Department of Computer Science, UC Berkeley, Berkeley, CA 94720
  • James F. O'Brien
    Department of Computer Science, UC Berkeley, Berkeley, CA 94720
  • Martin S. Banks
    Vision Science Graduate Group, UC Berkeley, Berkeley, CA 94720
Journal of Vision August 2014, Vol. 14, 732. https://doi.org/10.1167/14.10.732
Citation: Rachel A. Albert, Abdullah Bulbul, Rahul Narain, James F. O'Brien, Martin S. Banks; Can 3D Shape be Estimated from Focus Cues Alone? Journal of Vision 2014;14(10):732. https://doi.org/10.1167/14.10.732.

Abstract

Focus cues (blur and accommodation) have generally been regarded as very coarse, ordinal cues to depth. This assessment has been largely determined by the inability to display these cues correctly with conventional displays. For example, when a 3D shape is displayed with sharp rendering (i.e., a pinhole camera), the expected blur variation is not present and accommodation does not have an appropriate effect on the retinal image. When a 3D shape is displayed with rendered blur (i.e., a camera with a non-pinhole aperture), the viewer's accommodation does not have the appropriate retinal effect. We asked whether the information provided by correct blur and accommodation can be used to determine shape. We conducted a shape-discrimination experiment in which subjects indicated whether a hinge stimulus was concave or convex. The stimuli were presented monocularly in a unique volumetric display that allows us to present correct or nearly correct focus cues. The hinge was textured using a back-projection technique, so the stimuli contained no useful shape cues except blur and accommodation. We used four rendering methods that varied in the validity of focus information: two single-plane methods mimicked a conventional display, and two volumetric methods mimicked natural viewing. A pinhole camera model was used in one single-plane condition, so image sharpness was independent of depth. In the other single-plane condition, natural blur was rendered, thereby creating an appropriate blur gradient. In one volumetric condition, a linear blending rule was used to assign intensity to image planes. In the other volumetric condition, an optimized blending rule was used that creates a closer approximation to real-world viewing. Subject performance was at chance in the single-plane conditions. Performance improved substantially in the volumetric conditions and was slightly better in the optimized-blending condition. This is direct evidence that 3D shape judgments can be made from the information contained in blur and accommodation alone.

Meeting abstract presented at VSS 2014
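
The volumetric conditions turn on how a rendered point's intensity is assigned to the display's image planes. As a rough illustration of the linear blending rule mentioned in the abstract, the Python sketch below splits a point's intensity between the two focal planes that bracket it, with weights falling off linearly in dioptric distance; the plane positions are hypothetical, and the optimized blending rule used in the study is not reproduced here.

import numpy as np

def linear_plane_weights(point_diopters, plane_diopters):
    """Split a point's intensity across the two focal planes that bracket it.

    Weights fall off linearly with dioptric distance, so a point midway
    (in diopters) between two planes contributes equally to both, and a
    point lying exactly on a plane is drawn entirely on that plane.
    Points outside the plane stack are assigned to the nearest plane.
    """
    planes = np.asarray(plane_diopters, dtype=float)  # assumed sorted, ascending
    weights = np.zeros_like(planes)

    if point_diopters <= planes[0]:
        weights[0] = 1.0
        return weights
    if point_diopters >= planes[-1]:
        weights[-1] = 1.0
        return weights

    # First plane at or beyond the point; the plane below it completes the bracket.
    hi = int(np.searchsorted(planes, point_diopters))
    lo = hi - 1
    span = planes[hi] - planes[lo]
    weights[lo] = (planes[hi] - point_diopters) / span
    weights[hi] = (point_diopters - planes[lo]) / span
    return weights

# Hypothetical four-plane stack (in diopters), not the spacing used in the study.
planes = [0.2, 0.8, 1.4, 2.0]
print(linear_plane_weights(1.0, planes))  # -> [0.0, 0.667, 0.333, 0.0] (approx.)

Blending is done in diopters rather than metric distance because defocus blur and accommodative demand scale with dioptric, not metric, separation from the plane of focus.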
