Vision Sciences Society Annual Meeting Abstract | September 2016
Local Estimation of Global Surface Orientation from Texture and Disparity
Author Affiliations
  • Wilson Geisler
    University of Texas at Austin
Journal of Vision September 2016, Vol.16, 196. doi:https://doi.org/10.1167/16.12.196
© ARVO (1962-2015); The Authors (2016-present)

Abstract

The visual system has a remarkable ability to encode the geometry of the 3D environment from the pair of 2D images captured by the eyes. It does so by making measurements of local image properties, including texture and binocular disparity, and then combining those measurements across the visual field into a coherent 3D representation. Although much is known about how the visual system encodes local texture and disparity, less is known about how those properties are combined across space, especially in the creation of 3D representations. This presentation will describe how to optimally estimate the orientation and distance of locally planar surfaces from texture cues and from disparity cues so that a coherent global 3D representation is (in effect) created automatically. The approach is based on exact closed-form expressions for the coordinate transformations between image patches within an eye and across the eyes, given a locally planar surface of arbitrary slant, tilt, and distance. In this framework, slant, tilt, and distance are specified in a global coordinate frame aligned with the optic axis of the eye(s). It turns out that these globally defined surface properties can be estimated at any image point by local image matching, potentially simplifying 3D perceptual grouping. For example, in binocular matching, all image points that lie in the ground plane will return the same slant, tilt, and distance (in the global coordinate frame). Thus, grouping the image locations that belong to the ground plane becomes as simple as grouping image locations that have a similar color. The same result holds for texture, except that only the globally defined slant and tilt are returned. The efficacy of the approach is demonstrated in simulations. Perhaps the brain simplifies the circuitry needed for 3D perceptual grouping by making local measurements in a global coordinate system.
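The abstract's exact closed-form expressions are not reproduced here, but the binocular case can be illustrated with the standard plane-induced homography from two-view geometry: for a plane with unit normal n (set by slant and tilt) at distance d, and parallel eyes separated by a translation t, the left-to-right image mapping is H = K (I + t nᵀ/d) K⁻¹. The sketch below is illustrative only and is not the author's implementation; the function names, the pinhole intrinsics, and the 6.5 cm interocular baseline are assumptions.

    # Illustrative sketch (not the abstract's actual derivation): the
    # plane-induced homography between the two eyes for a locally planar
    # surface whose slant, tilt, and distance are defined in a global,
    # head-centered coordinate frame. All names and numbers are assumptions.
    import numpy as np

    def plane_normal(slant_deg, tilt_deg):
        """Unit normal of a planar surface.
        Slant: angle between the normal and the optic (z) axis.
        Tilt: image-plane direction of the normal's projection."""
        s, t = np.radians(slant_deg), np.radians(tilt_deg)
        return np.array([np.sin(s) * np.cos(t),
                         np.sin(s) * np.sin(t),
                         np.cos(s)])

    def binocular_homography(slant_deg, tilt_deg, dist,
                             baseline=0.065, focal=0.017):
        """Homography mapping left-eye to right-eye image coordinates for
        the plane n.X = dist. With parallel eyes, a point X in left-eye
        coordinates is X + t in right-eye coordinates, t = (-baseline, 0, 0),
        so H = K (I + t n^T / dist) K^{-1} (standard two-view geometry)."""
        n = plane_normal(slant_deg, tilt_deg)
        K = np.diag([focal, focal, 1.0])        # simple pinhole intrinsics
        t = np.array([-baseline, 0.0, 0.0])     # left-to-right eye translation
        return K @ (np.eye(3) + np.outer(t, n) / dist) @ np.linalg.inv(K)

    # A single global (slant, tilt, distance) triple maps every image point
    # on that plane to its match in the other eye, so grouping ground-plane
    # locations reduces to grouping points that return the same three values.
    H = binocular_homography(slant_deg=60.0, tilt_deg=90.0, dist=1.5)
    for x_left in ([0.001, -0.003, 1.0], [-0.004, -0.002, 1.0]):
        x_right = H @ np.array(x_left)
        print(np.round(x_right / x_right[2], 6))

In an estimation scheme of this kind, local patch matching over a grid of candidate (slant, tilt, distance) triples would return the same winning triple at every ground-plane location, which is the grouping simplification the abstract describes.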

Meeting abstract presented at VSS 2016
