Vision Sciences Society Annual Meeting Abstract | August 2012
Statistics of edge profiles in natural scenes
Author Affiliations
  • Kedarnath Vilankar
    School of Electrical and Computer Engineering, Oklahoma State University
  • James Golden
    Department of Psychology, Cornell University
  • Damon Chandler
    School of Electrical and Computer Engineering, Oklahoma State University
  • David Field
    Department of Psychology, Cornell University
Journal of Vision, August 2012, Vol. 12, 849. doi: https://doi.org/10.1167/12.9.849
Abstract

It is widely known that edges in natural scenes are formed by both luminance and texture differences between two objects. However, little effort has been devoted to studying the statistical properties of such edges. Computing these statistics could provide important insights into how the visual system processes natural scenes.

Ten high-resolution natural scenes were selected from the McGill Color Image Database. Three human subjects traced the edges of occlusion boundaries on grayscale versions of the images. Patches of 80 × 40 pixels, centered on the marked edges, were extracted for analysis. The 5000 extracted edge patches were then aligned by polarity (brighter side on top).
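
As a rough sketch of the polarity-alignment step (not the authors' analysis code), the following Python snippet assumes each patch is extracted with the traced edge running horizontally through its center, so the two halves correspond to the two sides of the occlusion boundary; the function name and conventions are hypothetical:

```python
import numpy as np

def align_polarity(patch):
    """Flip an edge patch vertically so its brighter half is on top.

    Assumes the traced edge runs horizontally through the center of the
    patch, so the top and bottom halves lie on opposite sides of the
    occlusion boundary.
    """
    half = patch.shape[0] // 2
    top, bottom = patch[:half], patch[half:]
    # Flip vertically if the darker side is currently on top, so that all
    # patches end up with the brighter side on top.
    if top.mean() < bottom.mean():
        patch = np.flipud(patch)
    return patch
```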

We analyzed the edges in both the linear and log luminance domains. First-order statistics revealed that the mean edge is a blurred step in luminance, with greater variance and less skewness in the brighter half than in the darker half. The distribution of Michelson contrast between the brighter and darker halves is approximately uniform, with a bias toward low contrast. We also classified the edge patches into four categories: (1) luminance-defined edges, which have high contrast and small standard deviation; (2) texture-defined edges, which have low contrast and high standard deviation; (3) luminance-textured edges, which have high contrast and large standard deviation; and (4) object-defined edges, which exhibit neither a difference in luminance nor a difference in texture between the two halves; these edges likely contain boundaries that subjects marked via interpolation/extrapolation based on object recognition. Approximately 40% of the edges were luminance-defined, 10% were texture-defined, 32% were luminance-textured, and 18% were object-defined. We discuss the implications of these findings for neural and computational coding. In particular, edge detectors and various wavelets have been tuned to detect luminance-defined edges; such templates would fail on roughly 30% of occlusion boundaries.
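
The classification described above can be sketched in Python as follows. This is an illustrative reconstruction, not the authors' code: the Michelson contrast is computed between the mean luminances of the two halves, the within-half standard deviation serves as the texture measure, and the threshold values are hypothetical (the abstract does not report the criteria used):

```python
import numpy as np

def classify_edge_patch(patch, contrast_thresh=0.2, std_thresh=0.15):
    """Assign a polarity-aligned 80 x 40 edge patch to one of the four
    categories described in the abstract.

    Assumes luminance values in [0, 1], the brighter half on top, and the
    80-pixel dimension running across the edge (40 rows per half).
    Threshold values are illustrative assumptions only.
    """
    bright_half, dark_half = patch[:40], patch[40:]

    mean_bright, mean_dark = bright_half.mean(), dark_half.mean()

    # Michelson contrast between the mean luminances of the two halves.
    contrast = (mean_bright - mean_dark) / (mean_bright + mean_dark + 1e-12)

    # Texture proxy: average within-half standard deviation of luminance.
    texture = 0.5 * (bright_half.std() + dark_half.std())

    if contrast >= contrast_thresh and texture < std_thresh:
        return "luminance-defined"
    if contrast < contrast_thresh and texture >= std_thresh:
        return "texture-defined"
    if contrast >= contrast_thresh and texture >= std_thresh:
        return "luminance-textured"
    return "object-defined"
```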

Meeting abstract presented at VSS 2012
