September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Boundary segmentation from luminance and texture cues: Underlying mechanisms
Author Affiliations & Notes
  • Christopher DiMattina
    Florida Gulf Coast University
  • Curtis Baker
    McGill University
  • Footnotes
    Acknowledgements: Supported by Canadian NSERC Grant OPG0001978 to C.L.B.
Journal of Vision, September 2021, Vol. 21, 1827.
Segmenting the visual scene into distinct surfaces is one of the most basic aspects of visual perception. In natural scenes, adjacent surfaces often differ in mean luminance, which provides an important boundary segmentation cue. However, mean luminance differences between two surfaces may occur without any sharp change in albedo at their boundary, instead arising from differences in the proportion of small light and dark texture elements within each surface. Here we investigate the performance of human observers segmenting such "luminance texture boundaries". Luminance texture boundaries were synthesized by placing different proportions of white and black Gaussian micropatterns on opposite sides of a boundary whose orientation was left-oblique (-45 deg. w.r.t. vertical) or right-oblique (+45 deg.), and observers identified the boundary orientation in a 2AFC psychophysical task. We demonstrate that a model based on a simple luminance difference computation cannot explain observers' boundary segmentation performance. However, extending this one-stage model by adding contrast normalization successfully accounts for these data. By performing further experiments in which observers segment luminance texture boundaries while ignoring superimposed luminance step boundaries, we demonstrate that the one-stage model, even with contrast normalization, cannot explain psychophysical performance. However, a Filter-Rectify-Filter (FRF) model, positing two cascaded stages of filtering, fits these data very well, and furthermore can account for observers' ability to segment luminance texture boundary stimuli both in the presence and absence of interfering (masking) luminance step boundaries. We propose that such multi-stage luminance difference computations may be useful for boundary segmentation in natural scenes, where shadows often give rise to luminance step edges which do not correspond to surface boundaries.
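The stimulus construction described above can be sketched in Python with NumPy. This is an illustrative reconstruction only: the image size, micropattern count, micropattern size, and white-element proportion below are assumptions for demonstration, not the parameter values used in the study.

```python
import numpy as np

def gaussian_micropattern(size, sigma):
    """Single Gaussian blob with peak amplitude 1."""
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

def luminance_texture_boundary(im_size=128, n_patterns=200, patch_size=11,
                               sigma=2.0, p_white_upper=0.7,
                               right_oblique=True, seed=None):
    """Scatter white (+) and black (-) Gaussian micropatterns, with a
    different white/black proportion on each side of an oblique (+/-45 deg)
    boundary through the image centre. Parameter values are illustrative
    assumptions, not the experimental values."""
    rng = np.random.default_rng(seed)
    img = np.zeros((im_size, im_size))
    blob = gaussian_micropattern(patch_size, sigma)
    half = patch_size // 2
    for _ in range(n_patterns):
        r = rng.integers(half, im_size - half)  # row of micropattern centre
        c = rng.integers(half, im_size - half)  # column of micropattern centre
        # Position relative to an oblique boundary through the centre.
        x, y = c - im_size / 2, im_size / 2 - r
        upper = (y > x) if right_oblique else (y > -x)
        p_white = p_white_upper if upper else 1.0 - p_white_upper
        polarity = 1.0 if rng.random() < p_white else -1.0
        img[r - half:r + half + 1, c - half:c + half + 1] += polarity * blob
    return np.clip(img, -1.0, 1.0)  # luminance contrast in [-1, 1]
```

Because each side's mean luminance is set only by the proportion of light versus dark micropatterns (with no albedo step at the boundary), a region-mean luminance difference across the diagonal emerges from the texture statistics alone, which is what makes the stimulus diagnostic between one-stage luminance-difference models and FRF-style models.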

