Volume 17, Issue 15
Open Access
OSA Fall Vision Meeting Abstract  |   December 2017
Edge integration theory of lightness and color perception illustrated by visual demos
Author Affiliations
  • Michael Rudd
    Department of Physiology and Biophysics, University of Washington
Journal of Vision December 2017, Vol.17, 48-49. doi:https://doi.org/10.1167/17.15.48a

      Michael Rudd; Edge integration theory of lightness and color perception illustrated by visual demos. Journal of Vision 2017;17(15):48-49. https://doi.org/10.1167/17.15.48a.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Edge integration theory proposes that the human visual system (HVS) computes lightness (i.e., achromatic color) by a two-stage process that first neurally encodes directed contrasts at luminance borders, then spatially integrates those contrasts to establish a relative reflectance scale for the surfaces in the image (Land & McCann, 1971; Reid & Shapley, 1987; Rudd & Zemach, 2004). An anchoring rule is required to transform the relative lightness scale to an absolute scale of perceived reflectance. I have proposed that this rule is to define the highest relative lightness computed by the spatial integrator as the white point (Rudd & Zemach, 2005; Rudd, 2014). The spatial integration algorithm used by the HVS involves summing log contrasts (where 'contrast' = luminance ratio). This sums-of-log-contrasts algorithm has been extended to model quantitative lightness judgments made with displays containing both hard edges and luminance gradients by supplementing it with additional neural and perceptual principles, including contrast gain control acting between nearby contrasts, different neural gains for incremental and decremental edges (i.e., ON- and OFF-cells), edge classification, and variable spatial extent of integration (to explain individual differences) (Rudd, 2017). Here, I present visual demos to illustrate how the model can be further extended to account for various chromatic phenomena, including color assimilation and filling-in, and how the spatial summation algorithm interacts with visual image segmentation mechanisms. The demos argue against competing lightness models, including highest luminance anchoring (Land & McCann, 1971).
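
The following Python sketch illustrates the core two-stage computation described above (edge encoding, spatial integration of log contrasts, and highest-lightness anchoring) applied to a hypothetical one-dimensional luminance profile. It is a minimal sketch only: the region luminances, the white-point reflectance of 0.9, and the equal weighting of all edges are illustrative assumptions, and the model's additional mechanisms (contrast gain control, separate ON/OFF gains, edge classification, variable integration extent) are omitted.

```python
import numpy as np

# Hypothetical luminances of four adjacent surface regions (cd/m^2, left to right).
luminances = np.array([20.0, 80.0, 40.0, 160.0])

# Stage 1: encode directed contrasts at the borders between regions
# as log luminance ratios.
edge_log_contrasts = np.log10(luminances[1:] / luminances[:-1])

# Stage 2: spatially integrate (sum) the log contrasts along the path
# from the leftmost region, yielding a relative log lightness scale.
relative_log_lightness = np.concatenate(([0.0], np.cumsum(edge_log_contrasts)))

# Anchoring rule: the highest relative lightness is defined as the white point
# (a reflectance of 0.9 for white is an assumed value).
anchored_log = relative_log_lightness - relative_log_lightness.max()
perceived_reflectance = 0.9 * 10.0 ** anchored_log

print(perceived_reflectance)  # e.g., [0.1125, 0.45, 0.225, 0.9]
```

Because the anchoring step only shifts the summed log contrasts, the computed reflectances preserve the ratio scale established by edge integration while pinning the brightest region to white.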
