Abstract
Edge integration theory proposes that the human visual system (HVS) computes lightness (i.e. achromatic color) by a two-stage process that first neurally encodes directed contrasts at luminance borders, then spatially integrates those contrasts to establish a relative reflectance scale for the surfaces in the image (Land & McCann, 1971; Reid & Shapley, 1987; Rudd & Zemach, 2004). An anchoring rule is required to transform the relative lightness scale to an absolute scale of perceived reflectance. I have proposed that this rule is to define the highest relative lightness computed by the spatial integrator as the white point (Rudd & Zemach, 2005; Rudd, 2014). The spatial integration algorithm used by the HVS involves summing log contrasts (where 'contrast' denotes a luminance ratio). This sums-of-log-contrasts algorithm has been extended to model quantitative lightness judgments made with displays containing both hard edges and luminance gradients by supplementing the algorithm with additional neural and perceptual principles, including contrast gain control acting between nearby contrasts, different neural gains for incremental and decremental edges (i.e. ON- and OFF-cells), edge classification, and variable spatial extent of integration (to explain individual differences) (Rudd, 2017). Here, I present visual demos to illustrate how the model can be further extended to account for various chromatic phenomena, including color assimilation and filling-in, and how the spatial summation algorithm interacts with visual image segmentation mechanisms. The demos argue against competing lightness models, including highest luminance anchoring (Land & McCann, 1971).
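To make the core computation concrete, the following is a minimal sketch of the sums-of-log-contrasts idea with highest-lightness anchoring, for a one-dimensional row of uniform surfaces. The function name, the use of unweighted edge sums, and the single integration path are illustrative assumptions; the published model additionally includes distance-dependent edge weights, contrast gain control, separate ON/OFF gains, and edge classification, none of which are shown here.

```python
import numpy as np

def edge_integration_lightness(luminances, white_reflectance=1.0):
    """Toy 1-D edge-integration sketch (an assumed simplification, not the full model).

    1. Encode directed contrasts at each border as log luminance ratios.
    2. Spatially integrate (cumulatively sum) the log contrasts to obtain
       a relative log-lightness value for every region.
    3. Anchor: map the highest relative lightness to 'white'.
    """
    logL = np.log(np.asarray(luminances, dtype=float))

    # Step 1: directed log contrasts at borders between adjacent regions.
    edge_log_contrasts = np.diff(logL)          # log(L[i+1] / L[i])

    # Step 2: integrate edge contrasts along the path from the first region;
    # this recovers each region's log luminance up to an additive constant,
    # i.e. a relative lightness scale.
    relative_log_lightness = np.concatenate(([0.0], np.cumsum(edge_log_contrasts)))

    # Step 3: anchoring rule - the highest relative lightness defines the white point.
    anchored = relative_log_lightness - relative_log_lightness.max()
    return white_reflectance * np.exp(anchored)

# Example: three surfaces; the brightest appears white, the others scale relative to it.
print(edge_integration_lightness([20.0, 80.0, 40.0]))   # -> [0.25, 1.0, 0.5]
```

Because only luminance ratios at edges enter the computation, multiplying all luminances by a common factor (a uniform change of illumination) leaves the output unchanged, which is the basic invariance that edge integration is meant to capture.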