Abstract
The task of computing lightness (i.e., perceived surface reflectance) from the spatial distribution of luminances in the retinal image is an underdetermined problem because the causal effects of reflectance and illumination are confounded in the image. Some recent approaches to lightness computation combine Bayesian priors with empirical estimates of the illuminant to compute reflectance from retinal luminance. Here, I argue for a different sort of Bayesian computation that takes local signed contrast (roughly, “edges”) as its input. Sensory edge information is combined with Bayesian priors that instantiate assumptions about the illumination and other rules such as grouping by proximity. The model incorporates a number of mechanisms from the lightness literature, including edge integration, anchoring, illumination frameworks, and contrast gain control. None of these mechanisms is gratuitous; all are required to account for the data. I demonstrate how the model works by applying it to the results of lightness matching studies involving simple stimuli. Failures of lightness constancy are quantitatively accounted for by the misapplication of priors that probably favor lightness constancy in natural environments. Assimilation and contrast occur as byproducts. The rules that adjust the priors must be applied in a particular order, suggesting an underlying neural computation that first weighs the importance of local edge data according to the observer's assumptions about the illumination, then updates these weights on the basis of the spatial organization of the stimulus, then spatially integrates the weighted contrasts prior to a final anchoring stage. This order of operations is consistent with the idea that top-down attentional feedback sets the gains of early cortical contrast detectors in visual areas V1 or V2, after which higher-level visual circuits having larger receptive fields further adjust these gains in light of the wider spatial image context. The spatial extent of perceptual edge integration suggests that lightness is represented in or beyond area V4.