Abstract
It is commonly believed that lightness computation proceeds in a series of stages: 1) extraction of local border contrast or luminance ratios; 2) edge integration, which combines contrast or luminance ratios across space; and 3) lightness anchoring, which relates the relative lightness scale computed in Stage 2 to the scale of reflectances in the image. The results of a number of psychophysical experiments have been interpreted as supporting the highest luminance anchoring rule, which states that the highest luminance in the scene appears white. There is a fundamental problem with this scheme for computing lightness that has not previously been addressed: the last stage, anchoring, has no direct access to luminance information, which is lost after Stage 1; it knows only the output of the edge integration stage. How, then, can it anchor to the highest luminance? We have previously proposed a quantitative model of edge integration based on the idea that underlying lightness and darkness induction signals fill in from borders and combine to create an achromatic color signal (Rudd, 2001; Rudd & Arrington, 2001; Rudd & Zemach, 2002). Our model predicts that two or more regions within a scene can have the same highest luminance yet appear different depending on their spatial context, whereas the highest luminance rule predicts that such regions should appear equally white. We tested these competing hypotheses by having subjects match the lightnesses of two disks, each surrounded by one or more dimmer rings of varying luminance, presented side by side on a flat-panel monitor. Our results demonstrate that two regions can have the same highest luminance yet be seen as having different lightnesses depending on their surrounds, consistent with the model. Conversely, we devised displays in which, for two regions to appear equally white, their actual luminances must differ. Thus the highest luminance is not always seen as white.
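To make the contrast between the two hypotheses concrete, the following is a minimal Python sketch, not the published model: it assumes edge integration is a weighted sum of log luminance ratios at successive borders, with hypothetical weights that decline with distance from the target. The luminance values and weights are illustrative only. It shows how two disks with the same (highest) luminance can receive different integrated lightness values when their surrounding rings differ, whereas a highest-luminance rule assigns both the value "white".

```python
import math

def anchored_highest_luminance(regions):
    """Highest-luminance rule: the region with the maximum luminance
    is assigned white (1.0); other regions scale by luminance ratio."""
    L_max = max(regions)
    return [L / L_max for L in regions]

def edge_integrated_lightness(disk, rings, background, weights):
    """Toy edge-integration sketch (hypothetical weights, not the
    authors' fitted model): sum weighted log luminance ratios at
    each border, ordered from the disk outward."""
    layers = [disk] + rings + [background]
    total = 0.0
    for w, (inner, outer) in zip(weights, zip(layers, layers[1:])):
        total += w * math.log(inner / outer)
    return total

# Two disks with the SAME highest luminance (100 cd/m^2), surrounded
# by rings of different luminance on a dim common background.
disk, background = 100.0, 1.0
weights = [1.0, 0.5]  # hypothetical: remote edges count less

left  = edge_integrated_lightness(disk, [30.0], background, weights)
right = edge_integrated_lightness(disk, [60.0], background, weights)
print(left, right)  # ~2.90 vs ~2.56: different predicted lightness

# The highest-luminance rule, by contrast, calls both disks white:
print(anchored_highest_luminance([disk, 30.0, background])[0])  # 1.0
print(anchored_highest_luminance([disk, 60.0, background])[0])  # 1.0
```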