Abstract
The highest luminance anchoring rule asserts that the highest luminance in an image appears white and that the lightnesses of other image regions are computed relative to this white point (Wallach, 1948, 1963; Gilchrist et al., 1999). We recently presented a model of lightness computation based on the principle of distance-dependent edge integration (Rudd & Zemach, 2004) and showed that our model predicts contrast induction effects for incremental targets, a prediction that violates the highest luminance rule (Rudd & Zemach, VSS 2003; submitted). Consistent with our model and contrary to the highest luminance rule, contrast effects were observed when subjects were instructed to match increments in appearance. It is not clear, however, whether our subjects were judging lightness or some other dimension of achromatic color. Here we repeat the experiment with instructional variations. Observers were instructed to match incremental targets in either brightness (perceived luminance), brightness contrast (perceived contrast), or lightness (perceived reflectance). Two different lightness conditions were run. In the first, observers were instructed to imagine that changes in the luminance of the test surround were due to changes in the surround reflectance. In the second, observers were instructed to imagine that the same luminance changes were due to changes in the illumination falling on the test and its surround. The latter instruction produced large contrast effects that strongly violate the highest luminance rule for lightness. Brightness matches, and lightness matches made under the reflectance-change instructions, produced assimilation effects at high test ring luminances that violate both the highest luminance rule and the distance-dependent edge integration model.
The results from all four matching conditions can be accounted for by a modified edge integration model in which the weights given to edges in the neural lightness computation are controlled dynamically by top-down influences.