Abstract
Lightness constancy is the ability to perceive surface albedo accurately despite changes in illumination. This fundamental ability is still poorly understood, and current computational models make widely differing assumptions about which image properties guide lightness perception: different theories propose isotropic or anisotropic surrounds, the point of highest luminance, or X-junctions as the crucial image features. Here we adapt classification image methods to test computational models of lightness perception. Five observers viewed the argyle illusion (a strong lightness illusion that has resisted low-level explanations) in luminance noise for 10,000 trials each, and on each trial judged which of two image patches appeared lighter. From these trials we measured classification images showing how noise fluctuations at each image location influenced observers' judgements. We found that lightness percepts were driven by local, anisotropic regions around the patches being judged. A control experiment showed that the anisotropy tracked the stimulus orientation, and so was not a simple stimulus-independent directional bias. We ran several leading computational models of lightness perception (ODOG, high-pass filtering, anchoring theory, and framework segmentation via X-junctions) in the same experiment, and found that all of them failed to predict, even qualitatively, the image features that guide lightness perception in human observers. Our findings show that any successful computational model of lightness perception must assign a role to “lighting frameworks”, i.e., regions of approximately constant illumination. We suggest how some current computational theories of lightness perception can be revised to account for our findings.
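As a concrete illustration of the classification image method named above, the sketch below shows one standard way such images are computed, as a difference of choice-conditioned noise averages in the spirit of Ahumada's reverse-correlation approach. The function name, array shapes, and the simple two-class averaging rule are illustrative assumptions, not necessarily the paper's exact analysis.

```python
import numpy as np

def classification_image(noise_fields, chose_first):
    """Estimate a classification image from trial noise and binary choices.

    noise_fields : array of shape (n_trials, height, width); the luminance
        noise added to the stimulus on each trial.
    chose_first  : boolean array of shape (n_trials,); True when the observer
        judged the first patch lighter.
    """
    noise = np.asarray(noise_fields, dtype=float)
    chose = np.asarray(chose_first, dtype=bool)
    # Difference of mean noise fields, conditioned on the observer's choice:
    # pixels whose fluctuations pushed judgements toward "first patch lighter"
    # receive positive weights, and vice versa. Locations with weights near
    # zero had no measurable influence on the lightness judgement.
    return noise[chose].mean(axis=0) - noise[~chose].mean(axis=0)

# Illustrative usage with placeholder data (10,000 trials, as in the study):
rng = np.random.default_rng(0)
noise = rng.normal(size=(10_000, 64, 64))
responses = rng.random(10_000) < 0.5  # stand-in for observer choices
ci = classification_image(noise, responses)
```

With real trial data, the resulting map `ci` can be inspected for the local, anisotropic regions of influence that the abstract reports.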