Abstract
Lightness perception is the ability to perceive black, white, and gray surface colors in a wide range of lighting conditions and contexts. This is a fundamental ability, but how the human visual system computes lightness is not well understood. Here I show that several key phenomena in lightness perception can be explained by a computational, probabilistic graphical model that makes a few simple assumptions about local patterns of lighting and reflectance, and infers globally optimal interpretations of stimulus images using belief propagation. I call the proposed model MIR, for ‘Markov illuminance and reflectance’. To simplify the modelling problem, I consider stimuli on a 16 × 16 pixel grid. Within this constraint one can create many challenging lightness phenomena. To provide a basis for model testing, I measure human lightness percepts in several strong, well-known lightness illusions adapted for a 16 × 16 grid. MIR’s probabilistic assumptions are reasonable and generic, including, for example, that lighting intensity spans a much wider range than surface reflectance, and that shadow boundaries tend to be straighter than reflectance edges. Like human observers, MIR exhibits lightness constancy, codetermination, contrast, glow, and articulation effects. It also arrives at human-like interpretations of strong lightness illusions that have challenged previous models. I compare three current brightness models to MIR: ODOG, a high-pass model, and a retinex model. MIR outperforms these models at predicting human lightness judgments, with the exception that ODOG (unlike MIR) can account for brightness assimilation effects. Thus a probabilistic model based on simple assumptions about lighting and reflectance gives a good computational account of lightness perception over a wide range of conditions. This work also shows how graphical models can be extended to develop more powerful models of constancy that incorporate features such as color and depth.
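To give a concrete sense of the kind of inference the abstract describes, the following is a minimal, illustrative sketch (not the MIR implementation itself) of loopy sum-product belief propagation on a small grid-structured Markov random field that factors observed log-luminance into log-reflectance plus log-illuminance. The grid size, discretization, potentials, and parameter values are hypothetical choices made purely for illustration and are not taken from the paper.

import numpy as np

H, W = 4, 4                            # tiny grid for the sketch (the paper uses 16 x 16)
R_LEVELS = np.linspace(-1.0, 0.0, 5)   # candidate log-reflectance values (hypothetical)
SIGMA_OBS = 0.3                        # assumed observation-noise scale (hypothetical)
SMOOTH = 2.0                           # assumed pairwise smoothness weight (hypothetical)

def unary(log_lum):
    """Per-pixel evidence: favor reflectance levels consistent with the observed
    log-luminance under a crude stand-in prior that mean illuminance equals the
    image mean (a simplification for this sketch only)."""
    illum = log_lum.mean()
    resid = log_lum[..., None] - illum - R_LEVELS[None, None, :]
    return np.exp(-0.5 * (resid / SIGMA_OBS) ** 2)

def pairwise():
    """Neighboring pixels prefer similar reflectance (piecewise-constant prior)."""
    diff = np.abs(R_LEVELS[:, None] - R_LEVELS[None, :])
    return np.exp(-SMOOTH * diff)

def loopy_bp(phi, psi, iters=30):
    """Sum-product loopy belief propagation on a 4-connected grid.
    For brevity the grid wraps at the boundaries (toroidal topology)."""
    K = phi.shape[-1]
    # msgs[d][i, j] = message arriving at pixel (i, j) from its neighbor in direction d
    msgs = {d: np.ones((H, W, K)) for d in "UDLR"}
    for _ in range(iters):
        belief = phi * msgs["U"] * msgs["D"] * msgs["L"] * msgs["R"]
        new = {}
        # outgoing message = (belief / incoming message from the target) passed through psi,
        # then shifted so it is indexed by the receiving pixel
        new["D"] = np.roll((belief / msgs["U"]) @ psi, 1, axis=0)   # travels downward
        new["U"] = np.roll((belief / msgs["D"]) @ psi, -1, axis=0)  # travels upward
        new["R"] = np.roll((belief / msgs["L"]) @ psi, 1, axis=1)   # travels rightward
        new["L"] = np.roll((belief / msgs["R"]) @ psi, -1, axis=1)  # travels leftward
        for d in "UDLR":
            new[d] /= new[d].sum(-1, keepdims=True)                 # normalize for stability
        msgs = new
    belief = phi * msgs["U"] * msgs["D"] * msgs["L"] * msgs["R"]
    return belief / belief.sum(-1, keepdims=True)

# Toy stimulus: a darker square on a lighter background under uniform illumination.
log_lum = np.full((H, W), -0.2)
log_lum[1:3, 1:3] = -0.8
beliefs = loopy_bp(unary(log_lum), pairwise())
print(R_LEVELS[beliefs.argmax(-1)])    # most probable log-reflectance at each pixel

In this toy setting the inferred reflectance map simply recovers the dark square against the lighter surround; the point of the sketch is only to show the message-passing machinery that a grid-structured illuminance-and-reflectance model can be built on, not to reproduce MIR's assumptions or its treatment of lighting boundaries.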