Abstract
Current approaches to lightness perception include low-level and mid-level models. Low-level models are computational, but have no representation of important factors such as lighting conditions. Mid-level models incorporate such factors, but are typically conceptual rather than genuinely computational, and this limits both their usefulness and our ability to derive testable predictions from them. Here I use Markov random field (MRF) methods to develop a computational mid-level model of lightness perception. The model makes simple statistical assumptions about local patterns of lighting and reflectance, and uses belief propagation and simulated annealing to find globally maximum a posteriori estimates of lighting and reflectance in stimulus images. To simplify this first implementation, I model lightness perception in stimuli on a 16 × 16 pixel grid; within this constraint one can recreate many lightness illusions (e.g., the argyle illusion) and many lightness phenomena (e.g., simultaneous contrast). The model assumes that (1) reflectance spans the range 3% to 90%, (2) illuminance (incident lighting) spans 0 to 100,000 lux, (3) illuminance edges are less common than reflectance edges, (4) illuminance edges tend to be straighter than reflectance edges, and (5) reflectance and illuminance edges usually occur at image luminance edges. Guided by these few simple assumptions, the model arrives at human-like interpretations of lightness illusions that have been problematic for previous models, including the argyle illusion, snake illusion, Koffka ring, and their control conditions. The model also reproduces important phenomena in human lightness perception, including simultaneous contrast and anchoring to white. Thus an MRF model that incorporates simple assumptions about reflectance and lighting provides a strong mid-level computational account of lightness perception over a wide range of conditions. It also illustrates how MRFs can be used to develop more powerful models of constancy that incorporate factors such as colour and 3D shape.
Meeting abstract presented at VSS 2018
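The sketch below is a minimal illustration of the kind of inference the abstract describes: simulated annealing toward a MAP decomposition of a 16 × 16 luminance image into discrete reflectance and illuminance layers, with a data term enforcing log luminance = log reflectance + log illuminance and pairwise priors that penalize illuminance edges more heavily than reflectance edges (assumption 3). The function names (`energy`, `anneal`), label sets, energy weights, and cooling schedule are assumptions for illustration, not the author's implementation, and the straightness and luminance-edge-alignment priors (assumptions 4 and 5) are omitted for brevity.

```python
# Illustrative sketch only: simulated-annealing MAP search for a reflectance /
# illuminance decomposition of a small luminance image, in the spirit of the
# MRF model described in the abstract. All constants are assumed values.
import numpy as np

rng = np.random.default_rng(0)

GRID = 16                                        # 16 x 16 pixel stimuli, as in the abstract
REFLECTANCES = np.linspace(0.03, 0.90, 8)        # assumed discrete reflectance levels (3%..90%)
ILLUMINANCES = np.geomspace(10.0, 100_000.0, 8)  # assumed discrete illuminance levels (lux)

def energy(refl_idx, illum_idx, log_lum, w_data=10.0, w_refl=1.0, w_illum=3.0):
    """Negative log posterior (up to a constant) for one labelling.

    Data term: predicted log luminance (log reflectance + log illuminance)
    should match the observed log luminance. Prior terms: penalise label
    changes between neighbours, with illuminance edges costing more than
    reflectance edges (illuminance edges are assumed rarer).
    """
    log_pred = np.log(REFLECTANCES[refl_idx]) + np.log(ILLUMINANCES[illum_idx])
    e = w_data * np.sum((log_pred - log_lum) ** 2)
    for axis in (0, 1):  # vertical and horizontal neighbour pairs
        e += w_refl * np.sum(np.diff(refl_idx, axis=axis) != 0)
        e += w_illum * np.sum(np.diff(illum_idx, axis=axis) != 0)
    return e

def anneal(luminance, steps=20_000, t_start=2.0, t_end=0.01):
    """Simulated annealing: propose single-pixel label flips, accept by the Metropolis rule."""
    log_lum = np.log(luminance)
    refl = rng.integers(len(REFLECTANCES), size=(GRID, GRID))
    illum = rng.integers(len(ILLUMINANCES), size=(GRID, GRID))
    e = energy(refl, illum, log_lum)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling schedule
        if rng.random() < 0.5:
            layer, n_labels = refl, len(REFLECTANCES)
        else:
            layer, n_labels = illum, len(ILLUMINANCES)
        y, x = rng.integers(GRID, size=2)
        old_label = layer[y, x]
        layer[y, x] = rng.integers(n_labels)                # propose a new label at one pixel
        e_new = energy(refl, illum, log_lum)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new                                       # accept the proposal
        else:
            layer[y, x] = old_label                         # reject and revert
    return REFLECTANCES[refl], ILLUMINANCES[illum]

if __name__ == "__main__":
    # Toy stimulus: uniform 40% reflectance under a left/right illumination step.
    true_refl = np.full((GRID, GRID), 0.40)
    true_illum = np.where(np.arange(GRID) < GRID // 2, 1_000.0, 10_000.0)[None, :] * np.ones((GRID, 1))
    est_refl, est_illum = anneal(true_refl * true_illum)
    print("mean estimated reflectance (left half, right half):",
          est_refl[:, :GRID // 2].mean(), est_refl[:, GRID // 2:].mean())
```

On a stimulus like this toy example, a human-like interpretation attributes the luminance step to an illuminance edge and recovers roughly constant reflectance across the image; the relative edge penalties in the prior are what push the search toward that decomposition.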