Abstract
We propose a computational model that accounts for a large number of lightness illusions, including the seemingly opposing effects of brightness contrast and assimilation. The underlying mathematics is based on the theory of compressive sensing, which provides an efficient method for sampling and reconstructing a signal that is sparse or compressible. The model states that the intensity signal is compressed at the retina; this process amounts to a random sampling of locally averaged values. In the cortex, the intensity values are reconstructed from the compressed signal and combined with the edges. Reconstruction amounts to solving an underdetermined linear system of equations using L1-norm minimization. Assuming that the intensity signal is sparse in the Fourier domain, the reconstructed signal, which is a linear combination of a small number of Fourier components, deviates from the original signal. This reconstruction error is consistent with the perception of many well-known lightness illusions, including the contrast and assimilation effects, the articulated enhanced brightness contrast, the checker shadow illusion, and grating induction. Considering, in addition, the space-variant resolution of the human eye, the model also explains illusory patterns whose perceived lightness changes over large spatial ranges, such as the Cornsweet and related illusions. We conducted experiments with new variations of the White and the Dungeon illusion, whose perception changes with the resolution at which the different parts of the pattern fall on the eye, and found that the model predicted perception of these stimuli well.
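The reconstruction step described above (L1-norm minimization of an underdetermined system, with sparsity in a frequency basis) can be illustrated with a minimal sketch. This is not the paper's implementation: the signal length, measurement count, and the use of a Gaussian measurement matrix (standing in for random local averages) and a DCT basis (a real-valued stand-in for the Fourier domain) are all illustrative assumptions. Basis pursuit is solved here as a linear program by splitting the coefficients into positive and negative parts.

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 64   # signal length (illustrative)
k = 3    # number of active DCT components (sparsity)
m = 32   # number of compressed measurements, m < n

# Test signal that is sparse in the DCT domain
# (stands in for a 1-D intensity profile)
coeffs = np.zeros(n)
coeffs[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
signal = idct(coeffs, norm="ortho")

# "Retinal compression": m random linear measurements of the signal
# (Gaussian rows stand in for random local averages)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ signal

# DCT synthesis matrix Psi, so that signal = Psi @ coeffs
Psi = idct(np.eye(n), axis=0, norm="ortho")
B = A @ Psi

# Basis pursuit: min ||c||_1  subject to  B c = y,
# cast as an LP with c = u - v, u >= 0, v >= 0
cost = np.ones(2 * n)
A_eq = np.hstack([B, -B])
res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None))
c_hat = res.x[:n] - res.x[n:]
recon = Psi @ c_hat

print("max reconstruction error:", np.max(np.abs(recon - signal)))
```

With far fewer measurements than samples (m = 32 versus n = 64), the sparse signal is recovered essentially exactly; the model's point is that a signal which is only approximately sparse is reconstructed with a systematic error, and that this error matches the illusory percepts.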