Abstract
Human lightness percepts are often remarkably stable despite variations in illumination and spatial context. However, human lightness constancy is not perfect (Gilchrist et al., 1999). Many failures of lightness constancy result from an undue influence of spatial context, especially nearby spatial context (e.g., simultaneous contrast). To better understand the circumstances in which lightness constancy either holds or breaks down, a model of the effects of spatial context on lightness perception is desirable. Towards this goal, past studies from our lab have revisited classic psychophysical paradigms in which the lightness of a test disk is studied as a function of changes in the luminance of one or more surround rings (Rudd, 2001; Rudd & Arrington, 2001; Rudd & Zemach, 2002, in press; Zemach & Rudd, 2002). The results of all of these studies are accounted for by a single lightness matching equation, which has a natural interpretation as a calculus that governs edge integration in an underlying neural lightness computation. Our past work has demonstrated that the parameters of this equation are controlled by the distances between edges in the scene and by the edge contrast polarities. Here we present and computer-simulate a theory of the neural processes that give rise to the lightness matching equation. The theory assumes that lightness is encoded by neurons that spatially pool the outputs of edge-detecting neurons whose receptive fields are oriented spatial filters. The weights assigned to the outputs of individual edge-detecting neurons in the pooling process are modulated by the presence, contrast, and contrast polarity of other edges in the scene via inhibitory interactions between edge-detecting neurons having the same orientation preference. A notable property of the theory is that it accounts for our psychophysical data without assuming any dynamic filling-in of induction signals from borders.
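To make the edge-integration idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a lightness estimate computed as a weighted sum of log luminance ratios at the edges crossed on a path from the background to the test disk. The function name, the specific luminances, and the weight values are hypothetical; the only assumptions carried over from the abstract are that each edge contributes a contrast term and that edges nearer the test receive larger weights.

```python
import numpy as np

def edge_integration_lightness(region_luminances, edge_weights):
    """Illustrative edge-integration lightness estimate (sketch).

    region_luminances: luminances of nested regions, ordered from the
        outermost background inward to the test disk,
        e.g. [background, ring, disk].
    edge_weights: one weight per edge crossed on the path from background
        to test; in the theory these weights depend on each edge's distance
        from the test and on its contrast polarity (values here are made up).

    Returns a log-lightness value: the weighted sum of log luminance ratios
    (edge contrasts) integrated along the path to the test region.
    """
    log_ratios = np.diff(np.log(region_luminances))  # log step across each edge
    return float(np.dot(edge_weights, log_ratios))

# Disk-and-ring display: the inner (disk/ring) edge is weighted more heavily
# than the outer (ring/background) edge, so raising the ring luminance lowers
# the computed disk lightness (a simultaneous-contrast-like effect).
background, ring, disk = 10.0, 30.0, 60.0   # hypothetical luminances (cd/m^2)
w_outer, w_inner = 0.4, 1.0                 # hypothetical edge weights
print(edge_integration_lightness([background, ring, disk], [w_outer, w_inner]))
```

In this toy form, the distance- and polarity-dependent weights stand in for the neural weight modulation the abstract attributes to inhibitory interactions among same-orientation edge detectors; the sketch does not model those interactions themselves.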