Abstract
Purpose: Surface reflectance and illumination are confounded at any single location in the retinal image. Because of this, achieving lightness or color constancy requires that the visual system integrate information from multiple image regions. We report research designed to provide direct measurement of how such integration depends on the spatial structure of the image.
Method: Four observers participated. On each trial, observers judged whether a grayscale test patch appeared lighter or darker than a comparison patch. Both patches were presented simultaneously, and both were embedded in a grayscale surround consisting of 24 distinct contextual patches. A staircase procedure was used to keep the test patches at close to the point of subjective equality. The contextual patch luminances surrounding the test patch were perturbed randomly on a trial-by-trial basis; those surrounding the comparison patch were fixed. The data were analyzed using linear classification image methods, to determine the weight with which each contextual patch contributed to the relative lightness judgment. The weight for each patch was taken as the difference between its mean luminance on trials where the test patch was judged lighter and its mean luminance on trials where the test patch was judged darker. Between sessions we varied the shapes of the contextual patches surrounding the test patches, as in lightness illusions reported by Adelson (Science, 1993). Across these shape manipulations, the photometric properties of all contextual patches were held constant.
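The weight computation described above can be sketched as follows. This is a minimal illustration, not our analysis code; the function name and array layout (trials by patches) are assumptions introduced here.

```python
import numpy as np

def classification_image_weights(luminances, responses):
    """Estimate the weight of each contextual patch.

    luminances : (n_trials, n_patches) array of the randomly perturbed
                 contextual patch luminances on each trial.
    responses  : (n_trials,) boolean array, True on trials where the test
                 patch was judged lighter than the comparison.

    Each weight is the mean luminance of that patch on 'lighter' trials
    minus its mean luminance on 'darker' trials.
    """
    luminances = np.asarray(luminances, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    lighter_mean = luminances[responses].mean(axis=0)
    darker_mean = luminances[~responses].mean(axis=0)
    return lighter_mean - darker_mean
```

A patch whose perturbations have no influence on the judgment yields a weight near zero, since its mean luminance is the same on both trial classes.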
Results: Not surprisingly, the weights for contextual patches were highest for the four patches that shared an edge with the test patch (mean value = -0.09), compared to all other patches (mean value = -0.01). Of more interest, the weights associated with a particular contextual patch varied with the shape manipulation: weights for contextual patches perceived as coplanar with the test patch tended to be greatest. To document this change, we calculated a simple index over the four patches immediately adjacent to the test patch: the sum of the weights for the left and right patches was subtracted from the sum of the weights for the top and bottom patches. The geometric manipulation changed this index from −0.04 to 0.08 (p < .05).
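The adjacency index used above amounts to a single difference of sums over the four edge-sharing patches; a minimal sketch (the function name and argument order are hypothetical):

```python
def coplanarity_index(w_top, w_bottom, w_left, w_right):
    """Index over the four patches adjacent to the test patch:
    (top + bottom) weights minus (left + right) weights.
    Larger values indicate heavier weighting of the vertical neighbors."""
    return (w_top + w_bottom) - (w_left + w_right)
```

For example, weights of 0.03 and 0.05 above and below versus -0.01 and 0.01 to the sides give an index of 0.08.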
Conclusion: The classification image technique allows detailed measurement of how the visual system integrates image regions in the perception of lightness. Our data show explicitly how this integration varies with image geometry when photometric factors are held fixed, within the linear framework of our current analysis.