Abstract
PURPOSE: A single location in the retinal image does not provide enough information for the visual system to accurately recover surface reflectance; to do so, it is necessary to combine information across image locations. At the same time, such integration should not extend across illumination boundaries, or estimation performance will degrade. The present experiments study the ability of observers to identify changes of illumination on the basis of photometric information.
METHOD: On each trial, two side-by-side 5 × 5 checkerboards were presented. Both were simulations of grayscale surfaces with randomly drawn reflectances. For one checkerboard, the illumination was spatially uniform. For the other, there were two illumination intensities, with between one and five signal patches simulated under an illuminant more intense than that used for the rest. The subject's task was to indicate which checkerboard had spatially non-uniform illumination. The intensity of the second illuminant was varied to determine the discrimination threshold. In one condition, the locations of the patches simulated under the second illuminant were fixed and known to the subject. In a second condition, these locations were chosen randomly on every trial. Performance was compared to that of an ideal observer.
RESULTS: Increasing the number of signal patches improved performance in both the location-known and the location-randomized conditions. Knowledge of location also improved performance. The overall pattern of results is predicted by the ideal observer, but efficiency was higher in the location-known condition.
CONCLUSION: Subjects are able to integrate information across image locations to segment illuminants.