Abstract
Color constancy is defined as the ability to accurately extract the color of a given surface under arbitrary changes of illumination. However, this also requires compensating for changes in the surface's reflectance properties, such as those caused by environmental factors. We investigated the visual system's capacity to separate changes in surface reflectance from simultaneous changes in illumination. Hyperspectral images were taken of table-tennis balls in a white chamber diffusely illuminated by a broadband light source. A blue ball and a yellow ball were photographed under a bluish and a yellowish daylight illuminant, yielding four images. Variations of reflectance and illumination were simulated by linearly interpolating between the recorded spectra of these four images. Four observers completed two tasks, in which they viewed sequentially presented, random pairs of these images (2 s presentation per image). In the first task, they stated whether the ball had changed, the illuminant had changed, or both had changed. In the second task, observers first saw examples of the most extreme illuminant and reflectance changes for our images and were asked to memorize these references as a scale of illumination and reflectance change running from 0% to 100%. They then used this memorized scale to judge the magnitude of illuminant and reflectance change between sequentially displayed pairs of images. On average, observers correctly identified 93% of the reflectance-only changes and 83% of the illumination-only changes, but they were worse at identifying combined reflectance/illumination changes, with 55% accuracy. Observers were consistent in their magnitude reports for each change, but tended to overestimate intermediate changes and underestimate maximal ones. We suggest that, at least for our stimuli, the visual system is poor at handling the ambiguities introduced by simultaneous changes in reflectance and illumination.
Meeting abstract presented at VSS 2016
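The stimulus-generation step described above (linear interpolation between the recorded spectra of the four images) could be implemented roughly as in the minimal sketch below. This is an illustration only: the abstract does not specify the actual code, so the array names, image dimensions, and the bilinear blending across the reflectance and illumination axes are assumptions.

```python
import numpy as np

# Illustrative sketch, not the authors' implementation.
# Each hyperspectral image is assumed to be a cube of shape
# (height, width, n_wavelengths): one ball (blue/yellow) photographed
# under one daylight illuminant (bluish/yellowish).

def interpolate_stimulus(cubes, w_refl, w_illum):
    """Blend the four recorded spectral cubes.

    cubes   : dict keyed by (ball, illuminant), e.g. ("blue", "bluish"),
              each value a (H, W, L) array of spectral radiance.
    w_refl  : 0.0 -> blue ball,      1.0 -> yellow ball.
    w_illum : 0.0 -> bluish daylight, 1.0 -> yellowish daylight.
    """
    # Interpolate along the reflectance axis under each illuminant...
    under_bluish = ((1 - w_refl) * cubes[("blue", "bluish")]
                    + w_refl * cubes[("yellow", "bluish")])
    under_yellowish = ((1 - w_refl) * cubes[("blue", "yellowish")]
                       + w_refl * cubes[("yellow", "yellowish")])
    # ...then along the illumination axis (a bilinear blend overall).
    return (1 - w_illum) * under_bluish + w_illum * under_yellowish

# Example: a 50% reflectance change combined with a 25% illuminant change,
# using random placeholder data in place of the recorded spectra.
H, W, L = 64, 64, 31  # assumed image size and number of wavelength bands
rng = np.random.default_rng(0)
cubes = {(ball, ill): rng.random((H, W, L))
         for ball in ("blue", "yellow") for ill in ("bluish", "yellowish")}
stimulus = interpolate_stimulus(cubes, w_refl=0.5, w_illum=0.25)
```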