Abstract
A fundamental challenge for visual perception is to extract stable representations of the three-dimensional physical world from the retinal images. The challenge arises because the proximal retinal images are ambiguous about underlying distal scene properties. As but one of many examples, the spectrum of the light reflected from an object and imaged on the retina depends both on the object's intrinsic surface reflectance and on the spectral power distribution of the illuminant, so that the reflected spectrum does not uniquely indicate the object's reflectance.
For perceived color to be a reliable indicator of object surface reflectance, visual processing must stabilize it against variation in object-extrinsic factors such as the illuminant. To the extent that a visual system accomplishes such stabilization, we say that it is color constant (or lightness constant in the special case where the stimuli are restricted to be achromatic).
In this talk, I will review psychophysical measurements of human color constancy, with an emphasis on experiments that explicitly introduce surface-illuminant ambiguity into the laboratory stimulus ensemble. I will then connect these measurements to a computational model of how an ideal visual system would achieve color constancy for the same stimulus ensemble, and argue that the successes and failures of human constancy match those of the ideal system. Finally, as a step toward elucidating the neural mechanisms underlying color and lightness constancy, I will describe recent physiological measurements that probe how populations of neurons in primate visual cortex represent surface lightness across changes in illumination.
Friday, October 13, 4:00–5:30 PM