Abstract
Color constancy is the ability to perceive an object as having a constant color appearance despite changes in illumination and surrounding objects. Measurements of human color constancy have identified both conditions where it is very good and conditions where it fails. This is not surprising: the spectrum of the light reaching the eye confounds an object's surface reflectance spectrum with the spectral power distribution of the illuminant. Consistent with this, computational algorithms designed to achieve constancy succeed only for a restricted set of scenes. No algorithm has yet been identified that provides a good description of human performance. Recently, Brainard & Freeman (JOSA A, 14, 1393–1411) developed a Bayesian algorithm for constancy that optimally uses the information about the illuminant carried by the color statistics of an image. Here we compare predictions derived from that algorithm to measurements of human color constancy. Observers adjusted a test patch until it appeared achromatic. Achromatic loci were obtained for 17 different experimental scenes. The scenes consisted of various illuminated objects placed within a small chamber. Both the illuminant and the scene objects were varied. The data have been reported previously (e.g. Kraft & Brainard, PNAS, 96, 307–312). For each scene, a calibrated hyperspectral image (31 monochromatic image planes, 400–700 nm) was acquired and used to compute L-, M-, and S-cone quantal absorption rates. These rates were input to the Brainard/Freeman algorithm. We compared the chromaticities of the algorithm's illuminant estimates with the chromaticities of the measured achromatic loci. The two are in good agreement. The agreement holds both for scenes where the chromaticity of the achromatic locus lies near the chromaticity of the physical illuminant (good constancy) and for scenes where it does not (poor constancy).
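
As a rough illustration of the cone-absorption step described above, the sketch below shows how a calibrated hyperspectral image (31 monochromatic planes, 400–700 nm) can be converted to L-, M-, and S-cone quantal absorption rates by weighting each pixel's spectrum with cone sensitivities. This is a minimal NumPy sketch under stated assumptions (a rows × cols × 31 radiance array and cone fundamentals sampled at the same 31 wavelengths); the array names and placeholder data are ours, not the authors' code or data, and it is not the Brainard/Freeman Bayesian estimator itself.

```python
import numpy as np

# 31 sample wavelengths, 400-700 nm in 10 nm steps (assumed sampling).
wavelengths = np.arange(400, 701, 10)

def cone_absorptions(hyperspectral, cone_fundamentals):
    """Compute per-pixel LMS cone absorption rates.

    hyperspectral     : (rows, cols, 31) spectral radiance image
    cone_fundamentals : (31, 3) L-, M-, S-cone quantal sensitivities
                        sampled at the same 31 wavelengths
    returns           : (rows, cols, 3) LMS image
    """
    # Sum over wavelength of radiance x sensitivity, a discrete
    # approximation to the integral over the visible spectrum.
    return hyperspectral @ cone_fundamentals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((4, 4, len(wavelengths)))      # placeholder radiances
    fundamentals = rng.random((len(wavelengths), 3))  # placeholder sensitivities
    lms = cone_absorptions(scene, fundamentals)
    print(lms.shape)  # (4, 4, 3)
```

In the study, the resulting cone absorption rates for each of the 17 scenes serve as the input to the Bayesian algorithm, whose illuminant estimate is then compared, in chromaticity, with the observers' achromatic loci.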