Abstract
The neural mechanisms underlying color constancy remain poorly understood. It is generally accepted that the key operation underlying color constancy is the estimation of the scene illuminant(s), but because illuminant estimation has proven to be a very difficult computational problem, color constancy algorithms continue to fall far short of human performance. Many existing color constancy algorithms attempt to determine the illuminant based solely on the information contained within the scene. We have developed a Bayesian-inspired approach that also exploits prior knowledge of likely illuminants, built on the following five heuristics: (1) the most probable illuminant in a scene lies along the line (in color space) that connects the scene average M and the a priori most probable illuminant L; (2) the scene average M is computed using a single chromaticity estimate per object surface (i.e., one surface, one vote), rather than a single estimate per image pixel; (3) object surface contributions to the scene average M are weighted by surface brightness, reflecting the well-known principle that the brightest surfaces in a scene are the most reliable indicators of the illuminant's color; (4) scenes with high surface counts, which contain relatively more illuminant information, pull the illuminant estimate radially outward towards M; and (5) scenes with large color gamuts pull the illuminant estimate radially inward towards L. To test our algorithm, we developed (1) a database methodology that produces large numbers of composite color images with varying backgrounds, constructed from images of 100 real objects shot under 5 different illuminants, and (2) a color-histogram recognition benchmark, which we have used to compare our approach to color constancy against several other published algorithms. In our most recent tests, our algorithm achieved a 67.4% recognition rate, compared to 37.4% for unprocessed images, outperforming all other algorithms tested.
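The abstract states the five heuristics but not their functional forms. The Python sketch below shows one plausible reading: a brightness-weighted, per-surface scene average M, and an estimate interpolated along the line from L to M, pulled outward by surface count and inward by gamut size. The exponential weighting forms, the constants k_count and k_gamut, and the use of chromaticity spread as a gamut measure are our assumptions for illustration, not details taken from the paper.

import numpy as np

def estimate_illuminant(surfaces, prior_illuminant, k_count=0.05, k_gamut=0.5):
    """Illustrative sketch of the five-heuristic illuminant estimate.

    surfaces: list of (chromaticity, brightness) pairs, one entry per
        detected object surface (heuristic 2: one surface, one vote).
    prior_illuminant: the a priori most probable illuminant L, as a
        chromaticity vector.
    k_count, k_gamut: placeholder blending constants (assumed, not
        specified in the abstract).
    """
    chroma = np.array([c for c, _ in surfaces], dtype=float)   # (n, 2) chromaticities
    weights = np.array([b for _, b in surfaces], dtype=float)  # surface brightnesses

    # Heuristic 3: brightness-weighted scene average M.
    M = np.average(chroma, axis=0, weights=weights)
    L = np.asarray(prior_illuminant, dtype=float)

    # Heuristic 4: more surfaces -> more illuminant information -> pull
    # the estimate radially outward towards M.
    pull_out = 1.0 - np.exp(-k_count * len(surfaces))

    # Heuristic 5: a large color gamut suggests varied surface reflectances
    # rather than a strong illuminant cast -> pull back towards L.
    # Chromaticity standard deviation stands in for gamut size here.
    gamut_spread = chroma.std(axis=0).mean()
    pull_in = np.exp(-k_gamut * gamut_spread)

    # Heuristic 1: the estimate lies on the line segment joining L and M.
    t = np.clip(pull_out * pull_in, 0.0, 1.0)
    return L + t * (M - L)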
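The color-histogram recognition benchmark is likewise named but not specified in the abstract. A minimal sketch of one standard realization follows, assuming chromaticity histograms matched by histogram intersection in the style of Swain and Ballard's color indexing; the bin count, the (r, g) chromaticity space, and the matching score are assumptions.

import numpy as np

def chromaticity_histogram(rgb, bins=16):
    """Normalized 2-D (r, g) chromaticity histogram of an (H, W, 3) image.

    Chromaticity discards intensity, so recognition succeeds or fails on
    the residual color cast a constancy algorithm leaves behind.
    """
    px = rgb.reshape(-1, 3).astype(float)
    s = px.sum(axis=1)
    keep = s > 1e-6                      # drop black/empty pixels
    r = px[keep, 0] / s[keep]
    g = px[keep, 1] / s[keep]
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    # Intersection score: 1.0 for identical normalized histograms.
    return np.minimum(h1, h2).sum()

def recognize(query_rgb, model_histograms):
    """Return the label whose stored histogram best matches the query image."""
    q = chromaticity_histogram(query_rgb)
    return max(model_histograms,
               key=lambda label: histogram_intersection(q, model_histograms[label]))

Under this setup, each of the 100 objects contributes one stored model histogram, and a test composite is counted as correctly recognized when its best-matching model is the true object.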
This work was supported by ONR, ARO, DARPA, and NIH.