Abstract
We use color to select objects and to decide how to interact with them. How color supports cross-illumination object selection, however, is not well understood, particularly for naturalistic tasks. In our experiments, subjects performed a naturalistic selection task that relied on color. On each trial, subjects viewed three simulated scenes: the model, the source, and the workspace. The model contained four target blocks that differed in reflectance; their arrangement varied across trials. The source contained eight blocks, one pair of competitors for each target block; the color similarity of the competitors to their target varied across trials. The subjects' task was to recreate the model arrangement of blocks in the workspace by selecting blocks from the source. The model scene was rendered under a standard illuminant, while the source and workspace scenes were rendered under a test illuminant, so that subjects selected blocks across an illuminant change as they copied the model. By analyzing subjects' choices across a series of pairwise combinations of competitors, we inferred a selection-based match for each target and illuminant change using a variant of maximum likelihood difference scaling. From these selection-based matches we computed selection-based color constancy indices (CCIs), which could range from 0 (no constancy) to 1 (perfect constancy). Overall, constancy was fairly good in our naturalistic task (mean CCI = 0.46) but varied considerably across subjects (0.22–0.86). Our results show that the visual system's mechanisms of color constancy support veridical object selection across changes in illumination.
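For orientation, one standard way to define such an index is a Brunswik-ratio-style formula; the sketch below is an illustration under assumed definitions, not necessarily the exact computation used in this study. Let $\mathbf{m}$ denote the color coordinates of the selection-based match, $\mathbf{t}$ those of the target rendered under the test illuminant (the perfect-constancy prediction), and $\mathbf{n}$ those of the zero-constancy prediction (a tristimulus match to the target under the standard illuminant):
\[
\mathrm{CCI} \;=\; 1 - \frac{\lVert \mathbf{m} - \mathbf{t} \rVert}{\lVert \mathbf{n} - \mathbf{t} \rVert},
\]
so that $\mathrm{CCI} = 1$ when the selection-based match coincides with the perfect-constancy prediction and $\mathrm{CCI} = 0$ when it coincides with the zero-constancy prediction.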