Abstract
Color constancy has been shown to be high in near-natural situations. Yet the cues used by human observers are difficult to study with real objects and illuminants. We used Virtual Reality to construct realistic, immersive environments that can be manipulated in real time. We created an outdoor forest scene and an indoor office scene under five colored illuminants and selectively silenced individual cues to measure their impact on color constancy. On each trial, observers chose which of five test objects appeared most similar to an achromatic object shown previously. The test objects ranged from a zero-constancy tristimulus match to a perfect reflectance match and beyond (0-133% constancy). Following Kraft and Brainard (1999), we investigated local context, maximum flux, and mean color as cues. To eliminate local context, we placed a constant, rose-colored leaf under each test object. For maximum flux, we ensured that the brightest object in the scene remained constant across illuminants. To preserve the mean reflected light, we either shifted the reflectances of all objects or added new objects in the color direction opposite the illumination change. With all cues present, color constancy indices (CCIs) were high for both the indoor and outdoor scenes (>75%). Silencing the local-context and maximum-flux mechanisms lowered CCIs slightly. In line with previous findings, constancy was severely impaired when the average color was held constant by modifying object reflectances. When new objects were introduced instead, however, the reduction was modest, and several observers showed no impairment at all. Results were consistent across both scenes. Our findings show that VR can be a valuable tool for studying color constancy, and that the computations underlying the gray-world mechanism do not simply operate on a pixel-by-pixel basis. Rather, observers seem to segment the scene and use the parts that are particularly diagnostic of the illumination.
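The 0-133% scale above implies an index anchored at the zero-constancy tristimulus match (0%) and the perfect reflectance match (100%). As a minimal sketch of how such an index is commonly computed (a Brainard-style CCI based on distances in a chromaticity or color space; the function name, coordinates, and color space here are illustrative assumptions, not details taken from the abstract):

```python
import math

def color_constancy_index(chosen, tristimulus_match, reflectance_match):
    """Brainard-style constancy index: CCI = 1 - b/a, where
    a = distance from the zero-constancy (tristimulus) match to the
        perfect reflectance match, and
    b = distance from the observer's chosen match to the reflectance match.
    CCI = 0 means no constancy; CCI = 1 means perfect constancy;
    values above 1 correspond to overcompensation (e.g. 133%)."""
    a = math.dist(tristimulus_match, reflectance_match)
    b = math.dist(chosen, reflectance_match)
    return 1.0 - b / a

# Hypothetical CIELAB (a*, b*) coordinates for illustration only:
tristimulus = (20.0, 5.0)   # zero-constancy match (0%)
reflectance = (0.0, 0.0)    # perfect reflectance match (100%)
chosen = (4.0, 1.0)         # observer's selected test object
print(color_constancy_index(chosen, tristimulus, reflectance))  # → 0.8
```

Under this convention, a test object lying beyond the reflectance match on the line away from the tristimulus match yields a CCI above 1, matching the 133% endpoint of the stimulus range.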