Abstract
Retinally incident light is an ambiguous product of the spectral distributions of light in the environment and their interactions with reflecting, absorbing, and transmitting materials. An ideal color-constant observer would unravel these confounded sources of information and account for changes in each factor. We have previously shown (VSS, 2016) that when observers view the whole scene, they can disentangle simultaneous changes in the color of the illumination and of the surfaces of opaque objects, although standard global scene statistics from the color constancy literature did not fully account for their behavior. Here, we extended this investigation to simultaneous changes in the color of the illuminant and of glass-like blobby objects (similar to Glavens; Phillips et al., 2016). To simulate these changes, we built a simple physically based, GPU-accelerated rendering system. Color changes were constrained to "red-green" and "blue-yellow" axes. At the beginning of the experiment, observers (n=6) first saw examples of the most extreme illuminant/transparency changes for our images and were asked to use these references as a mental scale of illumination/transparency change (0% to 100% change). They then used this scale to judge the magnitude of illuminant/transparency change between sequential, random pairs of images (2 s per image), viewed either with the whole scene visible or with only the object itself visible (produced by masking the scene). Observers could extract simultaneous illumination/transparency changes when given a view of the whole scene, but performed worse when viewing only the object. Global scene statistics did not fully account for their behavior in either condition.
We take this as suggesting that observers make use of local changes in shadows, highlights, and caustics across different objects to determine the properties of the illuminant and of the objects it illuminates.
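The opponent-axis manipulation and the global-statistic comparison can be illustrated with a minimal sketch. This is not the authors' rendering system: the function names, the particular axis definitions, and the use of mean chromaticity as the "global scene statistic" are illustrative assumptions only.

```python
import numpy as np

def shift_illuminant(rgb, rg=0.0, by=0.0):
    """Illustrative sketch: move an RGB illuminant along rough opponent axes.

    rg > 0 trades red against green; by > 0 trades blue against red+green.
    These axis definitions are hypothetical, not the ones used in the study.
    """
    r, g, b = rgb
    r, g = r + rg, g - rg          # "red-green" axis
    b = b + by                     # "blue-yellow" axis
    r, g = r - by / 2, g - by / 2
    return np.clip([r, g, b], 0.0, 1.0)

def mean_chromaticity(img):
    """One simple global scene statistic: the mean pixel chromaticity
    (mean RGB normalized to unit sum) over the whole image."""
    m = img.reshape(-1, 3).mean(axis=0)
    return m / m.sum()
```

Comparing `mean_chromaticity` between two rendered images gives a single global predictor of perceived change; the result reported above is that such statistics alone did not fully predict observers' judgments.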
Meeting abstract presented at VSS 2017