September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Unraveling simultaneous transparency and illumination changes
Author Affiliations
  • Robert Ennis
    Justus-Liebig University, Giessen, Germany
  • Katja Doerschner
    Justus-Liebig University, Giessen, Germany
Journal of Vision August 2017, Vol.17, 135. doi:https://doi.org/10.1167/17.10.135
      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Retinally incident light is an ambiguous product of the spectral distributions of light in the environment and their interactions with reflecting, absorbing, and transmitting materials. An ideal color-constant observer would unravel these confounded sources of information and account for changes in each factor. We have previously shown (VSS, 2016) that when observers view the whole scene, they can disentangle simultaneous changes in the color of the illumination and of the surfaces of opaque objects, although standard global scene statistics from the color constancy literature did not fully account for their behavior. Here, we extended this investigation to simultaneous changes in the color of the illuminant and of glass-like blobby objects (similar to Glavens; Phillips et al., 2016). To simulate changes in the color of the illuminant and of transparent objects, we built a simple physically based, GPU-accelerated rendering system. Color changes were constrained to "red-green" and "blue-yellow" axes. At the beginning of the experiment, observers (n=6) saw examples of the most extreme illuminant/transparency changes for our images and were asked to use these references as a mental scale of illuminant/transparency change (0% to 100%). They then used this scale to judge the magnitude of illuminant/transparency change between sequential, random pairs of images (2 s per image), viewing either the whole scene or only the object itself (produced by masking the rest of the scene). Observers could extract simultaneous illuminant/transparency changes when provided with a view of the whole scene, but performed worse when viewing only the object. Global scene statistics did not fully account for their behavior in either condition. We take this as evidence that observers use local changes in shadows, highlights, and caustics across different objects to determine the properties of the illuminant and of the objects it illuminates.
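The abstract's rendering pipeline is not included here, but the parameterization it describes (a color change expressed as 0% to 100% of the distance along a cardinal axis) can be loosely illustrated. The sketch below is an assumption-laden toy model, not the authors' code: the axis endpoint colors, the linear-RGB representation, and the channel-wise multiplicative model of a transparent filter are all hypothetical simplifications.

```python
def interpolate_color(c_from, c_to, pct):
    """Return the color pct% (0-100) of the way from c_from to c_to,
    interpolated linearly per channel (a toy stand-in for the study's
    graded illuminant/transparency changes)."""
    return [a + (pct / 100.0) * (b - a) for a, b in zip(c_from, c_to)]

# Hypothetical endpoints of a "blue-yellow" illuminant axis (linear RGB).
blue = [0.7, 0.8, 1.0]
yellow = [1.0, 0.9, 0.6]

# A 40% illuminant change along that axis.
illuminant = interpolate_color(blue, yellow, 40)

# Crude multiplicative model of light passing through a transparent
# object: illuminant * transmittance, channel-wise. This ignores the
# caustics, shadows, and interreflections a physically based renderer
# would produce.
transmittance = [0.9, 0.5, 0.5]  # hypothetical reddish glass
filtered = [i * t for i, t in zip(illuminant, transmittance)]
```

In the actual study the stimuli were physically based renderings; this linear interpolation only conveys how a "percent change" along a red-green or blue-yellow axis can be made into a well-defined quantity for observers to estimate.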

Meeting abstract presented at VSS 2017
