September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Seeing through transparent layers
Author Affiliations
  • Dicle Dovencioglu
    Justus Liebig University of Giessen (JLU Giessen), Department of General Psychology, Giessen, Germany
  • Andrea van Doorn
    University of Leuven (KU Leuven), Laboratory of Experimental Psychology, Leuven, Belgium
    Utrecht University, Experimental Psychology, Utrecht, The Netherlands
  • Jan Koenderink
    University of Leuven (KU Leuven), Laboratory of Experimental Psychology, Leuven, Belgium
    Utrecht University, Experimental Psychology, Utrecht, The Netherlands
  • Katja Doerschner
    Justus Liebig University of Giessen (JLU Giessen), Department of General Psychology, Giessen, Germany
Journal of Vision August 2017, Vol.17, 321. doi:https://doi.org/10.1167/17.10.321
Dicle Dovencioglu, Andrea van Doorn, Jan Koenderink, Katja Doerschner; Seeing through transparent layers. Journal of Vision 2017;17(10):321. https://doi.org/10.1167/17.10.321.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Humans are good at estimating the causes of changes in visual information by perceptually dividing complex visual scenes into multiple layers; this is also true when objects are viewed through a transparent layer. For example, we can drive effectively through heavy fog or heavy rain, or decide whether an object in a river is animate while fishing. In such complex scenes, changes in visual information might be due to observer motion, object motion, deformations of the transparent medium, or a combination of these. Recent research has shown that image deformations can provide information for attributing various properties to transparent layers, such as their refractive index, thickness, or transparency. However, different transparent media can cause similar amounts of refraction, or two media can be rated as similarly translucent even though one is foggier. Despite our rich lexicon for describing the nature of a transparent layer, the optical and geometrical properties that identify each class of transparent layer remain to be discovered. Here, we use eidolons to estimate equivalence classes for perceptually similar transparent layers. Specifically, we ask whether the specific image deformations that are interpreted as transparency can be described in terms of the parameters of the Eidolon Factory (reach, grain, coherence; https://github.com/gestaltrevision/Eidolon). To create a stimulus space of eidolons of a fiducial image, we kept coherence fixed at 1 and varied the reach and grain levels to systematically increase the amount of local disarray in the image. We asked participants (n = 11) to adjust the reach and grain values simultaneously so that the object in the scene looked as if it were under water. Our results suggest that eidolons with higher grain values (g > 8) form a perceptually equivalent class, and these eidolons give an underwater impression, probably due to the large, wave-like local disarray.

Meeting abstract presented at VSS 2017
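
For readers unfamiliar with how disarray stimuli of this kind are generated, the sketch below illustrates the role of the reach and grain parameters. It is not the Eidolon Factory's actual API (see the repository linked in the abstract for that); it is a minimal single-scale approximation, assuming that reach sets the amplitude of local pixel displacements and grain sets the spatial scale of the smoothed random field that drives them, with coherence effectively fixed at 1 because only one displacement field per axis is used. The function name disarray_eidolon and its defaults are hypothetical.

```python
# Minimal sketch of a single-scale "disarray" warp in the spirit of eidolons.
# NOT the Eidolon Factory's API; parameter names follow the abstract's usage.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def disarray_eidolon(image, reach=8.0, grain=8.0, seed=0):
    """Warp a 2-D grayscale image with a smooth random displacement field.

    reach : peak displacement amplitude in pixels (more reach -> more disarray)
    grain : Gaussian smoothing sigma of the noise field in pixels
            (larger grain -> coarser, wave-like deformations)
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape

    def smooth_noise():
        # Smooth white noise to the requested grain, then scale to +/- reach.
        field = gaussian_filter(rng.standard_normal((h, w)), sigma=grain)
        field /= np.abs(field).max() + 1e-12
        return reach * field

    dy, dx = smooth_noise(), smooth_noise()
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Resample the image at the displaced coordinates (bilinear interpolation).
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode='reflect')

# Example (hypothetical values): coarse grain with moderate reach tends to
# produce the wave-like deformation associated with an "under water" look.
# warped = disarray_eidolon(img, reach=10.0, grain=12.0)
```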
