September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2018
Distinguishing Mirror from Glass
Author Affiliations
  • Hideki Tamura
    Department of Computer Science and Engineering, Toyohashi University of Technology; Japan Society for the Promotion of Science
  • Konrad Prokott
    Department of Psychology, Justus-Liebig-University Giessen
  • Roland Fleming
    Department of Psychology, Justus-Liebig-University Giessen
Journal of Vision September 2018, Vol.18, 227. doi:https://doi.org/10.1167/18.10.227
Abstract

Telling mirror from glass is highly challenging because both materials derive their appearance from their surroundings. Despite this, humans readily discriminate them, even when colour and luminance distributions are matched. To test how different visual cues contribute to this ability, we trained classifiers to discriminate renderings based on a range of features, and compared their responses to human mirror/glass classifications on an image-by-image basis. We created over 750,000 renderings with either ideal mirror or ideal refractive materials, varying the shape, illumination and viewpoint. Three classifiers were then defined by features based on simple pixel histograms ('Simple'), Portilla-Simoncelli texture statistics ('PS'), and three-layer convolutional neural networks ('CNN'). For randomly selected renderings, humans and all three classifiers performed well. Such high performance makes it hard to determine which cues the visual system uses, so to distinguish more precisely between classifiers, we selected a smaller subset of images for which the classifiers' responses were inconsistent, or consistently incorrect, forcing the classifiers' accuracy to near chance. However, fifteen human observers judged those stimuli with 85% accuracy, suggesting humans use additional cues. The key challenge is to predict both successes and failures of human perception, so we then used Generative Adversarial Networks trained on renderings to create a new stimulus set that uniformly spanned the range from highly diagnostic to highly ambiguous. We then used Bayesian hyper-parameter search to identify CNN architectures that, when trained on standard renderings, also correlate highly with humans on these more ambiguous images. The resulting networks and images reveal many novel cues: e.g., mirrored surfaces tend to exhibit smooth, saturated colour gradients, while glass images have distinctive bright, low-saturation fringes. These insights allow us to create novel illusions.
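To illustrate the flavour of the 'Simple' pixel-histogram baseline described above, here is a minimal sketch assuming NumPy. The synthetic image distributions and the hand-rolled logistic-regression classifier are illustrative stand-ins, not the study's actual renderings or trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

def histogram_features(image, bins=16):
    """Reduce an image to a normalized pixel-intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def make_image(label):
    """Hypothetical stand-ins for the two material classes: 'mirror-like'
    images get a broad intensity spread, 'glass-like' ones a narrow,
    bright one. Purely illustrative distributions."""
    if label == 1:
        return np.clip(rng.normal(0.5, 0.3, (32, 32)), 0.0, 1.0)
    return np.clip(rng.normal(0.7, 0.1, (32, 32)), 0.0, 1.0)

# Build a small labelled training set of histogram features.
labels = rng.integers(0, 2, 400)
X = np.array([histogram_features(make_image(y)) for y in labels])
y = labels.astype(float)

# Tiny logistic-regression classifier fit by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(mirror)
    grad = p - y                            # gradient of the log loss
    w -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean()

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the two synthetic intensity distributions differ strongly, even this linear classifier on raw histograms separates them well; the abstract's point is that such simple features also do surprisingly well on real mirror/glass renderings, until deliberately ambiguous images are selected.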

Meeting abstract presented at VSS 2018
