Vision Sciences Society Annual Meeting Abstract | September 2018
Journal of Vision, Volume 18, Issue 10 | Open Access
Investigation of Lightness Illusions in Artificial Neural Networks
Author Affiliations
  • Leslie Wöhler
    Institut für Computergraphik, Technische Universität Braunschweig
  • Marcus Magnor
    Institut für Computergraphik, Technische Universität Braunschweig
Journal of Vision September 2018, Vol. 18, 174. doi: https://doi.org/10.1167/18.10.174
Abstract

There has been extensive research on lightness illusions such as Hermann grids and Mach bands, as they offer fundamental insights into human perception and the processes of the human visual system. Inspired by the information processing of biological neural networks, artificial neural networks currently achieve state-of-the-art results on challenging tasks including object classification and detection. We investigated how different artificial neural networks trained to decompose input images into albedo and illumination perceive well-known lightness illusions. In 2007, Corney et al. constructed an artificial neural network to solve this decomposition task and found that its perception of lightness illusions was very similar to that of humans. To gain more insight, we created and trained a convolutional neural network (CNN) on the same dataset and compared the results. Moreover, we retrained the original artificial neural network on a synthetic dataset and included comparisons with four pretrained, published CNNs, which are all designed for the same task but use different architectures and training data. We found significant differences between the networks. The original network by Corney et al. perceived all tested illusions similarly to humans. Even though we used the same training data, our CNN behaves differently and responds to fewer illusions. Changing the training data to a synthetic dataset while keeping the architecture of Corney et al. also changes the network's behavior and prevents it from perceiving some of the illusions. Moreover, two of the pretrained, published CNNs were trained on the same dataset and display similar behavior toward the illusions. We conclude that both the architecture and the training data influence how artificial neural networks perceive lightness illusions. In our experiments, the CNNs were more robust to these illusions than the artificial neural networks without convolutional layers.
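
To illustrate the kind of probing described above, the sketch below (a hypothetical illustration, not the authors' implementation) constructs a simultaneous brightness contrast stimulus and asks a decomposition model for the reflectance of two physically identical gray patches. The `decompose` function is a crude local-mean stand-in for a trained albedo/illumination network; all names and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def make_simultaneous_contrast(size=128, patch=24, gray=0.5):
    """Identical gray patches on a dark (left) and a light (right) background."""
    img = np.full((size, size), 0.2, dtype=np.float32)   # dark half
    img[:, size // 2:] = 0.8                              # light half
    cy = size // 2
    for cx in (size // 4, 3 * size // 4):                 # one patch per half
        img[cy - patch // 2: cy + patch // 2,
            cx - patch // 2: cx + patch // 2] = gray
    return img

def decompose(image):
    """Crude stand-in for a trained network: treat a local mean as 'illumination'
    and the ratio image as 'albedo'. The networks in the abstract are learned."""
    illumination = uniform_filter(image, size=31, mode="nearest")
    albedo = image / np.maximum(illumination, 1e-6)
    return albedo, illumination

img = make_simultaneous_contrast()
albedo, _ = decompose(img)

# Mean predicted reflectance inside each (physically identical) patch.
cy, p = img.shape[0] // 2, 8
left = albedo[cy - p: cy + p, img.shape[1] // 4 - p: img.shape[1] // 4 + p].mean()
right = albedo[cy - p: cy + p, 3 * img.shape[1] // 4 - p: 3 * img.shape[1] // 4 + p].mean()
print(f"predicted reflectance - left patch: {left:.3f}, right patch: {right:.3f}")
```

A model that "perceives" the illusion, as a human observer would, assigns different reflectances to the two identical patches; a model that is robust to the illusion reports them as equal.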

Meeting abstract presented at VSS 2018
