Vision Sciences Society Annual Meeting Abstract | September 2019
Color Constancy in Deep Neural Networks
Author Affiliations & Notes
  • Alban C Flachot
    Department of Experimental Psychology, Giessen University
  • Heiko H Schuett
    Center for Neural Science, New York University
  • Roland W Fleming
    Department of Experimental Psychology, Giessen University
  • Felix Wichmann
    Neural Information Processing, Tuebingen University
  • Karl R Gegenfurtner
    Department of Experimental Psychology, Giessen University
Journal of Vision September 2019, Vol.19, 298. doi:https://doi.org/10.1167/19.10.298
Abstract

Color constancy contributes to our visual system's ability to recognize objects. Here, we explored whether and how deep neural networks can learn to identify the colors of objects across varying illuminations. We devised a 6-layer feedforward network (3 convolutional layers, 2 fully connected layers, and one classification layer). The network was trained to classify the reflectances of objects. Stimuli consisted of the cone absorptions in rendered images of 3D objects, generated from 2115 different 3D models, the reflectances of 330 different Munsell chips, and 265 different natural illuminations. One model, Deep65, was trained under a fixed daylight D65 illumination, while another, DeepCC, was trained under varying illuminations. Both networks were capable of learning the task, reaching 69% and 82% accuracy for DeepCC and Deep65, respectively, on their validation sets (chance performance is 0.3%). In cross-validation, however, Deep65 failed when tested on inputs with varying illuminations. This was the case even when chromatic noise was added during training, mimicking some of the effects of the varying illumination. DeepCC, on the other hand, performed at 73% when tested under a fixed D65 illumination. Importantly, color categorization errors were systematic, reflecting distances in color space. We then removed some cues for color constancy from the input images. DeepCC was only slightly affected when we hid a panel of colorful patches that had constant reflectance across all input images. Removing the complete image background deteriorated performance to nearly the level of Deep65. A multidimensional scaling analysis showed that both networks represent Munsell space quite accurately, though more robustly in DeepCC. Our results show that DNNs can be trained on color constancy, and that they use cues similar to those observed in humans (e.g., Kraft & Brainard, PNAS 1999). Our approach allows us to quickly test the effect of image manipulations on constancy performance.
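
To make the described architecture concrete, below is a minimal sketch (in PyTorch) of a network with the stated layer structure: 3 convolutional layers, 2 fully connected layers, and a 330-way classification layer over the Munsell reflectances. The layer widths, kernel sizes, and the 128x128 input resolution are illustrative assumptions, not values reported by the authors.

import torch
import torch.nn as nn

class ColorConstancyNet(nn.Module):
    """6-layer feedforward net: 3 conv layers, 2 fully connected layers,
    and one classification layer over 330 Munsell reflectance classes."""
    def __init__(self, n_classes: int = 330):
        super().__init__()
        # Input: 3-channel images of cone absorptions (L, M, S).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),  # fully connected layer 1
            nn.Linear(256, 256), nn.ReLU(),           # fully connected layer 2
            nn.Linear(256, n_classes),                # classification layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of 8 rendered scenes at the assumed 128x128 resolution.
model = ColorConstancyNet()
logits = model(torch.randn(8, 3, 128, 128))  # logits has shape (8, 330)

In this reading, Deep65 and DeepCC would share this architecture and differ only in their training data (fixed D65 versus varying illuminations).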

Acknowledgement: Deutsche Forschungsgemeinschaft, Cardinal Mechanisms of Perception, SFB/TRR-135