Alban C Flachot, Heiko H Schuett, Roland W Fleming, Felix Wichmann, Karl R Gegenfurtner; Color Constancy in Deep Neural Networks. Journal of Vision 2019;19(10):298. doi: https://doi.org/10.1167/19.10.298.
© ARVO (1962-2015); The Authors (2016-present)
Color constancy contributes to our visual system's ability to recognize objects. Here, we explored whether and how deep neural networks can learn to identify the colors of objects across varying illuminations. We devised a six-layer feedforward network (three convolutional layers, two fully connected layers, and one classification layer). The network was trained to classify the reflectances of objects. Stimuli consisted of the cone absorptions in rendered images of 3D objects, generated using 2,115 different 3D models, the reflectances of 330 different Munsell chips, and 265 different natural illuminations. One model, Deep65, was trained under a fixed daylight D65 illumination, while the other, DeepCC, was trained under varying illuminations. Both networks were capable of learning the task, reaching 69% and 82% accuracy for DeepCC and Deep65, respectively, on their validation sets (chance performance is 0.3%). In cross-validation, however, Deep65 fails when tested on inputs with varying illuminations. This is the case even when chromatic noise is added during training, mimicking some of the effects of the varying illumination. DeepCC, on the other hand, performs at 73% when tested under a fixed D65 illumination. Importantly, color categorization errors were systematic, reflecting distances in color space. We then removed some cues for color constancy from the input images. DeepCC was only slightly affected when we hid a panel of colorful patches whose reflectances were constant across all input images. Removing the complete image background deteriorated performance to nearly the level of Deep65. A multidimensional scaling analysis of both networks showed that they represent Munsell space quite accurately, but more robustly so in DeepCC. Our results show that DNNs can be trained to achieve color constancy, and that they use cues similar to those observed in humans (e.g., Kraft & Brainard, PNAS 1999). Our approach makes it possible to quickly test the effect of image manipulations on constancy performance.
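The architecture described in the abstract (three convolutional layers, two fully connected layers, and a 330-way classification layer) can be sketched as a shape calculation. Note that the input resolution, channel counts, kernel sizes, strides, and hidden-layer widths below are illustrative assumptions, as the abstract does not specify them; only the layer counts and the 330 output classes come from the text.

```python
# Hedged sketch of the 6-layer feedforward network from the abstract.
# The abstract specifies only 3 conv + 2 fully connected + 1 classification
# layer and 330 Munsell reflectance classes; all hyperparameters below
# (input size, channels, kernels, strides, hidden widths) are assumptions.

def conv_output_size(size, kernel, stride=1, padding=0):
    """Spatial output size of a square convolution (standard formula)."""
    return (size + 2 * padding - kernel) // stride + 1

INPUT_SIZE = 128   # assumed square input of cone absorptions
N_CLASSES = 330    # Munsell reflectance classes (from the abstract)

# (out_channels, kernel, stride) for the three assumed conv layers
conv_layers = [(16, 5, 2), (32, 5, 2), (64, 3, 2)]

size, channels = INPUT_SIZE, 3  # 3 input channels ~ L, M, S cone classes
for out_ch, kernel, stride in conv_layers:
    size = conv_output_size(size, kernel, stride=stride)
    channels = out_ch

flat = channels * size * size          # flatten before the dense layers
fc_layers = [flat, 256, 128]           # two assumed fully connected layers
# The final classification layer maps the last hidden width to 330 classes.
print(f"conv output: {channels}x{size}x{size} -> flattened {flat}")
print(f"fc: {fc_layers[0]} -> {fc_layers[1]} -> {fc_layers[2]} -> {N_CLASSES}")
```

With these assumed strides, three convolutions reduce a 128x128 input to 14x14 feature maps; chance performance of 1/330 matches the 0.3% figure quoted in the abstract.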