December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Pixel-wise color constancy in a Deep Neural Network
Author Affiliations
  • Hamed Heidari-Gorji
    Department of Experimental Psychology, Giessen University, Germany
  • Karl R. Gegenfurtner
    Department of Experimental Psychology, Giessen University, Germany
Journal of Vision December 2022, Vol.22, 4235. doi:https://doi.org/10.1167/jov.22.14.4235
Hamed Heidari-Gorji, Karl R. Gegenfurtner; Pixel-wise color constancy in a Deep Neural Network. Journal of Vision 2022;22(14):4235. https://doi.org/10.1167/jov.22.14.4235.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Color constancy has been studied intensively during the past few decades in humans and machines, but evaluation and comparison are difficult due to a lack of large data sets with hyperspectral ground truth. We used the spectral renderer Mitsuba to render a training data set of more than 100,000 images under various illuminations, utilizing over 1700 3D mesh objects, 1270 reflectance functions of Munsell chips, and three materials (diffuse, plastic, metal). Each image contained multiple objects with varying colors and materials. The validation data set was generated using the same illuminations as the training set, as well as some additional ones, together with 330 different reflectance functions (Munsell chips from the World Color Survey) and new objects. We modified a convolutional U-Net model to output the hue, value, and chroma of each pixel. A naïve model was trained with D65 illumination only, while the color constancy model (U-NET-CC) was trained with the full range of illuminations. The U-NET-CC model recovered surface reflectance at each pixel remarkably well: the average absolute error was 2.8 of 80 Munsell hue units for hue, 0.85 of 9.5 for value, and 1.3 of 16 for chroma. Substantial errors occurred only for the brightest and darkest surfaces, for which humans also find it difficult to estimate hue. Error increased with the size of the illumination change, but we found no difference between naturally occurring illuminants along the daylight axis and artificial ones orthogonal to it. Overall, our U-Net model is capable of assigning a constant reflectance to objects, independent of shading, and of discounting the illuminant.
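The per-pixel error metric reported above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and array layout are assumptions, and the key detail is that Munsell hue is circular on an 80-step scale, so the absolute hue difference must wrap around, while value and chroma are ordinary linear scales.

```python
import numpy as np

def munsell_errors(pred, target):
    """Mean absolute per-pixel errors between predicted and ground-truth
    Munsell coordinates.

    pred, target: arrays of shape (..., 3) holding (hue, value, chroma),
    with hue expressed on the circular 0-80 Munsell hue scale.
    Returns the mean absolute errors (hue, value, chroma).
    """
    dh = np.abs(pred[..., 0] - target[..., 0])
    hue_err = np.minimum(dh, 80.0 - dh)          # wrap-around hue distance
    value_err = np.abs(pred[..., 1] - target[..., 1])
    chroma_err = np.abs(pred[..., 2] - target[..., 2])
    return hue_err.mean(), value_err.mean(), chroma_err.mean()
```

For example, a predicted hue of 79 against a ground truth of 1 counts as an error of 2 hue units, not 78, because the hue circle closes on itself.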
