**Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.**

We used the *u*′*v*′*Y* color coordinates to calculate the saturation in an equal-color space. The *u*′*v*′ saturation $s_{uv}$ and hue $H_{uv}$ were defined as follows:

$$s_{uv}(i,j) = \sqrt{\left(u'(i,j) - u'_n\right)^2 + \left(v'(i,j) - v'_n\right)^2} \tag{1}$$

$$H_{uv}(i,j) = \tan^{-1}\!\left(\frac{v'(i,j) - v'_n}{u'(i,j) - u'_n}\right) \tag{2}$$

where $u'(i,j)$ and $v'(i,j)$ are the *u*′*v*′ chromaticity coordinates of the pixel at position (*i*, *j*). The *u*′*v*′ chromaticity coordinates of the white point ($u'_n$, $v'_n$) were (0.2032, 0.4963), which corresponds to (0.3465, 0.3760) in the *xy* chromaticity coordinates. The white point was measured using the white checker of the Macbeth Color Checker positioned under the current lighting condition.
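As a minimal sketch of Equations 1–2, assuming the pixel chromaticities are already available as NumPy arrays of *u*′ and *v*′ values (the RGB-to-*u*′*v*′ conversion is omitted):

```python
import numpy as np

# White point in u'v' chromaticity coordinates (from the text).
U_N, V_N = 0.2032, 0.4963

def uv_saturation_hue(u, v, un=U_N, vn=V_N):
    """Per-pixel u'v' saturation (Eq. 1) and hue angle (Eq. 2).

    u, v : arrays of u'v' chromaticity coordinates, one entry per pixel.
    Saturation is the Euclidean distance from the white point; hue is
    the angle around it (atan2 keeps the correct quadrant).
    """
    du, dv = u - un, v - vn
    s_uv = np.sqrt(du**2 + dv**2)
    h_uv = np.arctan2(dv, du)
    return s_uv, h_uv
```

For example, a pixel offset by (0.03, 0.04) from the white point has saturation 0.05, independent of its luminance.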

The distribution of pixel chromaticities in the *u*′*v*′ chromaticity coordinates is shown separately for dry and wet surfaces. The white circle indicates the white point, and the distance between each dot and the white point indicates how saturated the color of each dot is. The green line in each figure indicates the distance averaged across all materials. The mean saturation across all materials in the wet condition is 1.43 times larger than in the dry condition.

Other luminance statistics (e.g., the standard deviation [*SD*] and kurtosis) are not affected by wetting in a systematic way. These results are consistent with the theoretical analysis described in the previous section.

The *xy* chromaticity coordinates of the white point of the display were (0.3331, 0.3615). A participant viewed the stimuli in a dark room at a viewing distance of 57 cm.

The WET transformation converts the luminance $Y_{original}$ (which had been normalized in the range from 0 to 1) into $Y_{wet}$ as follows:

$$Y_{wet}(i,j) = Y_{original}(i,j)^{\,gamma} \tag{3}$$

where (*i*, *j*) indicates the pixel position. The coordinates of white ($u'_n$, $v'_n$) for calculating the saturation $s_{uv}$ in Equation 1 were (0.1998, 0.4874). To increase the luminance skew, we set *gamma* = 3. To enhance the saturation of the texture, we added 0.26 to the $s_{uv}$ of each pixel. In deciding these parameters of the WET transformation, we preliminarily explored a range of parameters and chose ones that did not disturb the apparent naturalness of the images used in Experiment 1a.
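Under the assumption that the image is already represented as per-pixel (*Y*, *u*′, *v*′) arrays (the color-space conversions to and from RGB are omitted), the WET operator above can be sketched as:

```python
import numpy as np

def wet_transform(Y, u, v, gamma=3.0, ds=0.26, un=0.1998, vn=0.4874):
    """Sketch of the WET operator on a (Y, u', v') image.

    Y     : luminance, normalized to [0, 1]
    u, v  : per-pixel u'v' chromaticities
    gamma : exponent of Eq. 3 (gamma = 3 darkens and skews luminance)
    ds    : constant added to each pixel's u'v' saturation s_uv
    """
    Y_wet = Y ** gamma                      # Eq. 3
    du, dv = u - un, v - vn
    s = np.hypot(du, dv)                    # Eq. 1: u'v' saturation
    scale = np.ones_like(s)
    # Push each chromaticity away from the white point so that its
    # saturation becomes s + ds; achromatic pixels (s = 0) have no hue
    # direction and are left at the white point.
    np.divide(s + ds, s, out=scale, where=s > 0)
    u_wet = un + du * scale
    v_wet = vn + dv * scale
    return Y_wet, u_wet, v_wet
```

For a pixel with *Y* = 0.5 and saturation 0.05, the sketch yields *Y* = 0.125 and saturation 0.31, as Equations 1 and 3 prescribe.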

The wetness ratings were significantly higher for the transformed images than for the original images (*F*(1, 12) = 71.384, *p* < 0.0001). The results show that the perceived wetness was on average much stronger for the transformed images than for the original ones (Figure 7b). In this experiment, the wet rating score of the transformed images was not very high. One reason may be that we used mild WET parameters to preserve the naturalness of all the transformed images. Another reason may be that the WET operator does not create the water drops or puddles included in the reference images for the highest score.

A condition in which the saturation was set to zero ($s_{uv}$ = 0) was also used. In the experiment, a surface image taken with a standard digital camera (Nikon D5100) was used (Figure 8). The main effects of the luminance skew and the color saturation, as well as their interaction, were statistically significant (*F*(6, 66) = 28.504, *p* < 0.0001; *F*(14, 154) = 35.741, *p* < 0.0001; and *F*(84, 924) = 2.3671, *p* < 0.0001, respectively). The results show that the skew modulation was effective for the wetness perception even without the mean luminance change. Furthermore, over the whole range we tested, an increase in the luminance skew, or in the color saturation, monotonically increased the wetness impression (Figure 8c). It is noteworthy that the wetness rating approached the highest score (5) when the luminance skew was the most positive and the color saturation was the highest.

The *u*′*v*′ saturation, the one we used, is independent of luminance, whereas another definition, chroma $C^*_{uv}$, is luminance-dependent, as follows:

$$C^*_{uv} = \sqrt{(u^*)^2 + (v^*)^2} \tag{4}$$

$$u^* = 13\,L^*\,(u' - u'_n) \tag{5}$$

$$v^* = 13\,L^*\,(v' - v'_n) \tag{6}$$

$$L^* = 116\,(Y/Y_n)^{1/3} - 16 \tag{7}$$

where $L^*$, $u^*$, and $v^*$ are the coordinates of the CIE $L^*u^*v^*$ color space, and $C^*_{uv}$ is defined as the distance of each pixel's chromaticity in the $u^*v^*$ plane from the white point (0, 0). Since the definitions of $u^*$ and $v^*$ are luminance-dependent, chroma $C^*_{uv}$ is also luminance-dependent. By combining Equations 4–7 with Equation 1, chroma $C^*_{uv}$ can be described as follows:

$$C^*_{uv} = 13\,L^*\,s_{uv} \tag{8}$$

Because the luminance modulation of the WET operator darkens the image and thus lowers $L^*$, it decreases chroma $C^*_{uv}$. Therefore, one might argue that the saturation enhancement was needed only to compensate for the decrease in chroma due to the luminance modulation. To confirm whether an increase in chroma is necessary to obtain a wetting effect, we continuously changed the saturation of the original image more widely than we did in Experiment 1b.
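The factorization of chroma into a luminance term and the *u*′*v*′ saturation can be checked numerically; the pixel values below are illustrative, and $Y_n$ is taken as the white-point luminance:

```python
import numpy as np

def cie_lightness(Y, Yn=1.0):
    # CIE lightness L* (Eq. 7); valid for Y/Yn above ~0.008856
    return 116.0 * (Y / Yn) ** (1.0 / 3.0) - 16.0

# Illustrative pixel: chromaticity offset from the white point
un, vn = 0.1998, 0.4874
u, v, Y = un + 0.03, vn + 0.04, 0.4

L = cie_lightness(Y)
u_star = 13.0 * L * (u - un)          # Eq. 5
v_star = 13.0 * L * (v - vn)          # Eq. 6
C_uv = np.hypot(u_star, v_star)       # Eq. 4: chroma
s_uv = np.hypot(u - un, v - vn)       # Eq. 1: u'v' saturation

# Eq. 8: chroma = 13 * L* * s_uv, so darkening (lower L*) shrinks chroma
assert np.isclose(C_uv, 13.0 * L * s_uv)
```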

The *u*′*v*′ saturation of two surface images from the McGill textures was continuously modulated by a histogram-matching method. The methods of the transformation were the same as in Experiment 1b. The mean *u*′*v*′ saturation of each image ranged from 0.065 to 0.52 in steps of 0.065. The standard deviation was set to 0.013. In addition, the skew of each luminance distribution was set to −0.8 and 0.8. The mean luminance of the surface image was set to 0.33. The luminance standard deviations of the two surfaces were set to 0.09 and 0.07, respectively.

The horizontal axis indicates the chroma $C^*_{uv}$ of each stimulus, and the vertical axis indicates the wetness rating averaged across observers. The vertical orange line in each panel shows the chroma $C^*_{uv}$ of the original image. The results show that when the mean chroma of the transformed image was around the original chroma, a strong wetting impression could not be obtained even if the luminance skew was modulated. In addition, the wetting impression increased even when the mean chroma of the transformed image was higher than that of the original image. These findings suggest that an increase in chroma is necessary to obtain a strong wetting effect.

The hue of each pixel was defined as in Equation 2 from the *u*′*v*′ chromaticity coordinates. The global entropy of an image is defined as follows:

$$H = -\sum_{g=0}^{G-1} P(g)\,\log_2 P(g) \tag{13}$$

where $I_{(p)}$ is the pixel value at the pixel position *p*, *g* is the discrete pixel level, which ranges from 0 to (*G* − 1), and *P*(*g*) is the probability density of the level *g* in the pixel histogram. For the hue entropy, $H_{(p)}$ is the pixel hue at the pixel position *p*, and *P*(*g*) is the probability density of each discrete hue level. The spatial entropy is based on the co-occurrence probability $P_{gg'(k,l)}$ that a sample of *g* occurs at distance (*k*, *l*) from another pixel value *g*′. In the present analysis, the distance (*k*, *l*) is within the 3 × 3 neighborhood of the pixel position (*i*, *j*). For each (*k*, *l*), an entropy value $H_{(k,l)}$ is defined as follows:

$$H_{(k,l)} = -\sum_{g=0}^{G-1}\sum_{g'=0}^{G-1} P_{gg'(k,l)}\,\log_2 P_{gg'(k,l)}$$

$H_{(k,l)}$ ranges from $H_{(0,0)}$ to $2H_{(0,0)}$, which correspond to the entropy defined in Equation 13 and twice that entropy, respectively. For each (*k*, *l*), the relative entropy $H_{(k,l)}/H_{(0,0)}$ was computed, and the spatial entropy value was obtained by averaging it across (*k*, *l*). The number of discrete levels (*G*) of the pixel value *I* was set to 512.

The ratings of each observer were converted into *z* scores.

The stimuli were generated with *Mitsuba* (Jakob, 2010). In a 3-D space, a camera was set 1.5 m from a plain gray surface 3 × 3 m in size, and 750 small cubes were randomly placed in front of it. The width, height, and depth of each cube were randomly selected in a range from 0.01 to 0.1 m. The bidirectional reflectance distribution function (BRDF) of each cube was Lambertian, and its reflectance was defined in the spectral domain. The mean reflectance of each cube was randomly modulated. However, to control the hue entropy of the output image, we used three entropy conditions—small, intermediate, and large entropy—for the peak in the reflectance spectra of the cubes (Figure 11a, upper). For all the conditions, the reflectance spectrum $R_{(\lambda)}$ of each cube was defined as follows:

$$R_{(\lambda)} = a_1 + a_2 \exp\!\left(-\frac{(\lambda - \lambda_p)^2}{2\sigma^2}\right)$$

where *λ* is the wavelength of the light, which ranged from 380 to 780 nm, $\lambda_p$ is the peak of the reflectance spectrum, and *σ* is the standard deviation, which was set to 20 nm. The contrast factor $a_2$ was set to 0.3, and the pedestal reflectance $a_1$ was randomly selected from 0.3 to 0.5. For the small entropy condition, the peak $\lambda_p$ of each cube was always 700 nm. For the intermediate entropy condition, the peak $\lambda_p$ of each cube was randomly selected from either 546 or 700 nm. For the large entropy condition, the peak was randomly selected from 400 to 700 nm. The scenes were rendered by using the photon-mapping algorithm (Jensen, 2001). We used as the environment emitter an environment map downloaded from Bernhard Vogl's website (“At the Window (Wells, UK)”, http://dativ.at/lightprobes/), with the color converted to grayscale. The global hue entropy of the output image (Equation 13) in the small, intermediate, or large entropy condition was 2.8, 4.8, or 6.6, respectively. The transformed images (Figure 11a, lower) were made by applying the WET operator to the original images. Each luminance signal of the original image, which had been normalized in the range from 0 to 1, was transformed by using Equation 3. Each saturation signal of the image was defined as in Equation 1 and multiplied by a factor of three.
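The per-cube reflectance spectra for the three entropy conditions can be sketched as follows (a Gaussian bump on a random pedestal; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(380, 781)  # nm, as in the text

def cube_reflectance(condition):
    """Reflectance spectrum of one cube: Gaussian bump on a pedestal.

    The peak wavelength lambda_p depends on the entropy condition;
    a1 (pedestal) is drawn per cube, a2 and sigma follow the text.
    """
    if condition == "small":
        lam_p = 700.0                         # fixed peak
    elif condition == "intermediate":
        lam_p = rng.choice([546.0, 700.0])    # one of two peaks
    elif condition == "large":
        lam_p = rng.uniform(400.0, 700.0)     # peak anywhere in range
    else:
        raise ValueError(condition)
    a1 = rng.uniform(0.3, 0.5)                # pedestal reflectance
    a2, sigma = 0.3, 20.0                     # contrast factor, bump width (nm)
    return a1 + a2 * np.exp(-((wavelengths - lam_p) ** 2) / (2 * sigma ** 2))

R = cube_reflectance("small")
```

Drawing many such spectra per condition spreads the peaks, and hence the rendered hues, over progressively wider ranges, which is what raises the hue entropy of the output image.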

From the paired ratings, we computed a perceived scale value $Q_{(m)}$ for each stimulus condition, where $M$ indicates the mean rating for the stimulus pair (*m*, *n*) or (*n*, *m*), $P_{(m,n)}$ is the mean rating for *m* over *n*, averaged over the two orders, and $Q_{(m)}$ is the perceived scale value of stimulus condition *m*. We calculated the perceived scale values for each observer for the wetness rating and the darkness rating (Figure 11b).

The main effects of the entropy condition and the image transformation, and their interaction, were statistically significant: *F*(2, 14) = 11.942, *p* < 0.0001; *F*(1, 7) = 189.766, *p* < 0.0001; and *F*(2, 14) = 8.244, *p* = 0.004, respectively. The results show that the wetness perception of a scene image increased with the hue entropy of the image without changes in the luminance spatial entropy or in other factors. Specifically, the perceived wetness of the transformed image in the large entropy condition (Figure 11, open blue circle) was statistically higher than in the small and intermediate entropy conditions (Figure 11, open red circle and open green circle): *t*(7) = 4.988, *p* = 0.005 for small entropy, and *t*(7) = 3.338, *p* = 0.038 for intermediate entropy (Bonferroni-corrected paired *t* tests). There was no significant difference between the wetness values of the transformed image in the small and intermediate entropy conditions: *t*(7) = 0.2494, *p* = 0.748 (Bonferroni-corrected paired *t* test).

For the wetness rating, the main effect of the luminance condition was statistically significant, *F*(2, 14) = 26.440, *p* < 0.0001, while the main effect of the saturation condition and the interaction were not, *F*(1, 7) = 2.665, *p* = 0.147, and *F*(2, 14) = 1.519, *p* = 0.253, respectively. For the darkness rating, the main effects of the luminance and the saturation conditions were statistically significant, *F*(2, 14) = 73.534, *p* < 0.0001 and *F*(1, 7) = 15.078, *p* = 0.006, respectively, while the interaction was not, *F*(2, 14) = 0.382, *p* = 0.689. The results show that decreasing the mean luminance (Figure 13, green) was not enough to significantly increase the wetting impression. Specifically, regardless of the magnitude of color saturation, the apparent darkness differed significantly between the original image (Figure 13, red) and the mean-decreased image (Figure 13, green), *t*(7) = 6.938, *p* = 0.0006 (Bonferroni-corrected paired *t* test), but the apparent wetness did not, *t*(7) = 2.130, *p* = 0.21 (Bonferroni-corrected paired *t* test). In contrast, the original WET operator (Figure 13, blue) produced a strong wetting effect, *t*(7) = 5.341, *p* = 0.003 (Bonferroni-corrected paired *t* test).

The likelihood distributions, *P*(*S*|Dry) and *P*(*S*|Wet), are something like those shown in Figure 4. With regard to the prior probability, we assume that the scene is more likely to be dry than wet: *P*(Dry) > *P*(Wet). The scene *S* is judged as wet when *P*(Dry|*S*) < *P*(Wet|*S*), that is, when

$$P(\mathrm{Dry}) \prod_{x=1}^{N} P(S_x \mid \mathrm{Dry}) < P(\mathrm{Wet}) \prod_{x=1}^{N} P(S_x \mid \mathrm{Wet})$$

where *N* is the number of independent colors present in the scene, which usually increases with the hue entropy.

In a simple numerical example, suppose that the likelihoods are $P(S_{high}|\mathrm{Dry})$ = 0.4, $P(S_{low}|\mathrm{Dry})$ = 0.6, $P(S_{high}|\mathrm{Wet})$ = 0.5, and $P(S_{low}|\mathrm{Wet})$ = 0.5, and that the prior probability is higher for the dry condition than for the wet condition, *P*(Dry) = 0.7 and *P*(Wet) = 0.3. When there is only one color sample and it has high saturation, the model predicts dry, since $P(\mathrm{Dry}|S_{high})$ = 0.28 > $P(\mathrm{Wet}|S_{high})$ = 0.15. On the other hand, when there are five samples and all of them have high saturation, the model predicts wet, since $P(\mathrm{Dry}|S_{high})$ = 0.0072 < $P(\mathrm{Wet}|S_{high})$ = 0.0094.
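The arithmetic of this example reduces to comparing prior-weighted likelihood products:

```python
# Unnormalized posterior scores for the dry/wet decision, using the
# likelihoods and priors from the numerical example in the text.
def posterior_scores(n_high_sat_samples):
    p_dry, p_wet = 0.7, 0.3        # priors: dry scenes are more common
    lik_dry, lik_wet = 0.4, 0.5    # P(S_high | Dry), P(S_high | Wet)
    dry = p_dry * lik_dry ** n_high_sat_samples
    wet = p_wet * lik_wet ** n_high_sat_samples
    return dry, wet

dry1, wet1 = posterior_scores(1)   # 0.28 vs 0.15 -> predicts "dry"
dry5, wet5 = posterior_scores(5)   # ~0.0072 vs ~0.0094 -> predicts "wet"
```

With one saturated sample the dry prior dominates; with five, the likelihood ratio (0.5/0.4)⁵ ≈ 3.05 overcomes the 0.7/0.3 prior, flipping the decision to wet.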

In the simulation, *N* independent colors are randomly sampled from the dry/wet distribution. By repeating this 1,024 times, we obtained the frequency distribution histogram of the combination of $\prod_x P(S_x|\mathrm{Dry})$ and $\prod_x P(S_x|\mathrm{Wet})$, as shown in Figure 15. The blue histogram was obtained when the *N* color samples were randomly taken from the wet color distribution; the red one, when they were randomly taken from the dry color distribution. Both histograms are aligned on a single line for each *N*. The dotted line is the boundary above which the likelihood is higher for wet than for dry. When the number of samples is one (*N* = 1), the blue and red histograms significantly overlap each other. It is hard for the observer to tell whether the sample is from the wet or the dry distribution only from the likelihood terms (and the prior produces a decision bias toward dry). As the number of samples increases, however, the blue and red distributions gradually separate from each other, making it easier to tell whether the color samples are from the wet or the dry distribution (unless a strong prior bias in favor of dry overrides the difference in the likelihood terms). The model thus explains how an increase in hue entropy, or in the number of different color samples, elevates the reliability of color saturation as a cue for judging surface wetness.
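A small Monte Carlo version of this simulation, using the two-point high/low saturation likelihoods from the numerical example (the seed and the specific *N* values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-point saturation likelihoods, as in the numerical example above.
lik = {"dry": {"high": 0.4, "low": 0.6},
       "wet": {"high": 0.5, "low": 0.5}}

def sample_log_likelihoods(source, n, trials=1024):
    """Draw n colors per trial from `source`; return each trial's
    log-likelihood under the dry and the wet hypotheses (log of the
    product terms in the decision rule)."""
    highs = rng.binomial(n, lik[source]["high"], size=trials)
    lows = n - highs
    ll_dry = highs * np.log(lik["dry"]["high"]) + lows * np.log(lik["dry"]["low"])
    ll_wet = highs * np.log(lik["wet"]["high"]) + lows * np.log(lik["wet"]["low"])
    return ll_dry, ll_wet

# Fraction of wet-generated trials whose likelihood already favors wet:
for n in (1, 5, 20):
    ll_dry, ll_wet = sample_log_likelihoods("wet", n)
    frac_correct = np.mean(ll_wet > ll_dry)
```

As *N* grows, the fraction of wet-generated trials falling above the likelihood boundary rises, mirroring the increasing separation of the blue and red histograms in Figure 15.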

*i-Perception*, 3 (5), 338–355.
*Journal of the Optical Society of America A*, 3 (10), 1743–1751.
*Journal of Statistical Software*, 31 (10), 1–21.
*Spatial Vision*, 10 (4), 433–436.
*Journal of the Optical Society of America A*, 14 (7), 1393–1411.
*IEEE Proceedings - Vision, Image and Signal Processing*, 142 (3), 128–132.
*The Visual Computer*, 28 (6–8), 765–774.
*Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques* (pp. 229–238). New York: ACM.
*Realistic image synthesis using photon mapping*. Natick, MA: A. K. Peters.
*Rendering Techniques '99* (pp. 273–281). New York: Springer.
*Nature Neuroscience*, 6 (6), 641–644.
*Perception as Bayesian inference*. Cambridge, UK: Cambridge University Press.
*Proceedings of the National Academy of Sciences*, 96 (1), 307–312.
*Applied Optics*, 27 (7), 1278–1280.
*ACM Transactions on Graphics (TOG)*, 27 (3), 49:1–49:8.
*Synthesis of material drying history: Phenomenon modeling, transferring and rendering*. Paper presented at the ACM SIGGRAPH 2006 Courses, Boston, MA.
*Journal of the Optical Society of America A*, 3 (1), 29–33.
*Journal of Experimental Biology*, 146 (1), 21–38.
*Nature*, 447 (7141), 206–209.
*ACM Transactions on Graphics*, 25 (3), 935–944.
*Journal of Vision*, 11 (11): 397, doi:10.1167/11.11.397. [Abstract]
*Perception*, 33 (12), 1463–1473.
*Spatial Vision*, 10 (4), 437–442.
*Proceedings of SPIE 7257, Visual Communications and Image Processing 2009*, 72571X, 1–10.
*Proceedings of Pacific Graphics* (pp. 319–328). Hoboken, NJ: Wiley-Blackwell.
*Vision Research*, 109, 209–220.
*Journal of the American Statistical Association*, 47 (259), 381–400.
*Journal of the Optical Society of America A*, 25 (4), 846–865.
*Optometry & Vision Science*, 66 (5), 288–295.
*Journal of the Royal Statistical Society: Series B (Methodological)*, 58, 267–288.
*Applied Optics*, 25 (3), 431–437.
*Nature Neuroscience*, 5 (6), 598–604.
*Color science* (Vol. 8). New York: Wiley.
*Journal of the Optical Society of America A*, 14 (10), 2608–2621.