The color perceived to belong to the illumination of objects is often based on cues from the scene within which the objects are perceived, instead of being based on any view of the source itself. We present measurements of illuminant color estimation by human observers for moving, spectrally filtered spotlights. The results show that when only one illuminant is in the field of view, estimates of illuminant color are seriously biased by the chromaticities of the illuminated surfaces. When the surround of the spotlight is illuminated by a dimmer second light, spotlight matching moves toward veridical in most conditions. Simulations show that a gray-world model cannot be rejected as an adequate explanation for illuminant color estimation and provides as good a fit as a model that gives greater weight to the brightest surfaces. When the surrounding illuminant is brighter than the spotlight, the situation is similar to that of a moving filter. Spotlight matches are close to veridical, and the results can be fit by a model based on estimating both illuminants.

*A color perception in the illumination mode always accompanies the perception of an object color, yet it is not referred to a definite volume in the illuminant mode, nor is it the perception of the volume color of the space in which the object color is perceived. It is a color perceived to belong to the illumination of the object based on clues from the scene within which the object is perceived instead of being based on any view of the source itself* (Judd, 1961).

Chromaticity coordinates (*x*, *y*) and luminances measured at the maximum luminance were (0.60, 0.34) and 11.6 cd/m^{2} for the R-gun, (0.28, 0.60) and 34.2 cd/m^{2} for the G-gun, and (0.15, 0.07) and 4.8 cd/m^{2} for the B-gun.
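For reference, the gun specifications above determine each gun's tristimulus values through the standard (*x*, *y*, *Y*) → XYZ relations; a minimal sketch using the values quoted in the text:

```python
def xyY_to_XYZ(x, y, Y):
    """Convert CIE 1931 chromaticity coordinates (x, y) and luminance Y
    to tristimulus values (X, Y, Z)."""
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return X, Y, Z

# Gun chromaticities and maximum luminances quoted in the text (cd/m^2).
guns = {
    "R": (0.60, 0.34, 11.6),
    "G": (0.28, 0.60, 34.2),
    "B": (0.15, 0.07, 4.8),
}
XYZ = {name: xyY_to_XYZ(*spec) for name, spec in guns.items()}
```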

Each material with reflectance spectrum *θ*_{i}(*λ*) seen under an illuminant with spectrum *P*_{j}(*λ*) was rendered by first calculating cone absorptions *L*_{ij}, *M*_{ij}, and *S*_{ij} for the Long-, Middle-, and Short-wavelength-sensitive cones (Smith & Pokorny, 1975):
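The rendering step can be sketched as a discrete version of the absorption integrals: each cone catch is the wavelength-by-wavelength product of reflectance, illuminant, and cone fundamental, summed across the sampled spectrum. The four-sample spectra below are toy placeholders, not the Smith–Pokorny fundamentals:

```python
def cone_absorptions(theta, P, fundamentals, d_lambda=1.0):
    """Discrete approximation of L_ij, M_ij, S_ij: for each cone class,
    sum over wavelength of reflectance * illuminant * cone fundamental."""
    return [
        sum(t * p * f for t, p, f in zip(theta, P, fund)) * d_lambda
        for fund in fundamentals
    ]

# Toy 4-sample spectra (placeholders, not real cone fundamentals).
theta = [0.5, 0.5, 0.5, 0.5]          # flat 50% reflectance
P = [1.0, 1.0, 1.0, 1.0]              # equal-energy illuminant
fundamentals = [
    [0.1, 0.4, 0.8, 0.3],             # "L"-like sensitivity
    [0.2, 0.7, 0.5, 0.1],             # "M"-like sensitivity
    [0.9, 0.3, 0.0, 0.0],             # "S"-like sensitivity
]
L, M, S = cone_absorptions(theta, P, fundamentals)
```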

Observers adjusted two switches to select a Match spotlight spectrum *P*_{m}(*λ*) inside the convex hull formed by the linear combination of the Standard spotlight *P*_{t}(*λ*), the Equal Energy spotlight *P*_{n}(*λ*), and the two spotlights *P*_{t−1}(*λ*) and *P*_{t+1}(*λ*) with spectra closest to the Standard spotlight (e.g., Magenta and Yellow for the Red Standard spotlight). The first switch varied Δ_{c} between −1 and +1, adjusting the hue of the Match spotlight. The second switch adjusted Δ_{n} from 0 up to the positive value greater than 1 at which all of the overlaid achromatic materials remained displayable, adjusting the saturation of the Match spotlight. Δ_{c} and Δ_{n} were initially assigned random values on each trial. Stimuli on each trial were presented until the observer had finished adjusting the Match spotlight.

From the values of Δ_{c} and Δ_{n} set by the observer, each match can be converted into an illuminant spectrum and compared with the illuminant spectrum of the Standard spotlight. Because the Match spotlight overlays achromatic surfaces, any spotlight that is metameric with it will also provide a good match to the Standard spotlight. This reflects the fact that observers have access not to spectra but to functions of cone absorptions that lead to perceived colors, and that all metameric lights appear chromatically identical on the same achromatic surfaces. In addition, radiance versus wavelength does not provide a perceptually relevant metric for comparing deviations from veridicality. We have therefore used chromaticities to compare Match spotlights to Standard and predicted spotlights.

Ellipses represent ±1 *SD* along two axes: the axis of chromaticity variation due to Δ_{n} and the chromatic axis orthogonal to this variation (scatter plots of all matches supported these axes as representative of the variance in the matches). Xs represent the chromaticity of the Standard spotlight (veridical matches). Symbols are coded according to the color of the Standard spotlight. The diamonds in each panel represent the mean chromaticity of the background surfaces. In the panel for the chromatically balanced background, the Xs fall on or inside the ±1 *SD* ellipses, and the mean matches deviate from veridical toward the achromatic point (intersection of the dashed horizontal and vertical lines). For the biased backgrounds, very few of the ellipses for the empirical matches contain the corresponding Xs. The mean empirical matches deviate from veridical predominantly in the same direction as the mean background chromaticity deviates from the achromatic point, suggesting a systematic biasing effect of background chromaticities on illuminant estimation. This bias is the motivation for the models presented in the next section.

Subscripts *c* and *a* will be used to denote the chromatic and achromatic sides, respectively, and caps will denote estimated quantities. It is apparent from these equations that if the mean background reflectance is a uniform spectrum, then the illuminant estimate equals the true illuminant. In other words, the estimate will be veridical when the background is balanced, but not when it is biased.

At *n* = 0, the weighted model is identical to the gray-world model. As *n* increases, the brighter materials are weighted more, and at *n* = ∞, only cone catches from the brightest material are effective in the model.
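Equations 9–11 are not reproduced here, but the family of estimators described can be sketched as a weighted mean of cone catches, with each material weighted by its luminance raised to the power *n* (an assumed form consistent with the text: *n* = 0 recovers the gray-world mean, and large *n* approaches winner-take-all on the brightest material):

```python
def weighted_illuminant_estimate(cone_catches, luminances, n):
    """Estimate illuminant cone coordinates as a luminance**n-weighted
    mean of per-material cone catches. n=0 -> gray-world mean; large n
    -> estimate dominated by the brightest material."""
    weights = [lum ** n for lum in luminances]
    total = sum(weights)
    n_channels = len(cone_catches[0])
    return [
        sum(w * c[k] for w, c in zip(weights, cone_catches)) / total
        for k in range(n_channels)
    ]

# Toy example: two materials, the second four times as luminous.
catches = [[0.2, 0.3, 0.1], [0.8, 0.6, 0.4]]   # per-material (L, M, S)
lums = [1.0, 4.0]
gray_world = weighted_illuminant_estimate(catches, lums, n=0)    # plain mean
bright_biased = weighted_illuminant_estimate(catches, lums, n=10)
```

With `n=10` the estimate sits almost exactly on the brighter material's cone catches, illustrating why predictions for *n* = 10 and *n* = 100 barely differ.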

Δ_{c} and Δ_{n} are adjusted to achieve a spectrum *P*_{m}(*λ*) on the achromatic side, so that:

The ±1 *SD* ellipses are replotted on the same axes as in Figure 5. The other symbols near the plusses show the model predictions for *n* = 0 (inverted triangles, representing the gray-world model) and *n* = 10 (upright triangles, representing the brightness-weighting model). Note that there are no free parameters in either model: the value of *n* that determines the selectivity of brightness weighting is fixed for each model. Two considerations apply in testing the models. First, any prediction that is more than 2 *SD* from the mean can be rejected as a good fit. By this criterion, hardly any of the predictions from either model are rejected. However, given the large sizes of the ellipses for this data set, this test is not very selective. The second consideration is that the pattern of predictions from a model should be close to the pattern of the empirical means. Both models do fairly well in this regard, and the brightness-weighted model (*n* = 10) does not provide a significantly better explanation for illuminant color estimation. The predictions for *n* = 1 were very similar to those for *n* = 0, and the predictions for *n* = 100 were very similar to those for *n* = 10. The predominant discrepancy is that the matched chromaticity is less saturated than the predicted chromaticity. This may be due to the desaturating effects of adaptation to chromatic variations, which, in this study, are present only on the side with the Standard filter (Krauskopf, Williams, & Heeley, 1982; Webster & Mollon, 1997; Zaidi, Spehar, & DeBonet, 1997, 1998). This possibility points out that a proper brightness-weighting model should incorporate better estimates of the brightness and color appearance of different surfaces, and both estimates are likely to be nonlinear functions of cone absorptions; Equations 9–11 are just an approximation to this class of models. Note that, for the biased backgrounds, the model predictions are not good estimates of the veridical matches shown in Figure 5. It is worth pointing out that for *n* = ∞, we are explicitly not claiming that the brightest surface appears as the illumination source. Identification of the illumination source depends on geometric factors like fuzzy borders (Zavagno, 1999), which are not present in our displays. In their gamut-matching simulations, Tominaga et al. (2001) found it useful to scale the intensity of all images to keep them within similar ranges; in the human visual system, retinal processes like photoreceptor adaptation and center-surround receptive fields provide automatic intensity scaling for later visual processing.

The observer can estimate the dim illuminant *E*_{c}(*λ*) on the exposed region of the chromatic background, and then assume that the spotlight is added on to the dim light, so that *P*_{j}(*λ*) is equal to *F*_{j}(*λ*) + *E*_{c}(*λ*), where *F*_{j}(*λ*) is the added spectrum. The observer can thus estimate the cone coordinates of the added light from the difference between the two illuminant estimates; these estimates are obtained from Equations 9–11, with a uniform spectrum for *E*.

Predictions are shown for *n* = 0 and 10 (downward- and upward-pointing triangles, respectively). Note that there are no free parameters in this model: the value of *n*, which determines the selectivity of brightness weighting, is fixed for each model. Many of the points from the gray-world (*n* = 0) hypothesis come close to the data points. The predictions for *n* = 1 were very similar to those for *n* = 0. The predictions of the brightness-weighted model (*n* = 10) do not differ greatly from those of the gray-world model, and the predictions for *n* = 100 were very similar to those for *n* = 10. The 1 *SD* ellipses are smaller for Experiment 2 than for Experiment 1 (possibly due to a larger number of repetitions per condition), and in almost all of the cases, neither of the two models can be rejected. It is worth pointing out that Model 2 reduces to Model 1 when the surround illuminant *E*(*λ*) is equal to zero.
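Under the additive assumption of Model 2 (*P* = *F* + *E*_{c}) and linearity of cone absorption, the added spotlight's cone coordinates follow by subtraction; a minimal sketch with hypothetical cone-coordinate estimates (not data from the paper):

```python
def additive_spotlight_estimate(spot_region, surround):
    """Model 2 sketch: if the spotlight adds to a dim surround illuminant
    (P = F + E_c), then by linearity of cone absorption the added light's
    cone coordinates are the difference of the two illuminant estimates."""
    return [p - e for p, e in zip(spot_region, surround)]

# Hypothetical (L, M, S) illuminant estimates.
P_hat = [0.9, 0.7, 0.5]   # estimate inside the spotlight region
E_hat = [0.2, 0.2, 0.2]   # estimate of the dim surround illuminant
added_hat = additive_spotlight_estimate(P_hat, E_hat)
```

Setting the surround estimate to zero returns the spotlight-region estimate unchanged, mirroring the reduction of Model 2 to Model 1 when the surround illuminant is zero.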

The observer can estimate the illuminant *E*_{c}(*λ*) on the exposed region of the chromatic background, and then assume that the spotlight *P*_{j}(*λ*) is equal to *F*_{j}(*λ*) * *E*_{c}(*λ*), where ‘*’ is wavelength-by-wavelength multiplication and *F*_{j}(*λ*) is the spectrum that filters the illuminant common to overlaid and exposed regions. If estimates of these two spectra were available, then observers could simply estimate the filter as their quotient, where ‘/’ is wavelength-by-wavelength division. It is unlikely that observers could estimate these complete spectra. However, these spectral estimates are not necessary, because the filter cone coordinates can be estimated in a simpler manner, based on the empirical observations that illuminants and filters overlaid on everyday materials do not alter the rank orders of L, M, and S cone absorptions (Dannemiller, 1993; Foster & Nascimento, 1994; Nascimento & Foster, 1997; Zaidi et al., 1997; Westland & Ripamonti, 2000; Zaidi, 2001; Khang & Zaidi, 2002). In other words, cone catches under equal-energy light and cone catches under another light or filter are related by the same multiplicative constant for all materials. The observer can thus estimate the filter cone coordinates from the ratios of cone catches. [*Note that without this assumption, L(F) will be equal to L(P/E), not L(P)/L(E)*.] The required cone estimates are obtained from Equations 9–11 and from Equations 16–18 (note that for Experiment 3, *E*(*λ*) is 5 times the value for Experiment 2).
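Under the cone-ratio invariance just described, the filter's cone coordinates can be sketched as the mean per-material ratio of cone catches in overlaid versus exposed regions, e.g. L(F) ≈ L(P)/L(E). This is a sketch with toy data; the exact estimator in Equations 16–18 is not reproduced here:

```python
def filter_cone_estimate(overlaid, exposed):
    """Model 3 sketch: estimate filter cone coordinates as the mean
    per-material ratio of cone catches inside vs. outside the filter,
    relying on the approximate invariance of cone-catch ratios."""
    n_materials = len(overlaid)
    n_channels = len(overlaid[0])
    return [
        sum(o[k] / e[k] for o, e in zip(overlaid, exposed)) / n_materials
        for k in range(n_channels)
    ]

# Toy data: a filter that halves L, preserves M, and boosts S by 1.2.
exposed = [[0.4, 0.5, 0.2], [0.8, 0.3, 0.5]]   # per-material (L, M, S)
overlaid = [[c * f for c, f in zip(m, [0.5, 1.0, 1.2])] for m in exposed]
F_hat = filter_cone_estimate(overlaid, exposed)
```

Because the toy filter scales every material's cone catches by the same constants, the per-material ratios agree exactly and the estimate recovers the filter's cone scaling.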

The Match spotlight *P*_{m} can then be obtained, where the cone estimates for *E*_{a}, the illuminant on the achromatic side, are given by Equations 29–31. (Note that for Experiment 3, *E*(*λ*) is 5 times the value for Experiment 2.)

Predictions are shown for *n* = 0 and 10 (downward- and upward-pointing triangles, respectively). Note that there are no free parameters in this model: the value of *n*, which determines the selectivity of brightness weighting, is fixed for each model. Many of the points from the gray-world (*n* = 0) hypothesis come close to the data points, particularly to the empirical matches that were close to veridical. The *n* = 0 model here is mathematically identical to the model for equating mean cone ratios that was presented in Khang and Zaidi (2002), so the models in this work provide a perceptual interpretation for the mechanistic models in Khang and Zaidi (2002). The predictions for *n* = 1 were very similar to those for *n* = 0. The predictions of the brightness-weighted model (*n* = 10) do not differ greatly from those of the gray-world model, but do provide a slightly better fit to some data points. The predictions for *n* = 100 were very similar to those for *n* = 10. The models' predictions are generally close to the veridical matches; therefore, the predominant discrepancies from the predictions occur for matches that were far from veridical, and in these cases the matched chromaticity is less saturated than the predicted chromaticity.

*Channels in the visual nervous system: Neurophysiology, psychophysics and models*. London: Freund Publishing House, Ltd.

*Journal of Experimental Psychology*, 58(4), 267–274.

*Journal of the Optical Society of America A*, 15(2), 307–325.

*Journal of the Optical Society of America A*, 14(7), 1393–1411.

*Computational models of visual processing*. Cambridge, MA: MIT Press.

*Journal of the Franklin Institute*, 310, 1–26.

*Vision Research*, 34, 1489–1508.

*Vision Research*, 33, 131–140.

*Journal of the Optical Society of America A*, 10(10), 2166–2180.

*Journal of the Optical Society of America A*, 11(9), 2398–2400.

*Journal of the Optical Society of America A*, 3, 1662–1672.

*Perception*, 29, 911–926.

*Proceedings of the Fifth Color Imaging Conference* (pp. 6–11). Springfield, VA: Society for Imaging Science and Technology.

*International Journal of Computer Vision*, 5, 5–36.

*Proceedings of the Royal Society of London B*, 257, 115–121.

*Perception*, 13(1), 5–19.

*Nature*, 415(6872), 637–640.

*Perception*, 29, 1169–1184.

*Helmholtz's treatise on physiological optics* (Southall, J. P., Ed.). New York: Dover. (Original work published 1866)

*American Society for Testing and Materials Special Technical Publication*, 297, 1–15.

*Beitr Problemgeschichte Ps (Bühler Festschr)* (pp. 1–77). Jena: Fischer.

*World of colour*. New York: Johnson Reprint Corp.

*Journal of Vision*, 2(6), 451–466, http://journalofvision.org/2/6/3, doi:10.1167/2.6.3.

*Visual Neuroscience*, 14(6), 1061–1072.

*Kodak Wratten filters: For scientific and technical use*. Rochester, NY: Eastman Kodak Co.

*Principles of Gestalt psychology*. New York: Harcourt & Brace.

*Vision Research*, 22(9), 1123–1131.

*Vision Research*, 26, 7–21.

*Journal of the Optical Society of America A*, 3, 1694–1699.

*Journal of the Optical Society of America A*, 18(11), 2679–2691.

*Perception*, 31, 151–159.

*Journal of the Optical Society of America A*, 13, 1315–1324.

*Journal of the Optical Society of America*, 69, 1183–1186.

*Colour vision: From light to object*. Oxford: Oxford University Press.

*Journal of the Optical Society of America A*, 3(1), 29–33.

*Philosophical Transactions of the Royal Society of London B*, 355, 1243–1248.

*Proceedings of the Royal Society of London B*, 264, 1395–1402.

*Journal of the Optical Society of America A*, 17(2), 225–231.

*Psychological Science*, 13(2), 142–149.

*Vision Research*, 15, 161–171.

*Journal of Vision*, 4(9), 693–710, http://journalofvision.org/4/9/3/, doi:10.1167/4.9.3.

Tominaga, S., Ebisui, S., & Wandell, B. A. (2001). Scene illuminant classification: Brighter is better. *Journal of the Optical Society of America A*, 18(1), 55–64.

*Journal of the Optical Society of America A*, 6, 576–584.

*Color Research and Application*, 19, 4–9.

*Vision Research*, 37, 3283–3298.

*Journal of the Optical Society of America A*, 17, 255–264.

*Experimental psychology*. London: Methuen.

*Vision Research*, 41, 2581–2600.

*Journal of the Optical Society of America A*, 15(7), 1767–1776.

*Color Research and Application*, 26, S192–S200.

*Journal of the Optical Society of America A*, 14, 2608–2621.

*Journal of the Optical Society of America A*, 15, 23–32.

*Perception*, 28(7), 835–838.