Vision is difficult because images are ambiguous about the structure of the world. For object color, the ambiguity arises because the same object reflects a different spectrum to the eye under different illuminations. Human vision typically does a good job of resolving this ambiguity—an ability known as color constancy. The past 20 years have seen an explosion of work on color constancy, with advances in both experimental methods and computational algorithms. Here, we connect these two lines of research by developing a quantitative model of human color constancy. The model includes an explicit link between psychophysical data and illuminant estimates obtained via a Bayesian algorithm. The model is fit to the data through a parameterization of the prior distribution of illuminant spectral properties. The fit to the data is good, and the derived prior provides a succinct description of human performance.

*contextual images* used to define these contexts. The first three images are graphics simulations of the same collection of objects rendered under three different illuminants. The left-hand image was rendered under a typical daylight (CIE D65, chromaticity shown as open black circle), whereas the second image was rendered under a more yellowish daylight (spectrum constructed from the CIE linear model for daylights, chromaticity shown as open blue circle). The third image was also a rendering of the same set of surfaces, but the illuminant had a chromaticity atypical of daylight (chromaticity shown as open red circle). The right-hand image was rendered under essentially the same illuminant as the second image (open green circle), but a different background surface was simulated. This background was chosen so that the light reflected from it matched that reflected from the background in the left-hand image.

*i*th experimental image with a two-dimensional vector **x**_{i}^{a} containing the *u*′ and *v*′ coordinates of the stimulus that appeared achromatic. This representation characterizes the spectrum reaching the observer from the test patch (i.e., the proximal stimulus), not the reflectance properties of the simulated surface. The solid black, blue, red, and green circles in the CIE *u*′*v*′ chromaticity diagram in Figure 2 plot the **x**_{i}^{a} for the four sample images. It is clear that the achromatic chromaticity varies with context. Our modeling goal is to predict each **x**_{i}^{a} from a description of the corresponding contextual image.

^{1}We can, however, use the achromatic and illuminant chromaticities to determine the spectral reflectance function of an *inferred achromatic surface*. Such a reflectance function is shown in Figure 2. This function was chosen so that when the simulated illuminant reflects from it, the chromaticity of the reflected light matches the measured achromatic chromaticity.^{2} This equality is indicated by the overlay of the solid black circle and the small open black circle.

*only* the illuminant is varied and is the basis for assertions in the literature that the human visual system is approximately color constant (Brainard, 2004).

*u*′*v*′ achromatic chromaticities obtained in the experiment and the constancy predictions obtained through this surface.^{3} The predictions for each scene are obtained by computing the chromaticity of the light that would be reflected from the inferred achromatic surface under the known illuminant spectrum for that scene. The best fit to the data is summarized in Figure 3. As one might expect from the sample data shown in Figure 2, the data set is poorly fit by a model that embodies the assumption that the visual system is perfectly color constant.
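This prediction step (compute the spectrum reflected from the surface, then take its chromaticity) can be sketched in a few lines. The spectra and color matching functions below are illustrative stand-ins, not CIE data; the paper's calculation uses the actual CIE functions:

```python
import numpy as np

def uv_prime(spectrum, cmfs):
    """CIE 1976 u'v' chromaticity of a spectrum, given XYZ color matching
    functions sampled at the same wavelengths."""
    X, Y, Z = cmfs.T @ spectrum              # tristimulus coordinates
    denom = X + 15.0 * Y + 3.0 * Z
    return np.array([4.0 * X / denom, 9.0 * Y / denom])

def constancy_prediction(illuminant, surface, cmfs):
    """Chromaticity of the light reflected from `surface` under `illuminant`."""
    reflected = illuminant * surface         # C(lambda) = E(lambda) S(lambda)
    return uv_prime(reflected, cmfs)

# Toy 5-band sketch; the numbers are illustrative, not CIE data.
cmfs = np.array([[0.3, 0.0, 0.9],
                 [0.6, 0.4, 0.3],
                 [1.0, 0.9, 0.05],
                 [0.5, 1.0, 0.0],
                 [0.1, 0.4, 0.0]])
E = np.array([0.9, 1.0, 1.1, 1.0, 0.8])      # known illuminant spectrum
S = np.array([0.4, 0.5, 0.5, 0.6, 0.6])      # inferred achromatic surface
print(constancy_prediction(E, S, cmfs))
```

Because chromaticity discards overall intensity, the prediction is unchanged if the illuminant is scaled; only the relative spectra matter.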

**x**_{i}^{a}, given the contextual images.

*mechanistic* and seeks to formulate models that incorporate known features of the neural processing chain. Examples of models in this category are those that invoke ideas such as gain control (e.g., Burnham, Evans, & Newhall, 1957; Stiles, 1967; von Kries, 1902/1970), contrast coding (e.g., Shevell, 1978; Walraven, 1976), and color opponency (e.g., Hurvich & Jameson, 1957; Webster & Mollon, 1995) to explain various color appearance phenomena. A second approach abstracts further and considers the problem as one of characterizing the properties of a black-box system. Here, the effort is on testing principles of system performance (such as linearity) that allow prediction of color appearance in many contexts based on measurements from just a few (Brainard & Wandell, 1992; Krantz, 1968). A third approach is *computational* in the sense defined by Marr (1982), with the idea being to develop models by thinking about what useful function color context effects serve and how a system designed to accomplish this function would perform. In the case of color, one asks how a visual system could process image data to recover descriptions of object surface reflectance that are stable across changes in the scene. We have taken this approach in much of our prior work on color and lightness constancy (Brainard, Brunt, & Speigle, 1997; Brainard, Kraft, & Longère, 2003; Brainard, Wandell, & Chichilnisky, 1993; Bloj et al., 2004; Speigle & Brainard, 1996; see also Adelson & Pentland, 1996; Bloj, Kersten, & Hurlbert, 1999; Boyaci, Doerschner, & Maloney, 2004; Boyaci, Maloney, & Hersh, 2003; Doerschner, Boyaci, & Maloney, 2004; Land, 1964; Maloney & Yang, 2001; McCann, McKee, & Taylor, 1976) and develop it further here. It is useful to keep in mind that the various approaches are not mutually exclusive and that the distinctions between them are not always sharp.

*E*(*λ*), and each surface is characterized by its reflectance function *S*(*λ*). The spectrum of the light reaching the eye is given by *C*(*λ*) = *E*(*λ*)*S*(*λ*).^{4} If a visual system has access to an estimate of the illuminant, *E*˜(*λ*), it is straightforward for it to produce a representation of object color based on an estimate of surface reflectance *S*˜(*λ*) = *C*(*λ*)/*E*˜(*λ*).^{5} As long as *E*˜(*λ*) ≈ *E*(*λ*), this representation will be stable across changes in context. To put it another way, a visual system with access to a good estimate of the actual illuminant *E*(*λ*) can achieve approximate color constancy by computing an estimate of surface reflectance *S*˜(*λ*) and using this estimate to determine color appearance.^{6} If the human visual system applies this strategy, but with illuminant estimates that sometimes deviate substantially from the actual illuminants, there will be commensurate deviations from constancy.
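A minimal sketch of this strategy, assuming made-up wavelength-sampled spectra:

```python
import numpy as np

def estimate_reflectance(C, E_est):
    """S~(lambda) = C(lambda) / E~(lambda): surface reflectance estimated from
    the color signal C and an illuminant estimate E~."""
    return C / E_est

# Illustrative spectra on a coarse wavelength sampling (made-up numbers).
S = np.array([0.2, 0.5, 0.7, 0.4])       # true surface reflectance
E1 = np.array([1.0, 1.2, 0.9, 0.8])      # illuminant in one context
E2 = np.array([0.7, 1.0, 1.3, 1.1])      # illuminant in another context

# With accurate illuminant estimates (E~ = E), the recovered reflectance is
# the same in both contexts, so a representation built on it is stable.
S1 = estimate_reflectance(E1 * S, E1)
S2 = estimate_reflectance(E2 * S, E2)
print(np.allclose(S1, S2))   # True
```

If the illuminant estimate is wrong, the recovered reflectance is off by the ratio *E*(*λ*)/*E*˜(*λ*), which is exactly the commensurate failure of constancy described above.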

*E*˜(*λ*). It should not be surprising that illuminant estimates are sometimes in error: The reason that constancy is intriguing rests on the fundamental difficulty of achieving it across all possible scenes. We refer to *E*˜(*λ*) as the *equivalent illuminant* because, in the model, it replaces the physical illuminant in the constancy calculations. More generally, we refer to the broad class of models pursued here as *equivalent illuminant models*.

*explicit* illuminant judgments tap an equivalent illuminant that may be used to predict surface lightness.

*implicit* equivalent illuminant for that context. The question is then whether the parametric variation of appearance within each context is well fit by a calculation that estimates surface reflectance using the equivalent illuminant for that context. This work provides support for the general equivalent illuminant approach but does not provide guidance about how the context determines the equivalent illuminant.

*u*′*v*′ chromaticity coordinates of the illuminant. We denoted these by the two-dimensional column vector **x**, which the algorithm estimated from the data available in the retinal image. We denoted these data by the vector **y**. The entries of **y** consisted of the L-, M-, and S-cone quantal absorption rates at a series of *N*_{y} image locations. To estimate **x** from **y**, we should calculate the *posterior probability*

*p*(**x** ∣ **y**) = *c* *p*(**y** ∣ **x**) *p*(**x**),  (1)

where *c* is a normalizing constant that depends on **y** but not on **x**, *p*(**y** ∣ **x**) is the *likelihood* that data **y** will be observed when the illuminant chromaticity is in fact **x**, and *p*(**x**) is the *prior probability* of an illuminant with chromaticity **x** occurring.
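The structure of Equation 1 can be sketched in code. The Gaussian densities and the `likelihood_mean_fn` hook below are toy stand-ins (the paper's likelihood comes from the imaging model described next), so the numbers carry no colorimetric meaning:

```python
import numpy as np

def log_gauss(z, mean, cov):
    """Log of a multivariate normal density, up to an additive constant."""
    d = z - mean
    return -0.5 * d @ np.linalg.solve(cov, d)

def log_posterior(x, y, likelihood_mean_fn, like_cov, prior_mean, prior_cov):
    """log p(x|y) = log p(y|x) + log p(x) + const, i.e., Equation 1 in log
    form. `likelihood_mean_fn` maps an illuminant chromaticity x to the data
    it predicts; the normalizing constant c is dropped, as it does not affect
    which x maximizes the posterior."""
    log_like = log_gauss(y, likelihood_mean_fn(x), like_cov)
    log_prior = log_gauss(x, prior_mean, prior_cov)
    return log_like + log_prior

# Toy usage: identity "imaging model", prior centered on a D65-like chromaticity.
prior_mean = np.array([0.1978, 0.4683])
y = np.array([0.21, 0.47])
lp = log_posterior(y, y, lambda x: x, 0.01 * np.eye(2), prior_mean, 0.01 * np.eye(2))
print(lp)
```

Working in logs turns the product of likelihood and prior into a sum, which is numerically safer when the probabilities are small.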

**x** of the illuminant chromaticity, we maximized the posterior probability. Because *c* is independent of **x**, it could be ignored in the maximization.

**y** given an illuminant **x** depends on the imaging model, the spectrum of the illuminant, and the surfaces that the illuminant reflects from. We assumed that a single diffuse illuminant reflects from *N*_{y} distinct matte surfaces. Although this model is simplified relative to the imaging model used to simulate the experimental images, it captures the broad dependence of the spectral properties of the image data on the spectrum of the illuminant and the spectral reflectance of the surfaces.

**e** whose entries denote the illuminant power for a set of wavelength bands that sampled the visible spectrum (Brainard, 1995). We represented each individual surface by a column vector **s**_{i} (1 ≤ *i* ≤ *N*_{y}) whose entries specified the reflectance of the surface for the same set of wavelength bands. The reflected light is absorbed by L-, M-, and S-cone photopigment to produce the mean isomerization rates corresponding to each surface. These mean rates may be computed from **e**, **s**_{i}, and estimates of the photopigment spectral absorption curves using standard methods (Brainard, 1995). We assumed that the observed data specified by **y** were the mean absorption rates corresponding to each surface, perturbed by independent zero-mean Gaussian noise. The standard deviation of this noise for each cone class was taken to be 1% of the mean value of **y** for that cone class. These assumptions allowed us to compute explicitly the likelihood of the data *p*(**y** ∣ **e**, **s**_{1}, …, **s**_{N_y}) given **e** and the vectors **s**_{i}, 1 ≤ *i* ≤ *N*_{y}. To find *p*(**y** ∣ **x**), we need to compute

*p*(**y** ∣ **x**) = ∫ [ ∏_{i=1}^{N_y} ∫ *p*(**y**_{i} ∣ **e**, **s**_{i}) *p*(**s**_{i}) d**s**_{i} ] *p*(**e** ∣ **x**) d**e**,  (2)

where **y**_{i} refers to the three entries of **y** that depended on the light reflected from the *i*th surface. In principle, the limits of integration in Equation 2 are [0, ∞] for each entry of **e** and [0, 1] for each entry of the **s**_{i}, as lights do not have negative power and we consider only nonfluorescent surfaces that do not reflect more light at any wavelength than is incident at that wavelength. In practice, as described below, the support of each integral is localized and we obtained an approximation with a combination of analytic and numerical methods.
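The likelihood inside Equation 2, *p*(**y**_{i} ∣ **e**, **s**_{i}) under the 1% Gaussian noise model, can be sketched directly; the pigment spectra and scene below are illustrative stand-ins, not measured absorption curves:

```python
import numpy as np

def mean_absorptions(e, surfaces, pigments):
    """Mean L, M, S isomerization rates for each surface under illuminant e.
    `pigments` is an (n_bands, 3) matrix of cone absorption spectra."""
    return np.array([pigments.T @ (e * s) for s in surfaces])   # (N_y, 3)

def log_likelihood(y, e, surfaces, pigments):
    """log p(y | e, s_1, ..., s_Ny) under the stated noise model: independent
    zero-mean Gaussian noise whose standard deviation for each cone class is
    1% of the mean of y for that class."""
    mu = mean_absorptions(e, surfaces, pigments)
    sigma = 0.01 * y.mean(axis=0)                 # per-cone-class noise sd
    resid = (y - mu) / sigma
    return (-0.5 * np.sum(resid ** 2)
            - y.shape[0] * np.sum(np.log(sigma))
            - 0.5 * y.size * np.log(2.0 * np.pi))

# Toy scene: 2 wavelength bands, 3 surfaces (illustrative numbers only).
pigments = np.array([[1.0, 0.5, 0.1],
                     [0.2, 0.8, 1.0]])
e = np.array([1.0, 0.9])
surfaces = [np.array([0.5, 0.4]), np.array([0.2, 0.7]), np.array([0.6, 0.6])]
y = mean_absorptions(e, surfaces, pigments)       # noiseless data
print(log_likelihood(y, e, surfaces, pigments))
```

As expected, the log likelihood is highest when the hypothesized illuminant and surfaces reproduce the observed absorption rates exactly.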

**s** occurring in a scene was described by a multivariate normal distribution over the weights of a linear model (Brainard & Freeman, 1997; Zhang & Brainard, 2004). That is,

**s** = **B**_{s} **w**_{s},  **w**_{s} ∼ N(**u**_{s}, **K**_{s}),  (3)

where **B**_{s} is a matrix with *N*_{s} columns, each of which describes a spectral basis function, **w**_{s} is an *N*_{s}-dimensional column vector, **u**_{s} is the mean of **w**_{s}, **K**_{s} is the covariance of **w**_{s}, and the notation ∼ indicates “distributed as.” Equation 3 allowed us to compute the probability *p*(**s**) for any **s** of the form **s** = **B**_{s}**w**_{s} explicitly, in two steps. First, we found the weights **w**_{s} corresponding to **s**. Then, we evaluated *p*(**w**_{s}) according to the normal density with mean **u**_{s} and covariance **K**_{s}. We took the probability for any **s** not of the form **s** = **B**_{s}**w**_{s} to be zero.
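The two-step evaluation of *p*(**s**) can be sketched as follows, with a hypothetical basis in place of the Brainard and Freeman (1997) one:

```python
import numpy as np

def surface_log_prior(s, B_s, u_s, K_s, tol=1e-8):
    """log p(s) under the linear-model prior of Equation 3: recover the
    weights w_s for s, then evaluate the normal density N(u_s, K_s). Any s
    outside the span of B_s has probability zero (log prior of -inf)."""
    w, *_ = np.linalg.lstsq(B_s, s, rcond=None)      # step 1: find w_s
    if not np.allclose(B_s @ w, s, atol=tol):        # s is not B_s w_s
        return -np.inf
    d = w - u_s                                      # step 2: normal density
    _, logdet = np.linalg.slogdet(K_s)
    quad = d @ np.linalg.solve(K_s, d)
    return -0.5 * (quad + logdet + len(w) * np.log(2.0 * np.pi))

# Toy basis: 5 wavelength bands, N_s = 3 (illustrative numbers only).
B_s = np.array([[1.0, 0.0, 0.0],
                [0.5, 1.0, 0.0],
                [0.2, 0.5, 1.0],
                [0.1, 0.2, 0.5],
                [0.0, 0.1, 0.2]])
u_s = np.array([0.4, 0.1, 0.05])
K_s = np.diag([0.04, 0.01, 0.01])
print(surface_log_prior(B_s @ u_s, B_s, u_s, K_s))   # most probable surface
```

The least-squares step both recovers the weights and, through the residual check, enforces the zero-probability rule for reflectances outside the linear model.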

*N*_{s} = 3, and for the main calculations we chose **B**_{s}, **u**_{s}, and **K**_{s} as described by Brainard and Freeman (1997). Given this surface prior, each of the integrals in brackets in Equation 2 could be approximated analytically using the methods described by Brainard and Freeman (1994) and Freeman (1993).

**B**_{e} to specify the CIE three-dimensional linear model for daylight. This linear model includes CIE D65 and other CIE standard daylights. Given the linear model constraint, the chromaticity of an illuminant determines the linear model weights up to a free multiplicative scale factor through a straightforward colorimetric calculation, which we denoted as **w**_{e} = *a* *f*(**x**). The calculation proceeds in two steps. First, we used the chromaticity **x** to find a set of *XYZ* tristimulus coordinates with the same chromaticity (e.g., Wyszecki & Stiles, 1982). Then, we used the known color matching functions and the linear model **B**_{e} to recover **w**_{e} from the tristimulus coordinates (e.g., Brainard, 1995). Because the recovery of tristimulus coordinates from chromaticity is unique only up to a multiplicative scale factor, the same is true for the recovery of **w**_{e} from chromaticity.

*a* was uniform, so that illuminants with the same chromaticity were equally likely; in addition, we took *p*(**e** ∣ **x**) = 0 for any illuminant that had negative power in any wavelength band. Our assumption about illuminant spectra allowed us to approximate the outer integral in Equation 2 numerically. Doing so required choosing limits on the range of the scale factor *a*. Given the linear model constraints on illuminant and surface spectra, choosing *a* and **x** allowed us to recover a unique set of surface reflectance spectra that were perfectly consistent with the observed data **y**. We found the minimum value of *a* such that none of these recovered surface reflectance functions exceeded 1 at any wavelength, and we used this as the lower limit of integration. We set the upper limit of integration at three times this value. This upper limit was chosen after examining the behavior of the inner integrals (those in brackets) of Equation 2: they fall off rapidly with increasing *a*, so that contributions to Equation 2 from values of *a* above the upper limit are negligible.

**u**_{x} at the chromaticity of CIE illuminant D65. Thus, the illuminant prior had three free parameters, the independent entries of the covariance matrix **K**_{x}. This parameterization of the illuminant prior is capable of capturing the broad features of a set of measurements of daylight spectra (Figure 5; DiCarlo & Wandell, 2000).

**x**˜, we used numerical search and found the value of **x** that maximized the posterior (Equation 1).
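A sketch of this estimation step, with toy quadratic terms standing in for the paper's imaging-model likelihood and daylight prior (only the D65 chromaticity is real):

```python
import numpy as np
from scipy.optimize import minimize

prior_mean = np.array([0.1978, 0.4683])          # u'v' chromaticity of D65

def neg_log_posterior(x, y_obs):
    """Negative log posterior for a toy problem: maximizing Equation 1 is the
    same as minimizing this quantity."""
    log_like = -0.5 * np.sum((y_obs - x) ** 2) / 0.01 ** 2
    log_prior = -0.5 * np.sum((x - prior_mean) ** 2) / 0.05 ** 2
    return -(log_like + log_prior)

y_obs = np.array([0.21, 0.47])
res = minimize(neg_log_posterior, x0=prior_mean, args=(y_obs,),
               method="Nelder-Mead")
print(res.x)   # the estimate x~, pulled from y_obs slightly toward the prior mean
```

Because both toy terms are Gaussian, the search result can be checked against the closed-form posterior mean, a precision-weighted average of the data and the prior mean.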

*achromatic chromaticities*) in the context of different images. Recall that **x**_{i}^{a} represents the psychophysically measured achromatic chromaticity for the *i*th scene, and let **x**˜_{i} represent the illuminant chromaticity estimated from the image of the *i*th scene. One way to predict the achromatic chromaticities would be to assume directly that stimuli with chromaticities matched to the observer's estimate of the illuminant appear achromatic. This assumption strikes us as theoretically unmotivated, as achromaticity is a property of surface appearance. Instead, to link **x**_{i}^{a} to **x**˜_{i}, we found the reflectance function of a single hypothetical surface (the *inferred achromatic surface*) such that the light that would have been reflected from it under each estimated illuminant **x**˜_{i} had a chromaticity as close as possible to the corresponding achromatic chromaticity **x**_{i}^{a}. Once we chose a single inferred achromatic surface (represented by its linear model weights), we mapped the illuminant estimates to predicted achromatic chromaticities for every scene. This is the same method that we used above (see the Relation between the data and color constancy section) to make constancy predictions, with the one change that the actual illuminant chromaticity was replaced by the equivalent illuminant chromaticity.

**e**˜_{i}) from its chromaticity **x**˜_{i}, up to an undetermined scale factor. Given **e**˜_{i} and the linear model weights for the achromatic surface (**w**_{s}^{a}), we computed the spectrum of the light that would be reflected from the inferred achromatic surface, again up to an undetermined factor. This then yielded the chromaticity **x**˜_{i}^{a} of the light that would be reflected from the inferred achromatic surface under the estimated illuminant. Note that the computed chromaticity **x**˜_{i}^{a} was independent of the undetermined scale factor.

**x**˜_{i}^{a} served as our predictions of the psychophysically measured achromatic chromaticities **x**_{i}^{a}. These predictions depend on the inferred achromatic surface weights **w**_{s}^{a}, and we used numerical search to find the single set of weights that minimized the fit error to the entire data set. Because the predicted **x**˜_{i}^{a} depend only on the relative inferred achromatic surface reflectance function, we fixed the first entry of **w**_{s}^{a} to be 1 and searched over the remaining two entries.
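The weight-fixing device can be illustrated with a hypothetical fit error that, like the real one, depends only on the relative weight vector (`fit_error` and its target are stand-ins, not the paper's error measure):

```python
import numpy as np
from scipy.optimize import minimize

target = np.array([1.0, 0.5, 0.25])          # hypothetical best-fitting weights

def fit_error(w):
    """Toy error that depends only on the direction of the weight vector,
    mirroring the scale invariance of the predicted chromaticities."""
    w_hat = w / np.linalg.norm(w)
    t_hat = target / np.linalg.norm(target)
    return np.sum((w_hat - t_hat) ** 2)

# Fix the first entry of w at 1 and search over the remaining two entries;
# the overall scale is unidentifiable, so pinning one entry removes it.
res = minimize(lambda tail: fit_error(np.concatenate(([1.0], tail))),
               x0=np.zeros(2), method="Nelder-Mead")
print(res.x)   # approximately [0.5, 0.25]: the weights relative to the first
```

Searching all three entries would leave a flat direction (any rescaling of a solution is also a solution), which the pinning removes.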

**w**_{s}^{a} were chosen so that the prediction error for the standard context was zero.

*u*′*v*′ chromaticity coordinates of 10,760 daylights measured by DiCarlo and Wandell (2000). Consistent with previous reports, these cluster along the CIE daylight locus (black line in both panels). To capture this regularity in broad terms, we modeled the prior probability of illuminant chromaticities as a bivariate normal distribution (see the Modeling methods section). The blue ellipse in the right panel shows an isoprobability contour of the normal distribution we chose to capture the broad statistics of the daylight chromaticity distribution. We chose the mean of this distribution as the chromaticity of CIE illuminant D65 and the covariance matrix as that of the DiCarlo and Wandell measurements. We refer to this illuminant prior as the *daylight prior*. The prior over scene surfaces and other details of the algorithm are provided in the Modeling methods section.
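A sketch of constructing such a prior; the chromaticity sample here is synthetic, standing in for the DiCarlo and Wandell (2000) measurements:

```python
import numpy as np

# Daylight prior sketch: a bivariate normal over u'v' chromaticity with mean
# fixed at D65 and covariance taken from measured daylights. The sample below
# is synthetic (assumed numbers), not the actual measurement set.
rng = np.random.default_rng(0)
u_x = np.array([0.1978, 0.4683])                      # u'v' of CIE D65
daylight_uv = rng.multivariate_normal(
    u_x, [[4e-4, 1e-4], [1e-4, 2e-4]], size=1000)

K_x = np.cov(daylight_uv, rowvar=False)               # empirical covariance

def mahalanobis_sq(x):
    """Squared Mahalanobis distance from the prior mean; an isoprobability
    contour of the prior (the ellipse in Figure 5) is a level set of this."""
    d = x - u_x
    return d @ np.linalg.solve(K_x, d)

print(mahalanobis_sq(u_x))   # 0.0 at the mean
```

Broadening the prior, as in the fitting described below, amounts to scaling up `K_x`, which flattens the density and weakens the pull toward the daylight locus.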

^{7}In brief, we used numerical search to infer an achromatic surface that minimized the error between the chromaticity of the light that would be reflected from it under each estimated illuminant **x**˜_{i} and the corresponding psychophysically measured achromatic chromaticity **x**_{i}^{a}. The resulting predictions are shown in Figure 6. The top panel shows predictions for the same sample contexts as in Figure 2. The middle and bottom panels summarize the prediction quality in the same format as Figure 3. The predictions are, on aggregate, improved slightly from those obtained using the actual chromaticities of the scene illuminants. The value of the error measure *ɛ*_{a} is 0.0157 for the predictions based on the actual illuminant chromaticities and 0.0114 for the predictions based on the Bayesian estimates obtained with the daylight prior.

**K**_{x} and repeated the calculations for many choices of **K**_{x}. We identified the choice of **K**_{x} that led to the best prediction results (minimum *ɛ*_{a}). We then varied the spatial sampling, choosing the standard deviation of the Gaussian weighting function that minimized *ɛ*_{a}. The isoprobability contour corresponding to the best **K**_{x} is shown as the red ellipse in Figure 5. This derived prior distribution is considerably broader than the daylight prior. The best standard deviation for the Gaussian weighting function was 4° of visual angle. Figure 7 shows that the quality of the resulting fit is very good. The predictions for the sample conditions (top panel) all lie close to the measurements, while the points in the summary graphs (middle and bottom panels) are generally near the diagonal. Most of the improvement in *ɛ*_{a} was due to the change in prior, not in spatial sampling.^{8}

*u*′*v*′ representation. The value of our work for constraining models of neural processing lies in its ability to capture the relation between stimulus and percept that a model of the neural implementation must explain.

^{1}Some readers may have the intuition that color constancy implies that the measured achromatic chromaticity should be the same as that of the illuminant. Although it is true that data which conform to this intuition are consistent with perfect constancy, the converse does not hold. The concept of constancy is that surface color appearance is stable across changes in viewing context. It therefore does not carry any necessary implications about the data from a single context considered in isolation.

^{2}Because of metamerism (Brainard, 1995; Wyszecki & Stiles, 1982), there are multiple surface reflectance functions that have the property that the chromaticity of the light reflected from them under the simulated illuminant matches the measured achromatic chromaticity. The surface shown was obtained by imposing a linear model constraint, so that its reflectance function is typical of naturally occurring surface reflectances (see Modeling methods, Jaaskelainen, Parkkinen, & Toyooka, 1990; Maloney, 1986).

^{3}Although determining the inferred achromatic surface from the data of a single context is conceptually simpler than fitting it to all of the data, the more general fitting procedure ensures that the quality of the constancy model is not underestimated simply because the predictions were extrapolated from parameters set by a single datum. In the experiment, none of the seventeen contexts had a privileged status. Comparison of the spectral reflectance function of the inferred achromatic surface shown in Figure 2 with that of Figure 3 reveals that, in the event, the difference between the two methods is very small.

^{4}This image model is correct for matte surfaces viewed under spatially diffuse illumination. The reflectance of most real materials has a geometric dependence (Fleming, Dror, & Adelson, 2003; Foley, van Dam, Feiner, & Hughes, 1990), and real illuminations are not diffuse (Debevec, 1998; Dror, Willsky, & Adelson, 2004). The approximation used here allows us to make progress; geometrical effects will need to be taken into account as we extend our thinking about lightness and color towards richer scene geometries (see Bloj et al., 2004; Boyaci et al., 2003, 2004; Doerschner et al., 2004; Ripamonti et al., 2004; Xiao & Brainard, 2006).

^{5}The actual estimation is more complicated, as the visual system does not have direct access to *C*(*λ*) but instead must use the responses of the L-, M-, and S-cones to make the estimate. There are well-established methods for doing so (Brainard, 1995; Wandell, 1987). The tilde in the notation distinguishes estimated quantities from their physical counterparts.

^{6}Even for a visual system with perfect access to the actual illuminant, errors in constancy may still occur because of metamerism: two surfaces that produce the same cone responses under one illuminant may produce different cone responses under a changed illuminant (Brainard, 1995; Wyszecki & Stiles, 1982).

^{7}As described in Modeling methods, the calculation of inferred achromatic surface and achromatic predictions depends on the illuminant spectrum only through its chromaticity.

^{8}The value of *ɛ*_{a} with the best illuminant prior and spatial weighting was 0.0044. The value of *ɛ*_{a} obtained with the best illuminant prior when points were drawn from the entire image was 0.0057. We also explored whether variation in the standard deviation of the Gaussian spatial weighting function could produce a similar fit with the daylight prior, and whether varying the surface prior parameters allowed a good fit with the daylight prior. Across all variations we explored, the minimum value of *ɛ*_{a} obtained with the daylight prior was 0.0110, over twice that obtained with the broadened prior.

*The new cognitive neurosciences*. –351). Cambridge, MA: MIT Press.

*Visual perception: Computation and psychophysics*. (pp. 409–423). New York: Cambridge University Press.

*Computer vision systems*. (pp. 3–26). New York: Academic Press.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 18, 2419–2429.

*Journal of Experimental Psychology*, 58, 267–274.

*Statistical decision theory and Bayesian analysis*. New York: Springer-Verlag.

*Journal of Vision*, 4(9), 735–746, http://journalofvision.org/4/9/6/, doi:10.1167/4.9.6.

*Journal of Vision*, 4(9), 664–679, http://journalofvision.org/4/9/1/, doi:10.1167/4.9.1.

*Journal of Vision*, 3(8), 541–553, http://journalofvision.org/3/8/2/, doi:10.1167/3.8.2.

*Handbook of optics: Volume 1. Fundamentals, techniques, and design*. –54). New York: McGraw-Hill.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 15, 307–325.

*International encyclopedia of the social and behavioral sciences*. (Vol. 4, pp. 2256–2263). Oxford, UK: Elsevier.

*The visual neurosciences*. (Vol. 1, pp. 948–961). Cambridge, MA: MIT Press.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 14, 2091–2110.

*Paper presented at SPIE Conference on Human Vision, Visual Processing, and Display*, 364–376, San Jose, CA.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 14, 1393–1411.

*Colour perception: Mind and the physical world*. (pp. 307–334). Oxford: Oxford University Press.

*Journal of the Optical Society of America A, Optics and Image Science*, 3, 1651–1661.

*Journal of the Optical Society of America A, Optics and Image Science*, 9, 1433–1448.

*Current Directions in Psychological Science*, 2, 165–170.

*Journal of the Franklin Institute*, 310, 1–26.

*Journal of the Optical Society of America*, 47, 35–42.

*Visual Neuroscience*, 13, 591–596.

*Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography*.

*Vision Research*, 40, 2885–2903.

*Journal of Vision*, 4(2), 57–81, http://journalofvision.org/4/2/1/, doi:10.1167/4.2.1.

*Paper presented at IS&T/SID Eighth Color Imaging Conference*, 91–96, Scottsdale, AZ.

*Journal of Vision*, 4(2), 92–105, http://journalofvision.org/4/2/3/, doi:10.1167/4.2.3.

*Journal of Vision*, 4(9), 821–837, http://journalofvision.org/4/9/11/, doi:10.1167/4.9.11.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 10, 2148–2165.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 10, 2166–2180.

*Paper presented at IS&T/SID Fifth Color Imaging Conference*, 6–11, Scottsdale, AZ.

*Journal of Vision*, 3(5), 347–368, http://journalofvision.org/3/5/3/, doi:10.1167/3.5.3.

*Computer graphics: Principles and practice*. Reading, MA: Addison-Wesley.

*Exploiting the generic view assumption to estimate scene parameters*.

*Color constancy computation in near-Mondrian scenes using a finite dimensional linear model*.

*Psychological Review*, 96, 267–314.

*Nature Neuroscience*, 5, 508–510.

*Seeing black and white*. Oxford: Oxford University Press.

*Psychological Review*, 106, 795–834.

*Signal detection theory and psychophysics*. New York: John Wiley and Sons.

*Journal of the Optical Society of America*, 38, 1025–1032.

*Psychological Review*, 64, 384–404.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 7, 725–730.

*Handbook of sensory physiology*. (Vol. VII, pp. 568–581). Berlin: Springer.

*Current Opinion in Neurobiology*, 13, 150–158.

*Perception as Bayesian inference*. Cambridge: Cambridge University Press.

*Proceedings of the National Academy of Sciences of the United States of America*, 96, 307–312.

*Journal of Mathematical Psychology*, 5, 1–48.

*CIBA foundation symposium on color vision*. (pp. 217–227). Boston: Little, Brown and Company.

*Vision Research*, 26, 7–21.

*Bayesian statistics*. London: Oxford University Press.

*Perception*, 23, 1007–1023.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 3, 1673–1683.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 3, 29–33.

*Colour perception: From light to object*. (pp. 335–358). Oxford: Oxford University Press.

*Probabilistic models of the brain: Perception and neural function*. (pp. 13–36). Cambridge, MA: MIT Press.

*Vision*. San Francisco: W. H. Freeman.

*Vision Research*, 16, 445–458.

*Why we see what we do: An empirical theory of vision*. Sunderland, MA: Sinauer.

*Probabilistic models of the brain: Perception and neural function*. Cambridge, MA: MIT Press.

*Journal of Vision*, 4(9), 747–763, http://journalofvision.org/4/9/7/, doi:10.1167/4.9.7.

*Psychological Science*, 13, 142–149.

*Vision Research*, 18, 1649–1661.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 13, 436–451.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 16, 2370–2376.

*Journal of the Colour Group*, 11, 106–123.

*Nature Neuroscience*, 9, 578–585.

*Treatise on physiological optics*. New York: Dover.

*Sources of color vision*. (pp. 109–119). Cambridge, MA: MIT Press. (Original work published 1902)

*Vision Research*, 16, 289–295.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 9, 2–13.

*Nature*, 373, 694–698.

*Nature Neuroscience*, 5, 598–604.

*Proceedings of the National Academy of Sciences of the United States of America*, 95, 13301–13306.

*Handbook of perception and human performance: Sensory processes and perception*. (Vol. 1, pp. 9.1–9.56). New York: John Wiley & Sons.

*Concepts and methods, quantitative data and formulae*. New York: John Wiley & Sons.

*Color perception of 3D objects: Constancy with respect to variation in surface gloss*.

*Vision Research*, 41, 2581–2600.

*Paper presented at 12th IS&T/SID Color Imaging Conference*, 308–314, Scottsdale, AZ.