In the companion study (C. Ripamonti et al., 2004), we present data that measure the effect of surface slant on perceived lightness. Observers are neither perfectly lightness constant nor luminance matchers, and there is considerable individual variation in performance. This work develops a parametric model that accounts for how each observer’s lightness matches vary as a function of surface slant. The model is derived from consideration of an inverse optics calculation that could achieve constancy. The inverse optics calculation begins with parameters that describe the illumination geometry. If these parameters match those of the physical scene, the calculation achieves constancy. Deviations in the model’s parameters from those of the scene predict deviations from constancy. We used numerical search to fit the model to each observer’s data. The model accounts for the diverse range of results seen in the experimental data in a unified manner, and examination of its parameters allows interpretation of the data that goes beyond what is possible with the raw data alone.

*equivalent illuminant* models of observer performance for tasks where surface mode or surface color was judged (Speigle & Brainard, 1996; Brainard, Brunt, & Speigle, 1997; see also Brainard, Wandell, & Chichilnisky, 1993; Maloney & Yang, 2001; Boyaci, Maloney, & Hersh, 2003). In such models, the observer is assumed to be correctly performing a constancy computation, with the one exception that their estimate of the illuminant deviates from the actual illuminant. The parameterization of the observer's illuminant estimate determines the range of performance that may be explained, with the detailed calculation then following from an analysis of the physics of image formation. Here we present an equivalent illuminant model for how perceived lightness varies with surface slant. Our model is essentially identical to that formulated recently by Boyaci et al. (2003).

*θ*_{N} with respect to a reference axis (*x*-axis in Figure 2). The light source is located at a distance *d* from the standard surface. The light source azimuth is indicated by *θ*_{D} and the light source declination (with respect to the *z*-axis) by *ϕ*_{D}.

*i* depends on its surface reflectance *r*_{i}, its slant *θ*_{N}, and the intensity of the incident light *E*:

*l*_{i} = (1/π) *r*_{i} *E*.  (1)

When the light arrives only directly from the source, we can write

*E* = *E*_{D},  (2)

where

*E*_{D} = (*I*_{D}/*d*^{2}) sin(*ϕ*_{D}) cos(*θ*_{N} − *θ*_{D}).  (3)

Here *I*_{D} represents the luminous intensity of the light source. Equation 3 applies when *θ*_{D} − 90° ≤ *θ*_{N} ≤ *θ*_{D} + 90°. For a purely directional source and outside of this range, *E*_{D} = 0.

*E* can be described more accurately as a compound quantity made of the contribution of directional light *E*_{D} and some diffuse light *E*_{A}. The term *E*_{A} provides an approximate description of the light reflected off other objects in the scene. We rewrite Equation 2 as

*E* = *E*_{D} + *E*_{A},

and Equation 1 becomes

*l*_{i} = (1/π) *r*_{i} (*E*_{D} + *E*_{A}).

The luminance of the standard surface reaches its maximum value when *θ*_{N} = *θ*_{D} and its minimum when *θ*_{N} falls outside the range *θ*_{D} ± 90°. In the latter case only the ambient light *E*_{A} illuminates the standard surface.

*α* that is independent of *θ*_{N}:

*l̃*_{i} = *α* [cos(*θ*_{N} − *θ*_{D}) + *F*_{A}].

In this expression, *F*_{A} expresses the ambient illumination as a fraction of the maximum directional illumination, *F*_{A} = *E*_{A} *d*^{2}/(*I*_{D} sin *ϕ*_{D}). *θ*_{D}, *F*_{A}, and *α* were treated as free parameters and chosen to minimize the mean squared error between model predictions and measured normalized luminances.

*θ*_{D} of the light source and the amount *F*_{A} of ambient illumination. (The scalar *α* simply normalizes the predictions in accordance with the normalization of the measurements.) We can represent these parameters in a polar plot, as shown in Figure 4. The azimuthal position of the plotted points represents *θ*_{D}, while the radius *v* at which the points are plotted is a decreasing function of *F*_{A} (Equation 7): if the light incident on the standard is entirely directional, the radius of the plotted point will be 1; if the incident light is entirely ambient, the radius will be 0.

*θ*_{D}.

*θ*_{D} and *F*_{A}. Note that the dependence on slant in Equation 9 is independent of *r*_{i}.

*equivalent illuminant*. These parameters describe the illuminant configuration that the observer uses in his or her inverse optics computation.

*β* is a constant of proportionality that is determined as part of the model fitting procedure. In Equation 10 the contribution of surface reflectance *r*_{i} has been absorbed into *β*.

*β* simply accounts for the normalization of the data.

*θ*_{N} and the luminance are taken as veridical physical values. It would be possible to develop a model where these were also treated as perceptual quantities and thus fit to the data. Without constraints on how perceived slant and perceived luminance are related to their physical counterparts, however, allowing these as parameters would lead to excessive degrees of freedom in the model. In our slant matching experiment, observers' perception of slant was close to veridical, and thus using the physical values of *θ*_{N} seems justified. We do not have independent measurements of how the visual system registers luminance.

*β* that provided the best fit to the data. The best fit was determined as follows. For each of the three sessions *k* = 1, 2, 3 we found the normalized relative matches for that session. We then found the parameters that minimized the mean squared error between the model's predictions and these per-session matches. The reason for computing the individual session matches and fitting to these, rather than fitting directly to the aggregate matches, is that the former procedure allows us to compare the model's fit to that obtained by fitting the session data at each slant to its own mean.
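To make the fitting procedure concrete, the search can be sketched in code. This is our own minimal illustration, not the authors' implementation: it assumes the model prediction takes the form α[cos(θ_N − θ_D) + F_A] with the directional term set to zero beyond ±90° of the light azimuth, and it substitutes a coarse grid search for the paper's numerical search; all function names are ours.

```python
import math

def predict_match(slant_deg, theta_d_deg, f_a, alpha):
    """Predicted normalized match at a given slant: directional term
    cos(theta_N - theta_D), clipped to 0 beyond +/-90 deg, plus the
    ambient fraction F_A, scaled by the normalization factor alpha."""
    delta = math.radians(slant_deg - theta_d_deg)
    directional = max(math.cos(delta), 0.0)  # E_D = 0 outside the range
    return alpha * (directional + f_a)

def fit_equivalent_illuminant(slants, matches):
    """Grid search over (theta_D, F_A), with a closed-form least-squares
    alpha at each grid point; returns the parameters minimizing MSE."""
    best = None
    for theta_d in range(-90, 91, 2):            # azimuth, degrees
        for tenths in range(0, 41):              # F_A from 0.0 to 4.0
            f_a = tenths / 10.0
            basis = [max(math.cos(math.radians(s - theta_d)), 0.0) + f_a
                     for s in slants]
            denom = sum(b * b for b in basis)
            if denom == 0.0:
                continue
            alpha = sum(b * m for b, m in zip(basis, matches)) / denom
            mse = sum((alpha * b - m) ** 2
                      for b, m in zip(basis, matches)) / len(slants)
            if best is None or mse < best[0]:
                best = (mse, theta_d, f_a, alpha)
    return best[1], best[2], best[3]
```

For data generated from the model itself the search recovers the generating parameters exactly; for real session data, the fitted azimuth and ambient fraction constitute the observer's equivalent illuminant.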

*η*_{equiv} was 1.23, indicating a good but not perfect fit.

*η* values associated with four other models. These are

- luminance matching
- lightness constancy
- mixture
- quadratic

The mixture model describes observers whose responses are an additive mixture of luminance-matching and lightness-constancy matches. If this model fit well, the mixing parameter *λ* could be interpreted as describing the matching strategy adopted by different observers. The quadratic model has no particular theoretical significance, but it has the same number of parameters as our equivalent illuminant model and predicts smoothly varying functions of *θ*_{N}. The dark bars in Figure 11 show the mean *η* values for all five models. We see that the error for the equivalent illuminant model is lower than that for the four comparison models. This difference is statistically significant at the *p* < .0001 level for all models, as determined by a sign test on the *η* values obtained for each observer/light source position combination.
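The mixture model's prediction is simple enough to state directly in code. This one-liner is our own illustration of the additive weighting described above; the function name and argument names are ours.

```python
def mixture_match(luminance_match, constancy_match, lam):
    """Additive mixture of a luminance match and a lightness-constancy
    match; lam = 1 gives pure luminance matching, lam = 0 pure constancy."""
    return lam * luminance_match + (1.0 - lam) * constancy_match
```

Fitting lam per observer would then locate that observer on a continuum between the two limiting strategies.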

*Journal of Mathematical Psychology*, 2000, *44*) and the literature does not yet provide a recipe. Here we adopt a cross-validation approach.

*η* metric described above. The intuition is that a model that overfits the data should generalize poorly and have high cross-validation *η* values, while a model that captures structure in the data should generalize well and have low cross-validation *η* values.
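The leave-one-session-out scheme can be sketched as follows. This is an illustrative sketch, not the authors' code: the *η* metric is assumed here to be a root mean squared error between predictions and observed matches (its exact definition appears earlier in the paper, outside this excerpt), and the function names are ours.

```python
import math

def eta(predicted, observed):
    """Stand-in for the eta error metric: root mean squared error
    (an assumption; the paper defines eta precisely elsewhere)."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

def cross_validate(sessions, fit, predict):
    """Leave-one-session-out cross-validation. `sessions` is a list of
    per-session match lists (one value per slant); `fit` maps the mean of
    the training sessions to model parameters, and `predict` maps those
    parameters back to a list of predicted matches."""
    errors = []
    for k in range(len(sessions)):
        train = [s for j, s in enumerate(sessions) if j != k]
        mean_train = [sum(vals) / len(vals) for vals in zip(*train)]
        errors.append(eta(predict(fit(mean_train)), sessions[k]))
    return sum(errors) / len(errors)

# The "Precision" baseline described in the text: predict each session
# directly from the mean of the other two sessions (fit and predict are
# both the identity).
sessions = [[0.9, 1.0, 1.1], [1.0, 1.1, 1.2], [0.8, 0.9, 1.0]]  # toy data
precision = cross_validate(sessions, fit=lambda m: m, predict=lambda p: p)
```

Plugging a model's own fit and prediction functions into `cross_validate` in place of the identity functions yields that model's cross-validation error.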

*η* values we obtained. The equivalent illuminant model continues to perform best. Note that the cross-validation *η* value obtained when the data for each session are predicted from the mean of the other two sessions (labeled “Precision”) is higher than that obtained for the equivalent illuminant model. This difference is statistically significant (sign test, *p* < .005).

*p* = .14; Experiment 2, *p* = .14; Experiment 3 Left Neutral, *p* < .005; Experiment 3 Right Neutral, *p* < .005; Experiment 3 Left Paint, *p* < .1; Experiment 3 Right Paint, *p* < .005). The systematic nature of the residuals was more salient for all four of the comparison models (*p* < .001 for all models/conditions) than for the equivalent illuminant model.

*θ*_{N} = 60° when the light is on the left, and the non-monotonic nature of the matches with increasing slant, require no special explanation in the context of the equivalent illuminant model. Both of these patterns are predicted by the model for reasonable values of the parameters. Indeed, we were struck by the richness of the model's predictions for relatively small changes in parameter values.

*θ*_{D} and *F*_{A}, with the scalar *v* computed from *F*_{A} using Equation 7 above. Let the analogous vector be computed from the observer's equivalent illuminant model parameters. Then we define the model-based constancy index in terms of the distance between these two vectors. This index takes on a value of 1 when the equivalent illuminant model parameters match the physical model parameters and a value near 0 when the equivalent ambient term *F*_{A} is very large. This latter case corresponds to the situation where the model predicts luminance matching.

*CI*_{m} for each observer/condition, and the resulting values are indicated on the top left of each polar plot in Figures 5–10. The model-based constancy index ranges from 0.23 to 0.91, with a mean of 0.57 and a median of 0.57. These values are larger than those obtained with the error-based index (mean/median 0.40). Figure 12 shows a scatter plot of the two indices, which are correlated at *r* = 0.73. The discrepancy between the two indices provides a sense of the precision with which they should be interpreted. Given the computational difficulty of recovering lighting geometry from images, we regard the average degree of constancy shown by the observers (∼0.40 – ∼0.57) as a fairly impressive achievement. The large individual variability in performance remains clear in Figure 12.
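The index computation can be sketched in code. Because the exact formula is elided in the text above, the normalization here is our assumption: one minus the Euclidean distance between the equivalent and physical illuminant vectors, divided by the length of the physical vector, with the polar radius taken as v = 1/(1 + F_A) (also an assumption, chosen to match the stated limits of 1 for purely directional and 0 for purely ambient light).

```python
import math

def illuminant_vector(theta_d_deg, f_a):
    """Vector with azimuth theta_D and radius v = 1 / (1 + F_A):
    radius 1 for purely directional light, 0 for purely ambient
    (this radius formula is an assumption consistent with the text)."""
    v = 1.0 / (1.0 + f_a)
    t = math.radians(theta_d_deg)
    return (v * math.cos(t), v * math.sin(t))

def constancy_index(theta_d_phys, f_a_phys, theta_d_equiv, f_a_equiv):
    """Sketch of the model-based constancy index CI_m: 1 when the
    equivalent illuminant matches the physical one, near 0 when the
    equivalent F_A is very large (the equivalent vector shrinks to the
    origin, i.e. the model predicts luminance matching)."""
    px, py = illuminant_vector(theta_d_phys, f_a_phys)
    ex, ey = illuminant_vector(theta_d_equiv, f_a_equiv)
    dist = math.hypot(ex - px, ey - py)
    return 1.0 - dist / math.hypot(px, py)
```

An observer whose fitted equivalent illuminant coincides with the physical one scores 1; an observer whose fitted ambient fraction is enormous scores near 0.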

^{2}A light source whose distance from the illuminated object is at least 5 times its main dimension is considered to be a good approximation of a point light source (Kaufman & Christensen, 1972).

*Journal of Vision*, 3(8), 541–553, http://journalofvision.org/3/8/2/, doi:10.1167/3.8.2.

*Journal of the Optical Society of America A*, 14, 2091–2110.

*Colour perception: Mind and the physical world* (pp. 307–334). Oxford: Oxford University Press.

*Proceedings of the Meeting on Computational & Systems Neuroscience, Cold Spring Harbor Laboratories, New York*.

*Journal of the Optical Society of America A*, 9(9), 1433–1448.

*Current Directions in Psychological Science*, 2, 165–170.

*Visual Perception*. New York: Academic Press.

*Nature Neuroscience*, 5, 508–510.

*Proceedings of the Royal Society of London B*, 171, 179–196.

*Physiological optics*. New York: Dover Publications, Inc.

*IES lighting handbook; The standard lighting guide* (5th ed.). New York: Illuminating Engineering Society.

*Journal of Mathematical Psychology*, 5, 1–48.

*Computational models of visual processing*. Cambridge, MA: MIT Press.

*Colour perception: From light to object*. Oxford: Oxford University Press.

*Vision*. San Francisco: W. H. Freeman.

*Why we see what we do: An empirical theory of vision*. Sunderland, MA: Sinauer.

*Journal of Vision*, 4(9), 747–763, http://journalofvision.org/4/9/7/, doi:10.1167/4.9.7.

*Psychological Science*, 13, 142–149.

*Journal of the Optical Society of America A*, 13(3), 436–451.

*Journal of the Colour Group*, 11, 106–123.

*Foundations of vision*. Sunderland, MA: Sinauer.