Research Article  |   September 2004
An equivalent illuminant model for the effect of surface slant on perceived lightness
Journal of Vision September 2004, Vol.4, 6. doi:https://doi.org/10.1167/4.9.6
      Marina Bloj, Caterina Ripamonti, Kiran Mitha, Robin Hauck, Scott Greenwald, David H. Brainard; An equivalent illuminant model for the effect of surface slant on perceived lightness. Journal of Vision 2004;4(9):6. https://doi.org/10.1167/4.9.6.

Abstract

In the companion study (C. Ripamonti et al., 2004), we present data that measure the effect of surface slant on perceived lightness. Observers are neither perfectly lightness constant nor luminance matchers, and there is considerable individual variation in performance. This work develops a parametric model that accounts for how each observer’s lightness matches vary as a function of surface slant. The model is derived from consideration of an inverse optics calculation that could achieve constancy. The inverse optics calculation begins with parameters that describe the illumination geometry. If these parameters match those of the physical scene, the calculation achieves constancy. Deviations in the model’s parameters from those of the scene predict deviations from constancy. We used numerical search to fit the model to each observer’s data. The model accounts for the diverse range of results seen in the experimental data in a unified manner, and examination of its parameters allows interpretation of the data that goes beyond what is possible with the raw data alone.

Introduction
In the companion study (Ripamonti et al., 2004), we report measurements of how perceived surface lightness varies with surface slant. The data indicate that observers take geometry into account when they judge surface lightness, but that there are large individual differences. This work develops a quantitative model of our data. The model is derived from an analysis of the physics of image formation and of the computations that the visual system would have to perform to achieve lightness constancy. The model allows for failures of lightness constancy by supposing that observers do not perfectly estimate the lighting geometry. Individual variation is accounted for within the model by parameters that describe each observer’s representation of that geometry. 
Figure 1 replots experimental data for three observers (HWK, EEP, and FGS) from Ripamonti et al. (2004). Observers matched the lightness of a standard object to a palette of lightness samples, as a function of the slant of the standard object. The data consist of the normalized relative match reflectance at each slant. If the observer had been perfectly lightness constant, the data would fall along a horizontal line, indicated in the plot by the red dashed line. If the observer were making matches by equating the reflected luminance from the standard and palette sample, the data would fall along the blue dashed curves shown in the figure. The complete data set demonstrates reliable individual differences ranging from luminance matches (e.g., HWK) toward approximations of constancy (e.g., FGS). Most of the observers, though, showed intermediate performance (e.g., EEP). 
Figure 1
Normalized relative matches, replotted from Ripamonti et al. (2004). Data are for observer HWK (Paint Instructions), observer EEP (Neutral Instructions), and observer FGS (Neutral Instructions). See companion study for experimental details. Blue dashed lines show luminance matching predictions; red dashed lines show lightness constancy predictions.
Given that observers are neither perfectly lightness constant nor luminance matchers, our goal is to develop a parametric model that can account for how each observer’s matches vary as a function of slant. Establishing such a model offers several advantages. First, individual variability may be interpreted in terms of variation in model parameters, rather than in terms of the raw data. Second, once a parametric model is established, one can study how variations in the scene affect the model parameters (cf., Krantz, 1968; Brainard & Wandell, 1992). Ultimately, the goal is to develop a theory that allows prediction of lightness matches across a wide range of scene geometries. 
A number of broad approaches have been used to guide the formulation of quantitative models of context effects. Helmholtz (1896) suggested that perception should be conceived of as a constructed representation of physical reality, with the goal of the construction being to produce stable representations of object properties. The modern instantiation of this idea is often referred to as the computational approach to understanding vision (Marr, 1982; Landy & Movshon, 1991). Under this view, perception is difficult because multiple scene configurations can lead to the same retinal image. In the case of lightness constancy, the ambiguity arises because illuminant intensity and surface reflectance can trade off to leave the intensity of reflected light unchanged. 
Because the retinal image is ambiguous, what we see depends not only on the scene but also on the rules the visual system employs to interpret the image. Various authors choose to formulate these rules in different ways, with some focusing on constraints imposed by known mechanisms (e.g., Stiles, 1967; Cornsweet, 1970) and others on constraints imposed by the statistical structure of the environment (e.g., Gregory, 1968; Marr, 1982; Landy & Movshon, 1991; Wandell, 1995; Geisler & Kersten, 2002; Purves & Lotto, 2003). 
In previous work, we have elaborated equivalent illuminant models of observer performance for tasks where surface mode or surface color was judged (Speigle & Brainard, 1996; Brainard, Brunt, & Speigle, 1997; see also Brainard, Wandell, & Chichilnisky, 1993; Maloney & Yang, 2001; Boyaci, Maloney, & Hersh, 2003). In such models, the observer is assumed to be correctly performing a constancy computation, with the one exception that their estimate of the illuminant deviates from the actual illuminant. The parameterization of the observer’s illuminant estimate determines the range of performance that may be explained, with the detailed calculation then following from an analysis of the physics of image formation. Here we present an equivalent illuminant model for how perceived lightness varies with surface slant. Our model is essentially identical to that formulated recently by Boyaci et al. (2003). 
Equivalent illuminant model
Overview
Our model is derived from consideration of an inverse optics calculation that could achieve constancy. The inverse optics calculation begins with parameters that describe the illumination geometry. If these parameters match those of the physical scene, the calculation achieves constancy. Deviations in the model’s parameters from those of the scene predict deviations from constancy. In the next sections we describe the physical model of illumination and how this model can be incorporated into an inverse optics calculation to achieve constancy. We then show how the formal development leads to a parametric model of observer performance. 
Physical model
Consider a Lambertian flat matte standard object that is illuminated by a point directional light source. The standard object is oriented at a slant θN with respect to a reference axis (the x-axis in Figure 2). The light source is located at a distance d from the standard surface. The light source azimuth is indicated by θD and the light source declination (with respect to the z-axis) by ϕD.
Figure 2
Reference system centered on the standard object. The standard object is oriented so that its surface normal forms an angle θN with respect to the x-axis. The light source is located at a distance d from this point, the light source azimuth (with respect to the x-axis) is θD, and the light source declination (with respect to the z-axis) is ϕD.
The luminance Li of the light reflected from the standard surface i depends on its surface reflectance ri, its slant θN, and the intensity of the incident light E:

Li = ri E    (1)
When the light arrives only directly from the source, we can write

E = ED    (2)
where

ED = (ID / d²) cos(θN − θD) sin(ϕD)    (3)

Here ID represents the luminous intensity of the light source. Equation 3 applies when −90° ≤ θN − θD ≤ 90°. For a purely directional source and θN − θD outside of this range, ED = 0. 
In real scenes, light from a source arrives both directly and after reflection off other objects. For this reason, the incident light E can be described more accurately as a compound quantity made up of the contribution of directional light ED and some diffuse light EA. The term EA provides an approximate description of the light reflected off other objects in the scene. We rewrite Equation 2 as

E = ED + EA    (4)

and Equation 1 becomes

Li = ri [ (ID / d²) cos(θN − θD) sin(ϕD) + EA ]    (5)
The luminance of the standard surface Li reaches its maximum value when θN = θD and its minimum when |θN − θD| ≥ 90°. In the latter case only the ambient light EA illuminates the standard surface. 
It is useful to simplify Equation 5 by factoring out a multiplicative scale factor α that is independent of θN:

Li = α ri [ cos(θN − θD) + FA ]    (6)

In this expression, α is given by α = (ID / d²) sin(ϕD), and FA = EA / α expresses the ambient illumination relative to the directional component. 
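The chain from the luminance equation through the factored form can be sketched in a few lines of code. This is a minimal illustration assuming the forms reconstructed from the surrounding text (Li = ri E, ED = (ID/d²) cos(θN − θD) sin(ϕD), and the factored Equation 6); the function names are ours, not the authors'.

```python
import math

def luminance(r_i, theta_N, theta_D, phi_D, I_D, d, E_A):
    """Luminance of a Lambertian surface lit by a point directional
    source plus diffuse ambient light (Equations 1-5). Angles in degrees."""
    # Directional irradiance; zero when the source is more than 90 deg
    # from the surface normal (the clipping condition stated in the text).
    E_D = (I_D / d ** 2) * math.cos(math.radians(theta_N - theta_D)) \
          * math.sin(math.radians(phi_D))
    E_D = max(E_D, 0.0)
    return r_i * (E_D + E_A)

def luminance_factored(r_i, theta_N, theta_D, F_A, alpha):
    """Same quantity with the slant-independent scale factored out:
    alpha = (I_D / d**2) * sin(phi_D), F_A = E_A / alpha."""
    c = max(math.cos(math.radians(theta_N - theta_D)), 0.0)
    return alpha * r_i * (c + F_A)
```

With θN = θD the direct term is maximal; with the source behind the surface only the ambient term remains, matching the limiting cases described in the text.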
Physical model fit
How well does the physical model describe the illumination in our apparatus? We measured the luminance of our standard objects under all experimental slants, and averaged these measurements over standard object reflectance. Figure 3 (solid circles) shows the resulting luminances from each experiment of the companion work (Ripamonti et al., 2004) plotted versus the standard object slant. For each experiment, the measurements are normalized to a value of 1 at θN = 0°. We denote the normalized luminances by L̂(θN). The solid curves in Figure 3 denote the best fit of Equation 6 to the measurements, where θD, FA, and α were treated as free parameters and chosen to minimize the mean squared error between model predictions and measured normalized luminances. 
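The fit just described can be sketched as a brute-force search. This is an illustrative reconstruction, not the authors' code: we grid-search the azimuth θD and the relative ambient FA, and at each grid point solve for the error-minimizing scale α in closed form.

```python
import math

def _basis(slants, theta_D, F_A):
    # cos(theta_N - theta_D) clipped at zero, plus the relative ambient F_A.
    return [max(math.cos(math.radians(s - theta_D)), 0.0) + F_A for s in slants]

def fit_physical_model(slants, lums):
    """Fit Equation 6 (up to scale) to normalized luminances by
    minimizing mean squared error, as in the text's Figure 3 fit."""
    best_err, best = float("inf"), None
    for td in range(-900, 901, 5):          # theta_D: -90 to 90 deg, 0.5 deg steps
        theta_D = td / 10.0
        for fa in range(0, 201, 2):         # F_A: 0 to 2, steps of 0.02
            F_A = fa / 100.0
            b = _basis(slants, theta_D, F_A)
            # Optimal scale alpha for this (theta_D, F_A) in closed form.
            alpha = sum(y * x for y, x in zip(lums, b)) / sum(x * x for x in b)
            err = sum((y - alpha * x) ** 2 for y, x in zip(lums, b)) / len(lums)
            if err < best_err:
                best_err, best = err, (theta_D, F_A, alpha)
    return best, best_err
```

Fitting noise-free synthetic luminances generated by the model itself recovers the generating parameters up to the grid resolution.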
Figure 3
The green symbols represent the relative normalized luminance measured for standard objects used in Ripamonti et al. (2004), and the colored curves illustrate the fit of the model described in the text. The top panel corresponds to the light source set-up used in Experiments 1 and 2, the middle panel to Experiment 3 with the light source on the left, and the bottom panel to Experiment 3 with the light source on the right.
The fitting procedure returns two estimated parameters of interest: the azimuth θD of the light source and the amount FA of ambient illumination. (The scalar α simply normalizes the predictions in accordance with the normalization of the measurements.) We can represent these parameters in a polar plot, as shown in Figure 4. The azimuthal position of the plotted points represents θD, while the radius v at which the points are plotted is a function of FA:

v = 1 / (1 + FA)    (7)
If the light incident on the standard is entirely directional, the radius of the plotted point will be 1; if the incident light is entirely ambient, the radius will be 0. 
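The polar representation can be made concrete with a few lines of code. This is a sketch assuming the radius relation v = 1/(1 + FA) reconstructed above; the function name is ours.

```python
import math

def polar_point(theta_D, F_A):
    """Map illuminant parameters to the Figure 4 polar plot: azimuth
    theta_D (degrees), radius v = 1 / (1 + F_A)."""
    v = 1.0 / (1.0 + F_A)
    x = v * math.cos(math.radians(theta_D))
    y = v * math.sin(math.radians(theta_D))
    return v, (x, y)
```

Purely directional light (FA = 0) plots on the unit circle; as the ambient share grows, the point moves toward the origin.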
Figure 4
Light source position estimates of the physical model. Green lines represent the light source azimuth as measured in the apparatus. In Experiments 1, 2, and 3 (light source on the left), the actual azimuth was θD = −36°. In Experiment 3 (light source on the right), the actual azimuth was θD = 23°. The red symbol represents the light source azimuth estimated by the model for Experiments 1 and 2 (θD = −25°). For the light source on the left in Experiment 3, the model estimate is indicated in blue (θD = −30°); for the light source on the right, in purple (θD = 25°). The radius of the plotted points provides information about the relative contributions of directional and ambient illumination to the light incident on the standard object through Equation 7. The radius of the outer circle in the plot is 1. The parameter values obtained for FA are FA = 0.18 (Experiments 1 and 2), FA = 0.43 (Experiment 3, left), and FA = 0.43 (Experiment 3, right).
The physical model provides a good fit to the dependence of the measured luminances on standard object slant. It should be noted, however, that the recovered azimuth of the directional light source differs from our direct measurement of this azimuth. The most likely source of this discrepancy is that the ambient light arising from reflections off the chamber walls has some directional dependence. This dependence is absorbed into the model’s estimate of θD.
Equivalent illuminant model
Suppose an observer has full knowledge of the illumination and scene geometry and wishes to estimate the reflectance of the standard surface from its luminance. From Equation 6 we obtain the estimate

r̃i = Li / ( α [ cos(θN − θD) + FA ] )    (8)

We use a tilde to denote perceptual analogs of physical quantities. 
To the extent that the physical model accurately predicts the luminance of the reflected light, Equation 8 predicts that the observer’s estimates of reflectance will be correct, and thus Equation 8 predicts lightness constancy. To elaborate Equation 8 into a parametric model that allows failures of constancy, we replace the parameters that describe the illuminant with perceptual estimates of these parameters:

r̃i = Li / ( α̃ [ cos(θN − θ̃D) + F̃A ] )    (9)

where θ̃D and F̃A are perceptual analogs of θD and FA. Note that the dependence of r̃i on slant in Equation 9 is independent of ri.
Equation 9 predicts an observer’s reflectance estimates as a function of surface slant, given the parameters θ̃D and F̃A of the observer’s equivalent illuminant. These parameters describe the illuminant configuration that the observer uses in his or her inverse optics computation. 
Our data analysis procedure aggregates observer matches over standard object reflectance to produce relative normalized matches r̂(θN). The relative normalized matches describe the overall dependence of observer matches on slant. To link Equation 9 with the data, we assume that the normalized relative matches obtained in our experiment (see “Appendix” of Ripamonti et al., 2004) are proportional to the computed r̃i, leading to the model prediction

r̂(θN) = β L̂(θN) / [ cos(θN − θ̃D) + F̃A ]    (10)

where β is a constant of proportionality that is determined as part of the model fitting procedure. In Equation 10 we have substituted the normalized luminance L̂(θN) for Li because the contribution of surface reflectance ri can be absorbed into β.
Equation 10 provides a parametric description of how our measurements of perceived lightness should depend on slant. By fitting the model to the measured data, we can evaluate how well the model is able to describe performance, and whether it can capture the individual differences we observe. In fitting the model, the two parameters of interest are θ̃D and F̃A, while the parameter β simply accounts for the normalization of the data. 
In generating the model predictions, values for θN and Li are taken as veridical physical values. It would be possible to develop a model in which these were also treated as perceptual quantities and thus fit to the data. Without constraints on how θ̃N and L̃i are related to their physical counterparts, however, allowing these as parameters would lead to excessive degrees of freedom in the model. In our slant matching experiment, observers’ perception of slant was close to veridical, and thus using the physical values of θN seems justified. We do not have independent measurements of how the visual system registers luminance. 
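The model prediction can be implemented in a few lines. The sketch below uses the form β L̂(θN) / (cos(θN − θ̃D) + F̃A), our reconstruction of Equation 10 from the surrounding text, and demonstrates its two limiting cases: with veridical equivalent illuminant parameters the prediction is flat across slant (constancy), while a very large ambient estimate makes it track the luminance.

```python
import math

def predicted_match(theta_N, lum_hat, theta_Dt, F_At, beta):
    """Predicted normalized relative match given the normalized luminance
    lum_hat and the equivalent illuminant (theta_Dt, F_At). The cosine
    clip mirrors the physical model (an assumption of this sketch)."""
    denom = max(math.cos(math.radians(theta_N - theta_Dt)), 0.0) + F_At
    return beta * lum_hat / denom

def normalized_luminance(theta_N, theta_D, F_A):
    # Physical model of Equation 6, up to the scale absorbed by beta.
    return max(math.cos(math.radians(theta_N - theta_D)), 0.0) + F_A
```

When the equivalent illuminant equals the physical illuminant, the luminance dependence cancels and the predicted match is constant; as F̃A grows without bound, the denominator becomes slant-independent and the prediction follows the luminance.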
Model fit
Fitting the model
For each observer, we used numerical search to fit the model to the data. The search procedure found the equivalent illuminant parameters θ̃D (light source azimuth) and F̃A (relative ambient), as well as the overall scaling parameter β, that provided the best fit to the data. The best fit was determined as follows. For each of the three sessions k = 1, 2, 3 we found the normalized relative matches for that session, r̂k(θN). We then found the parameters that minimized the mean squared error between the model’s prediction and these r̂k(θN). The reason for computing the individual session matches and fitting to these, rather than fitting directly to the aggregate r̂(θN), is that the former procedure allows us to compare the model’s fit to that obtained by fitting the session data at each slant to its own mean. 
Model fit
Model fit results are illustrated in the left-hand columns of Figures 5 to 10. The dot symbols are observers’ normalized relative matches, and the orange curve in each panel shows the best fit of our model. We also show the predictions for luminance and constancy matches as, respectively, a blue or red dashed line. The right-hand columns of Figures 5 to 10 show the model’s θ̃D and F̃A for each observer, using the same polar format introduced in Figure 4.
Figure 5
Model fit to observers’ relative normalized matches. In the left column the green dots represent observers’ relative normalized matches as a function of slant for Experiment 1. Error bars indicate 90% confidence intervals. The orange curve is the model’s best fit for that observer. The blue dashed curve represents predictions for luminance matches and the red dashed line for constancy matches. The right column shows the equivalent illuminant parameters (green symbols) in the same polar format introduced in Figure 4. The polar plot also shows the illuminant parameters obtained by fitting the physical model to the measured luminances (red symbols). The numbers at the top left of each data plot are the error-based constancy index for the observer, while those at the top left of the polar plots are the corresponding model-based index, derived from the equivalent illuminant parameters.
Figure 6
Model fit to observers’ relative normalized matches for Experiment 2. Same format as Figure 5.
Figure 7
Model fit to observers’ relative normalized matches for Experiment 3 (light on the left, Neutral Instructions). Same format as Figure 5.
Figure 8
Model fit to observers’ relative normalized matches for Experiment 3 (light on the right, Neutral Instructions). Same format as Figure 5.
Figure 9
Model fit to observers’ relative normalized matches for Experiment 3 (light on the left, Paint Instructions). Same format as Figure 5.
Figure 10
Model fit to observers’ relative normalized matches for Experiment 3 (light on the right, Paint Instructions). Same format as Figure 5.
With only a few exceptions, the equivalent illuminant model captures the wide range of performance exhibited by individual observers in our experiment. To evaluate the quality of the fit, we can compare the mean squared error for the equivalent illuminant model to the variability in the data. To make this comparison, we also fit the r̂k(θN) at each session and slant by their own means. For each observer, the resulting mean squared error ε²min is a lower bound on the mean squared error that could be obtained by any model. A figure of merit for the equivalent illuminant model is then the quantity

ηequiv = ε²equiv / ε²min

where ε²equiv is the mean squared error of the equivalent illuminant model fit. This quantity should be near unity if the model fits well, and values greater than unity indicate fit error in units yoked to the precision of the data. Across all our observers and light source positions, the mean value of ηequiv was 1.23, indicating a good but not perfect fit. 
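In code, the figure of merit might be computed as below. This is a sketch; we assume η is the ratio of the model's mean squared error to the lower bound obtained by fitting the session matches at each slant with their own mean.

```python
def mse_lower_bound(matches_by_slant):
    """Minimum achievable mean squared error: fit the session matches at
    each slant (one inner list per slant) by their own mean."""
    sq, n = 0.0, 0
    for values in matches_by_slant:
        m = sum(values) / len(values)
        sq += sum((v - m) ** 2 for v in values)
        n += len(values)
    return sq / n

def eta(model_mse, matches_by_slant):
    # Near 1 when the model fits as well as the per-slant means allow;
    # values above 1 measure misfit in units of the data's precision.
    return model_mse / mse_lower_bound(matches_by_slant)
```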
For comparison, we also computed η values associated with four other models. These are

  • luminance matching: r̂(θN) = β L̂(θN)
  • lightness constancy: r̂(θN) = β
  • mixture: r̂(θN) = β [ λ L̂(θN) + (1 − λ) ]
  • quadratic: r̂(θN) = a θN² + b θN + c

The mixture model describes observers whose responses are an additive mixture of luminance matching and lightness constancy matches. If this model fit well, the mixing parameter λ could be interpreted as describing the matching strategy adopted by different observers. The quadratic model has no particular theoretical significance, but has the same number of parameters as our equivalent illuminant model and predicts smoothly varying functions of θN. The dark bars in Figure 11 show the mean η values for all five models. We see that the error for the equivalent illuminant model is lower than that for the four comparison models. This difference is statistically significant at the p < .0001 level for all models, as determined by a sign test on the η values obtained for each observer/light source position combination.
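For concreteness, the four comparison predictors can be written out as below. This is a sketch using parameterizations reconstructed from the text's descriptions; the exact forms used by the authors are not recoverable from the extracted article.

```python
def luminance_model(lum_hat, beta):
    return beta * lum_hat                       # match tracks the luminance

def constancy_model(lum_hat, beta):
    return beta                                 # match is flat across slant

def mixture_model(lum_hat, beta, lam):
    # Additive mixture of luminance matching (lam = 1) and constancy (lam = 0).
    return beta * (lam * lum_hat + (1.0 - lam))

def quadratic_model(theta_N, a, b, c):
    # Atheoretical smooth curve with three parameters, like the
    # equivalent illuminant model.
    return a * theta_N ** 2 + b * theta_N + c
```

At its endpoints the mixture model reduces exactly to the two simpler models, which is what makes λ interpretable as a matching strategy.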
Figure 11
Evaluation of model fits. Dark bars show the mean η values obtained when the matching data for each slant, session, and observer are fitted by the equivalent illuminant model and the four comparison models described in the text. Also shown is the η value when each r̂k(θN) is fit by its own mean. This value is labeled Precision and is constrained by the definition of η to be unity. No model can have an η less than unity. Light bars show the cross-validation η values.
The various models evaluated above have different numbers of parameters. For this reason, it is worth asking whether the equivalent illuminant model performs better simply because it overfits the data. Answering this question is difficult. Selection amongst non-nested and/or nonlinear models remains a topic of active investigation (see the special issue on model selection: Journal of Mathematical Psychology, 2000, Vol. 44), and the literature does not yet provide a recipe. Here we adopt a cross-validation approach. 
Our measurements consist of the r̂k(θN) measured in three sessions. We selected the data from each possible pair of sessions and used the result to fit each model. Then, for each model and session pair, we evaluated how well the model fit the session data that had been excluded from the fitting procedure, using the same η metric described above. The intuition is that a model that overfits the data should generalize poorly and have high cross-validation η values, while a model that captures structure in the data should generalize well and have low cross-validation η values. 
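The session-pair scheme can be sketched generically: fit on each pair of sessions and score on the held-out session. Here `fit_fn` and `score_fn` are placeholders for any model's fitting routine and error metric; the scaffolding is ours.

```python
from itertools import combinations

def cross_validate(sessions, fit_fn, score_fn):
    """For each pair of sessions, fit on the pair and evaluate the error
    on the session excluded from fitting; return the mean held-out error."""
    scores = []
    for pair in combinations(range(len(sessions)), 2):
        held_out = next(i for i in range(len(sessions)) if i not in pair)
        params = fit_fn([sessions[i] for i in pair])
        scores.append(score_fn(sessions[held_out], params))
    return sum(scores) / len(scores)
```

With three sessions this yields three fits, each evaluated on the one session its parameters never saw.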
The light bars in Figure 11 show the cross-validation η values we obtained. The equivalent illuminant model continues to perform best. Note that the cross-validation η value obtained when the data for each session is predicted from the mean of the other two sessions (labeled “Precision”) is higher than that obtained for the equivalent illuminant model. This difference is statistically significant (sign test, p < .005). 
Although the equivalent illuminant model provides the best fit among those we examined, it does not account for all of the systematic structure in the data. ANOVAs conducted on the model residuals indicated that these depend on surface slant in a statistically significant manner for several of our conditions (Experiment 1, p = .14; Experiment 2, p = .14; Experiment 3 Left Neutral, p < .005; Experiment 3 Right Neutral, p < .005; Experiment 3 Left Paint, p < .1; Experiment 3 Right Paint, p < .005). The systematic nature of the residuals was more salient for all four of the comparison models (p < .001 for all models/conditions) than for the equivalent illuminant model. 
Discussion
Using the model
The equivalent illuminant model allows interpretation of the large individual differences observed in our experiments. In the context of the model, these differences are revealed as variation in the equivalent illuminant parameters θ̃D and F̃A, rather than as a qualitative difference in the manner in which observers perform the matching task. In the polar plots we see that for each condition, the equivalent illuminant parameters lie roughly between the origin and the corresponding physical illuminant parameters. Observers whose data resemble luminance matching have parameters that plot close to the origin, while those whose data resemble constancy matching have parameters that plot close to those of the physical illuminant. This pattern in the data reflects the fact that observers’ performance lies between that of luminance matching and lightness constancy. The fact that many observers have illuminant parameters that differ from the corresponding physical values could be interpreted as an indication of the computational difficulty of estimating light source position and relative ambient from image data. 
Various patterns in the raw data shown by many observers, particularly the sharp drop in match at θN = 60° when the light is on the left and the non-monotonic variation of the matches with increasing slant, require no special explanation in the context of the equivalent illuminant model. Both of these patterns are predicted by the model for reasonable values of the parameters. Indeed, we were struck by the richness of the model’s predictions for relatively small changes in parameter values. 
A question of interest in Experiment 3 was whether observers are sensitive to the actual position of the light source. Comparison of θ̃D across changes in the light source position indicates that they are. The average value of θ̃D when the light source was on the left in Experiment 3 was −35°, compared to 16° when it was on the right. The shift in equivalent illuminant azimuth of 51° is comparable to the corresponding shift in the physical model parameter (55°). 
Model-based constancy index
In the companion study, we developed a constancy index based on comparing the fit error for luminance matching and constancy. Such indices provide a summary of what the data imply about lightness constancy. At the same time, any given constancy index is of necessity somewhat arbitrary. It is therefore of interest to derive a model-based constancy index and compare it with the error-based index. 
Let the vector

d = v [ cos θD, sin θD ]ᵀ    (11)

be a function of the physical model’s parameters θD and FA, with the scalar v computed from FA using Equation 7 above. Let the vector d̃ be the analogous vector computed from the equivalent illuminant model parameters θ̃D and F̃A. Then we define the model-based constancy index as

CIm = 1 − ‖ d − d̃ ‖ / ‖ d ‖    (12)

This index takes on a value of 1 when the equivalent illuminant model parameters match the physical model parameters, and a value near 0 when the equivalent illuminant model parameter F̃A is very large. This latter case corresponds to the situation where the model predicts luminance matching. 
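Under our reconstruction of the index, CIm = 1 − ‖d − d̃‖/‖d‖ with each vector lying at azimuth θD and radius 1/(1 + FA), the computation is a few lines. Both the radius relation and the exact distance form are our assumptions from the surrounding text.

```python
import math

def constancy_index(theta_D, F_A, theta_Dt, F_At):
    """Model-based constancy index: 1 minus the distance between the
    equivalent and physical illuminant points in the polar plot,
    relative to the length of the physical point."""
    def point(azimuth, ambient):
        v = 1.0 / (1.0 + ambient)               # radius from relative ambient
        a = math.radians(azimuth)
        return v * math.cos(a), v * math.sin(a)
    px, py = point(theta_D, F_A)
    qx, qy = point(theta_Dt, F_At)
    return 1.0 - math.hypot(px - qx, py - qy) / math.hypot(px, py)
```

Matching parameters give an index of exactly 1; a very large ambient estimate collapses the equivalent illuminant point to the origin and drives the index toward 0, the luminance matching limit.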
We have computed CIm for each observer/condition, and the resulting values are indicated at the top left of each polar plot in Figures 5–10. The model-based constancy index ranges from 0.23 to 0.91, with a mean of 0.57 and a median of 0.57. These values are larger than those obtained with the error-based index (mean/median 0.40). Figure 12 shows a scatter plot of the two indices, which are correlated at r = 0.73. The discrepancy between the two indices provides a sense of the precision with which they should be interpreted. Given the computational difficulty of recovering lighting geometry from images, we regard the average degree of constancy shown by the observers (∼0.40 to ∼0.57) as a fairly impressive achievement. The large individual variability in performance remains clear in Figure 12.
Figure 12
 
Scatter plot of error-based versus model-based constancy indices. Each point represents the two indices of one observer. For Experiment 3, indices for left and right light source positions are plotted separately.
Interpreting the model parameters
The equivalent illuminant model has two parameters, θ̂D and F̂A, that describe the lighting geometry. These parameters are not, however, set by measurements of the physical lighting geometry but are fit to each observer’s data. Given the equivalent illuminant parameters, the model predicts the lightness matches through an inverse optics calculation. 
It is tempting to associate the parameters θ̂D and F̂A with observers’ consciously accessible estimates of the illumination geometry. Because our experiments do not explicitly measure this aspect of perception, we have no empirical basis for making the association. In interpreting the parameters as observer estimates of the illuminant, it is important to bear in mind that they are derived from surface lightness matching data, and thus, at present, should be treated as illuminant estimates only in the context of our model of surface lightness. It is possible that a future explicit comparison could tighten the link between the derived parameters and conscious perception of the illuminant. Prior attempts to make such links between implicit and explicit illumination perception, however, have not led to positive results (see e.g., Rutherford & Brainard, 2002). 
Independent of the connection between model parameters and explicitly judged illumination properties, equivalent illuminant models are valuable to the extent (a) that they provide a parsimonious account of rich data sets and (b) that their parameters can be predicted by computational algorithms that estimate illuminant properties (e.g., Brainard, Kraft, & Longère, 2003; Brainard et al., 2004). As computational algorithms for estimating illumination geometry become available, our hope is that these may be used in conjunction with the type of equivalent illuminant model presented here to predict perceived surface lightness directly from the image data. 
Acknowledgments
This work was supported by National Institutes of Health Grant EY 10016. We thank B. Backus, H. Boyaci, L. Maloney, R. Murray, J. Nachmias, and S. Sternberg for helpful discussions. 
Commercial relationships: none. 
Corresponding author: David Brainard. 
Address: Department of Psychology, University of Pennsylvania, Suite 302C, 3401 Walnut Street, Philadelphia, PA 19104. 
Footnotes
1  A Lambertian surface is a uniformly diffusing surface with constant luminance regardless of the direction from which it is viewed.
2  A light source whose distance from the illuminated object is at least 5 times its main dimension is considered to be a good approximation of a point light source (Kaufman & Christensen, 1972).
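The rule of thumb in Footnote 2 is a simple distance-to-size ratio check. The sketch below (function and parameter names are my own, not from the source) encodes it directly.

```python
def is_point_source(distance, main_dimension, ratio=5.0):
    """Rule of thumb after Kaufman & Christensen (1972): a light source is
    well approximated as a point source when its distance from the
    illuminated object is at least `ratio` (default 5) times the source's
    main dimension."""
    return distance >= ratio * main_dimension

# A 10-cm lamp viewed from 100 cm satisfies the 5x criterion; from 30 cm it does not.
print(is_point_source(distance=100.0, main_dimension=10.0))  # True
print(is_point_source(distance=30.0, main_dimension=10.0))   # False
```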
References
Boyaci, H., Maloney, L. T., & Hersh, S. (2003). The effect of perceived surface orientation on perceived surface albedo in binocularly viewed scenes. Journal of Vision, 3(8), 541–553, http://journalofvision.org/3/8/2/, doi:10.1167/3.8.2.
Brainard, D. H., Brunt, W. A., & Speigle, J. M. (1997). Color constancy in the nearly natural image. 1. Asymmetric matches. Journal of the Optical Society of America A, 14, 2091–2110.
Brainard, D. H., Kraft, J. M., & Longère, P. (2003). Color constancy: Developing empirical tests of computational models. In R. Mausfeld & D. Heyer (Eds.), Colour perception: Mind and the physical world (pp. 307–334). Oxford: Oxford University Press.
Brainard, D. H., Longère, P., Kraft, J. M., Delahunt, P. B., Freeman, W. T., & Xiao, B. (2004). Computational models of human color constancy. Paper presented at the Meeting on Computational & Systems Neuroscience, Cold Spring Harbor Laboratories, New York.
Brainard, D. H., & Wandell, B. A. (1992). Asymmetric color-matching: How color appearance depends on the illuminant. Journal of the Optical Society of America A, 9(9), 1433–1448.
Brainard, D. H., Wandell, B. A., & Chichilnisky, E.-J. (1993). Color constancy: From physics to appearance. Current Directions in Psychological Science, 2, 165–170.
Cornsweet, T. N. (1970). Visual perception. New York: Academic Press.
Geisler, W. S., & Kersten, D. (2002). Illusions, perception, and Bayes. Nature Neuroscience, 5, 508–510.
Gregory, R. L. (1968). Perceptual illusions and brain models. Proceedings of the Royal Society of London B, 171, 179–196.
Helmholtz, H. (1896). Physiological optics. New York: Dover Publications, Inc.
Kaufman, J. E., & Christensen, J. F. (Eds.) (1972). IES lighting handbook: The standard lighting guide (5th ed.). New York: Illuminating Engineering Society.
Krantz, D. (1968). A theory of context effects based on cross-context matching. Journal of Mathematical Psychology, 5, 1–48.
Landy, M. S., & Movshon, J. A. (Eds.) (1991). Computational models of visual processing. Cambridge, MA: MIT Press.
Maloney, L. T., & Yang, J. N. (2001). The illuminant estimation hypothesis and surface color perception. In R. Mausfeld & D. Heyer (Eds.), Colour perception: From light to object. Oxford: Oxford University Press.
Marr, D. (1982). Vision. San Francisco: W. H. Freeman.
Purves, D., & Lotto, R. B. (2003). Why we see what we do: An empirical theory of vision. Sunderland, MA: Sinauer.
Ripamonti, C., Bloj, M., Mitha, K., Greenwald, S., Hauck, R., Maloney, S. I., & Brainard, D. H. (2004). Measurements of the effect of surface slant on perceived lightness. Journal of Vision, 4(9), 747–763, http://journalofvision.org/4/9/7/, doi:10.1167/4.9.7.
Rutherford, M. D., & Brainard, D. H. (2002). Lightness constancy: A direct test of the illumination estimation hypothesis. Psychological Science, 13, 142–149.
Speigle, J. M., & Brainard, D. H. (1996). Luminosity thresholds: Effects of test chromaticity and ambient illumination. Journal of the Optical Society of America A, 13(3), 436–451.
Stiles, W. S. (1967). Mechanism concepts in colour theory. Journal of the Colour Group, 11, 106–123.
Wandell, B. A. (1995). Foundations of vision. Sunderland, MA: Sinauer.
Figure 1
 
Normalized relative matches, replotted from Ripamonti et al. (2004). Data are for observer HWK (Paint Instructions), observer EEP (Neutral Instructions), and observer FGS (Neutral Instructions). See companion study for experimental details. Blue dashed lines show luminance matching predictions; red dashed lines show lightness constancy predictions.
Figure 2
 
Reference system centered on the standard object. The standard object is oriented so that its surface normal forms an angle θN with respect to the x-axis. The light source is located at a distance d from this point, the light source azimuth (with respect to the x-axis) is θD, and the light source declination (with respect to the z-axis) is ϕD.
Figure 3
 
The green symbols represent the relative normalized luminance measured for the standard objects used in Ripamonti et al. (2004), and the colored curves illustrate the fit of the model described in the text. The top panel corresponds to the light source set-up used in Experiments 1 and 2, the middle panel to Experiment 3 with the light source on the left, and the bottom panel to Experiment 3 with the light source on the right.
Figure 4
 
Light source position estimates of the physical model. Green lines represent the light source azimuth as measured in the apparatus. In Experiments 1, 2, and 3 (light source on the left), the actual azimuth was θD = −36°. In Experiment 3 (light source on the right), the actual azimuth was θD = 23°. The red symbol represents the light source azimuth estimated by the model for Experiments 1 and 2 (θD = −25°). For the light source on the left in Experiment 3, the model estimate is indicated in blue (θD = −30°); for the light source on the right, in purple (θD = 25°). The radius of the plotted points provides information about the relative contributions of directional and ambient illumination to the light incident on the standard object through Equation 7. The radius of the outer circle in the plot is 1. The parameter values obtained for FA are FA = 0.18 (Experiments 1 and 2), FA = 0.43 (Experiment 3, left), and FA = 0.43 (Experiment 3, right).
Figure 5
 
Model fit to observers’ relative normalized matches. In the left column the green dots represent observers’ relative normalized matches as a function of slant for Experiment 1. Error bars indicate 90% confidence intervals. The orange curve is the model’s best fit for that observer. The blue dashed curve represents predictions for luminance matches and the red dashed line for constancy matches. The right column shows the equivalent illuminant parameters (green symbols) in the same polar format introduced in Figure 4. The polar plot also shows the illuminant parameters obtained by fitting the physical model to the measured luminances (red symbols). The numbers at the top left of each data plot are the error-based constancy index for the observer, while those at the top left of the polar plots are the corresponding model-based index, derived from the equivalent illuminant parameters.
Figure 6
 
Model fit to observers’ relative normalized matches for Experiment 2. Same format as Figure 5.
Figure 7
 
Model fit to observers’ relative normalized matches for Experiment 3 (light on the left, Neutral Instructions). Same format as Figure 5.
Figure 8
 
Model fit to observers’ relative normalized matches for Experiment 3 (light on the right, Neutral Instructions). Same format as Figure 5.
Figure 9
 
Model fit to observers’ relative normalized matches for Experiment 3 (light on the left, Paint Instructions). Same format as Figure 5.
Figure 10
 
Model fit to observers’ relative normalized matches for Experiment 3 (light on the right, Paint Instructions). Same format as Figure 5.
Figure 11
 
Evaluation of model fits. Dark bars show the mean η values obtained when the matching data for each slant, session, and observer are fitted by the equivalent illuminant model and the four comparison models described in the text. Also shown is the η value when each normalized match is fit by its own mean. This value is labeled Precision and is constrained by the definition of η to be unity. No model can have an η less than unity. Light bars show the cross-validation η values.