Article  |   December 2013
Edge integration in achromatic color perception and the lightness–darkness asymmetry
Author Affiliations
  • Michael E. Rudd
    Howard Hughes Medical Institute, Department of Physiology and Biophysics, University of Washington School of Medicine, Seattle, WA, USA
    mrudd@u.washington.edu
Journal of Vision, December 2013, Vol. 13(14):18. doi: https://doi.org/10.1167/13.14.18
Abstract
To maintain color constancy, the human visual system must distinguish surface reflectance-based variations in wavelength and luminance from variations due to illumination. Edge integration theory proposes that this is accomplished by spatially integrating steps in luminance and color contrast that likely result from reflectance changes. Thus, a neural representation of relative reflectance within the visual scene is constructed. An anchoring rule (the largest reflectance in the neural representation appears white) is then applied to map relative lightness onto an absolute lightness scale. A large body of data on human lightness judgments is here shown to be consistent with an edge integration model in which the visual system performs a weighted sum of steps in log luminance across space. Three hypotheses are proposed regarding how weights are applied to edges. First, weights decline with distance from the target surface whose lightness is being computed. Second, larger weights are given to edges whose dark sides point towards the target. Third, edge integration is carried out along a path leading from a common background field, or surround, to the target location. The theory accounts for simultaneous contrast; quantitative lightness judgments made with classical disk-annulus, Gilchrist dome, and Gelb displays; and the perceptual filling-in of lightness. A cortical theory of lightness in the ventral stream of visual cortex (areas V1 → V4) is proposed to instantiate the edge integration algorithm. The neural model is shown to be capable of unifying the quantitative laws of edge integration in lightness perception with the laws governing brightness, including Stevens' power law of brightness, and makes novel predictions about the quantitative laws governing induced darkness.

Introduction
When we look at the world around us, we see a visual environment composed largely of illuminated surfaces. Information about the properties of those surfaces is delivered to the eye in the form of light that is either reflected from or transmitted through the surfaces. Nevertheless, the basic problem of estimating surface material properties from the retinal image is ill-posed because the image that the eye sees is determined partly by the material properties of the surfaces and partly by the nature of the light that illuminates the surfaces. To disentangle the separate causal effects of reflectance and illumination in producing the retinal image is arguably the fundamental problem of color vision. To the extent that the visual system is able to “discount the illuminant” (von Helmholtz, 1866/1924) in order to get at information about surface reflectances, we say that the visual system exhibits “color constancy.” Recent data suggest that color constancy holds to a large degree in human vision (Brainard, 2004), but the neural mechanisms by which constancy is achieved are still largely unknown. 
One of the first people to consider the problem of color constancy from the viewpoint of neural computation was Edwin Land, founder of the Polaroid Corporation. In the 1950s, Land devised a demonstration of color constancy that appeared to him to be at odds with the then-dominant thinking about human color vision. To understand the importance of Land's work, it helps to first consider a case in which constancy is not achieved by our visual system. In an otherwise dark room, a white surface is illuminated by a monochromatic blue, green, or red spotlight (Figure 1a). An observer who is asked to report the color of the surface will most likely say that it is either blue, green, or red, depending on which of the three monochromatic illuminants is used. In other words, the subject's judgment of surface color completely tracks the color of the illuminant, rather than the reflectance properties of the surface. This is the opposite of color constancy: a complete failure of constancy. 
Figure 1
 
Example of the effect of spatial context on surface color perception. (a) A white disk, which reflects light of all wavelengths equally, appears to have the color of a hidden monochromatic illuminant when viewed in an otherwise dark room. (b) Land's experiment: an array of papers of various colors is illuminated homogeneously by a combination of three narrowband short-, medium-, and long-wavelength lights. As long as all three lights are turned on, the amplitudes of the lights can be changed without greatly affecting the color appearance of the papers (color constancy).
If we now replace the single white surface with a hodge-podge of surfaces whose actual colored paint jobs differ (Figure 1b), and illuminate the entire set of surfaces with a single illuminant composed of a weighted sum of three monochromatic primaries spanning the range of wavelengths to which our three cone types are sensitive, then a very different outcome is observed. Now the intensities of the three primaries can be varied substantially—so that the illuminant itself varies greatly in hue—and the color appearance of the individual surfaces within the hodge-podge will be little affected by the illumination changes. Apparently, increasing the complexity of the illuminated surfaces changes the situation from one in which the visual system exhibits no color constancy to one in which color constancy is largely manifest. It is this second experiment that Land performed. 
Despite much subsequent work on the color constancy problem, we still do not know how the visual nervous system achieves near constancy even in the relatively simple case of Land's demonstration. One thing that is clear is that the color of any given surface within the scene is somehow neurally “computed” by comparing the retinal data associated with that surface (its overall luminance, wavelength composition, etc.) with the data provided by other scene surfaces. This referencing of surface appearance to the other surfaces in the scene serves to stabilize the perceived color of the target surface under variations in illumination. 
To account for his experimental results, Land and his colleague John McCann proposed a theory that they called “retinex” because it was inspired by ideas about neural processing in the retina and cortex as they were understood at the time that the theory was introduced (Land & McCann, 1971; see also Land, 1977, 1983, 1986a, 1986b). The stimuli that retinex was designed to take as its input are made up of collections of surfaces with arbitrary reflectances and shapes, all lying in the same two-dimensional plane (see Figures 1b and 2). The reflectance of each surface is uniform. The surfaces are delimited by hard edges. The illumination is assumed to be uniform, but its overall intensity is free to vary. Land and McCann referred to such stimuli as “mondrian” patterns after the Dutch painter Piet Mondrian, whose paintings they were thought to resemble (Land, 1964). In what follows, I will restrict my further discussion of retinex to achromatic stimuli because the focus of the paper is on modeling lightness perception. 
Figure 2
 
How retinex works. The input is an achromatic Land “mondrian” pattern: a collection of papers having arbitrary shapes and a range of gray-scale reflectances, all lit by the same homogenous illuminant, and separated by hard edges. The goal is to assign lightness values (i.e., reflectance estimates) to all of the papers in the mondrian. (a) To compute the relative reflectances of any two papers in the mondrian—for example, the dark gray (9% reflectance) and white (90% reflectance) papers in the illustration—retinex first computes the local luminance ratio at an edge lying along a path between the two papers: for example, the edge between the light gray (45% reflectance) paper and the white paper. Because the mondrian papers are all viewed under the same illuminant, the luminance ratio at this edge is the same as the reflectance ratio of the two papers. (b) Other luminance ratios lying along the path connecting the target papers are similarly computed: for example, the luminance ratio between the light gray (45% reflectance) and dark gray (9% reflectance) papers. The relative reflectance of the two target papers is computed by multiplying the local luminance ratios encountered along the path connecting the two papers, whether or not the papers are contiguous. Absolute lightness judgments are arrived at by arbitrarily assigning the value “white” (a presumed reflectance value of 90%) to the mondrian paper or papers with the highest luminance, then computing reflectance estimates for the other mondrian papers from each paper's reflectance ratio relative to the highest luminance paper or papers, as determined by the chain multiplication of local luminance ratios. The retinex procedure may err in its estimation of absolute reflectance, but it gets the reflectance ratios of the mondrian papers right. Land and McCann (1971) suggested that the computation of local luminance ratios might be carried out by “edge detector” neurons (Hubel and Wiesel cells) in visual cortex. The receptive fields of such neurons are illustrated in cartoon form by the oriented blue icons in the figure.
To compute the lightness of any given patch within a mondrian, retinex begins by computing the luminance ratio at an edge between the target patch and one of its neighboring patches (Figure 2a). The luminance of the target patch can then be unambiguously related to that of any other mondrian patch, whether or not the target and comparison patches are contiguous, by multiplying the luminance ratio at the patch edge by the luminance ratios encountered along a path linking the target and comparison patches (Figure 2b). The resulting chain multiplication is represented symbolically by Equation 1:
\[
\frac{L_1}{L_n} \;=\; \frac{L_1}{L_2}\cdot\frac{L_2}{L_3}\cdots\frac{L_{n-1}}{L_n} \;=\; \prod_{i=1}^{n-1}\frac{L_i}{L_{i+1}}, \tag{1}
\]
where L_i denotes the luminance of patch i, the patch index 1 corresponds to the target, the index n corresponds to the comparison patch, and the indices between 1 and n correspond to the patches lying between the target and comparison patches along the path over which the chain multiplication is performed. 
Land and McCann argued for the neural plausibility of the idea that lightness computation begins with a neural measure of local luminance ratios at edges on the basis of the then-contemporary finding that cells in the retina and cortex compute spatial contrast (Barlow, 1953; Hubel & Wiesel, 1959, 1962; Kuffler, 1953), a quantity that is closely related to the luminance ratio at an edge. But, ignoring for the moment the neurophysiological arguments for multiplying edge ratios, the same computational goal can be achieved by summing the luminance steps at the edges, with luminance now expressed in log units, along a path through the image:
\[
\log L_1 - \log L_n \;=\; \sum_{i=1}^{n-1}\left(\log L_i - \log L_{i+1}\right). \tag{2}
\]
The algorithmic procedures represented by Equations 1 and 2 are mathematically equivalent because multiplying luminance ratios is equivalent to summing luminance differences after converting raw luminance units to log units. 
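The equivalence of Equations 1 and 2 can be illustrated with a short numerical sketch (the three-patch path and its luminances below are hypothetical, chosen only for illustration):

```python
import math

# Hypothetical luminances (cd/m^2) of the patches along a path: target, intermediate, comparison.
luminances = [20.0, 45.0, 90.0]
edges = list(zip(luminances, luminances[1:]))

# Equation 1: chain-multiply the local luminance ratios at successive edges.
ratio_product = 1.0
for near, far in edges:
    ratio_product *= near / far

# Equation 2: sum the steps in log luminance at the same edges.
log_step_sum = sum(math.log10(near) - math.log10(far) for near, far in edges)

# Multiplying ratios and summing log-luminance steps recover the same relative luminance,
# here L_target / L_comparison = 20/90, regardless of the intermediate patch.
assert math.isclose(math.log10(ratio_product), log_step_sum)
print(ratio_product, 10 ** log_step_sum)
```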
In what follows, it will be useful to adopt the logarithmic version of the model. Because this version involves first computing steps in log luminance at edges and then summing these local edge steps over an integration path through the image, I will refer to the principle that it embodies as edge integration. Contrary to Land and McCann, and in light of more recent neurophysiological data, I will argue below that a lightness algorithm that first computes local steps in log luminance across borders and then spatially integrates these steps is more physiologically plausible than the original algorithm that multiplies ratios across space. 
Equation 2 provides a means to compute the relative luminances of any two arbitrarily chosen mondrian patches. By assumption, the patches are all lit by the same spatially homogeneous illuminant, so the reflectance of any given patch will be related to its luminance by a factor that is common to all patches. It follows that Equation 2 can be used to recover the scale of relative reflectances of the patches within the mondrian. But to arrive at unambiguous reflectance estimates (that is, to compute judgments of lightness), something else is needed. That something is to specify the lightness of any one patch in the mondrian. The lightnesses of the other patches will then be automatically determined by the relative reflectance computations specified by Equation 2. The patch whose lightness serves to map the relative reflectance scale computed from Equation 2 to a perceptual lightness scale is referred to as the lightness “anchor.” 
In a later version of his theory, Land (1986a) advocated an approach to the anchoring problem in which the highest relative reflectance computed by retinex is designated as “white” (that is, interpreted to be a surface that has a high reflectance, say 90%). Land adopted this anchoring rule because experiments performed with human observers were thought to be consistent with the idea that the highest luminance in a scene always appears white. Luminance and reflectance are guaranteed to be proportional to one another because of the assumption made by retinex that the surfaces within the mondrian are all identically illuminated. Therefore, the surface that is computed by retinex to have the highest relative reflectance will always correspond to the highest luminance in the input image. An anchoring rule that labels the largest global value in the output of the edge integration process as “white” will therefore be guaranteed to be consistent with the experimental observation that the highest luminance in a scene always appears white. We will revisit the question of whether the highest luminance anchoring rule accurately characterizes perceptual data later in this paper. 
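A minimal sketch of this anchoring step, assuming the conventional 90% reflectance for “white” and purely illustrative relative-reflectance values recovered by edge integration:

```python
# Relative reflectances recovered by edge integration (known only up to a common scale factor).
relative_reflectance = {"paper_1": 0.10, "paper_2": 0.50, "paper_3": 1.00}

# Anchoring rule: the largest value in the representation is labeled "white" and assigned
# ~90% reflectance; every other estimate follows from its ratio to that anchor.
WHITE_REFLECTANCE = 0.90
anchor = max(relative_reflectance.values())
reflectance_estimates = {name: WHITE_REFLECTANCE * value / anchor
                         for name, value in relative_reflectance.items()}

print(reflectance_estimates)  # paper_1 ~ 0.09, paper_2 ~ 0.45, paper_3 = 0.90
```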
Although retinex achieves color and lightness constancy for the limited sets of images that it was designed to deal with, it fails dramatically to account for human lightness percepts in many important situations. One key way in which retinex “gets it wrong” is in predicting that the lightness of a target patch should not be influenced by the luminances of other neighboring patches. In fact, it is well known that a target's lightness is influenced by its luminance contrast with respect to its local surround (Chevreul, 1839/1967). 
Retinex's failure to account for the effects of local spatial context on lightness stems from the fact that, in comparing the luminances of noncontiguous patches using Equation 1 or 2, the luminances of the patches lying between the noncontiguous patches along the edge integration path cancel out of the final computation. Gilchrist and colleagues (1999) introduced the term Type II constancy to refer to lightness constancy with respect to changes in the spatial context in which a surface is viewed, as distinguished from Type I constancy, or constancy with respect to changes in the illumination. Although Type I constancy holds to a high degree in human vision, Type II constancy, as a general rule, does not. Retinex achieves Type I constancy for the types of images that it was designed to take as its input (mondrians, homogeneous illumination), and thus closely mimics human vision in this respect. But retinex also exhibits Type II constancy, a behavior that fails to account for essential characteristics of human lightness perception. 
Edge integration theory applied to lightness matching with disk-annulus stimuli
I became interested in retinex theory while working on a quantitative model of lightness and darkness induction in disk-annulus displays (Rudd, 2001; Rudd & Arrington, 2001; Rudd & Zemach, 2004, 2005). The goal of the present work is to show how that model can be generalized to account for lightness percepts in other contexts, including contexts in which higher-level mechanisms have previously been invoked to account for the data. As a prelude to discussing lightness perception in other contexts, I will first review the model that accounts for lightness judgments made with disk-annulus stimuli. 
In the most basic experiment to which the model has been applied, two disks, each surrounded by an annulus, are presented on the two sides of an LCD monitor. The luminance of the right annulus is varied to influence the appearance of the right (target) disk, whose lightness is measured by having the observer adjust the luminance of the left (match) disk to achieve a lightness match between the two disks. The intensity of the annulus surrounding the match disk remains fixed. Figure 3 summarizes the results of experiments in which the match and target disks were either both luminance increments (Rudd & Zemach, 2005) or both luminance decrements (Rudd & Zemach, 2004) relative to their surrounding annuli. Since the lightness matches made in these studies fall on approximately straight lines when the matches are plotted versus the luminance of the annulus surrounding the target on a log-log scale, the slope of the plot serves as a natural measure of the strength of the induction produced by varying the surround luminance. By this measure, the induction strength for decremental stimuli is 0.70, whereas for increments it is 0.21. Thus, the induction strength for decrements was about 3.5 times the strength for increments in these experiments. Since the disk-annulus stimuli in the two studies were otherwise comparable (the disks and annuli were the same size, with disk diameter 0.7° and annulus width 0.35°; the stimuli were generated on the same monitor; and the experiments took place in the same room at the same viewing distance), the difference in induction strength seems to be strictly attributable to the difference between incremental and decremental stimuli. 
Figure 3
 
Average lightness matches for incremental and decremental disk-annulus stimuli as a function of surround luminance on a log-log scale (data from Rudd & Zemach, 2004, 2005). Colored lines are least-squares linear regression models of the matches, computed separately for incremental and decremental disks. The absolute values of the slopes of these models estimate induction strength (i.e., contrast effect size) for the two classes of stimuli. Solid lines indicate the slopes that would obtain if the observer matches the disks on luminance or disk/annulus luminance ratio. These theoretical match lines correspond to the matches that are predicted by Gilchrist's anchoring theory to hold in the case of incremental and decremental targets, respectively (see text for details). The luminances of the target disk and the annulus surrounding the match disk were (in units of log cd/m2): 0.5 and −0.112 (incremental disks); 0 and 0.6 (decremental disks).
To account for the disk-annulus results, my colleagues and I introduced a quantitative model that can be thought of as a modified version of the retinex edge integration algorithm. Like retinex, our model assumes that disk lightness is computed from a sum of the logarithms of the local luminance ratios computed at the inner and outer edges of the surround annulus:
\[
\Lambda_D \;=\; w_1 \log\!\left(\frac{D}{A}\right) \;+\; w_2 \log\!\left(\frac{A}{B}\right), \tag{3}
\]
where Λ_D denotes the computed disk lightness; D, A, and B represent the luminances of the disk, annulus, and background field; and w1 and w2 are the weights applied to the inner (disk/annulus) and outer (annulus/background) edges. 
Equation 3 is the same thing as a weighted sum of the steps in log luminance from the common background field to either the match or target disk. This equivalence is made manifest by writing Equation 3 in the form
\[
\Lambda_D \;=\; w_1\left(\log D - \log A\right) \;+\; w_2\left(\log A - \log B\right). \tag{4}
\]
The algorithm that we used to fit our data differs from that of retinex in one important way. Retinex assumes that the weights given to all edges are equal, and in fact that all edge weights are equal to 1 (e.g., w1 = w2 = 1). This assumption stems from the fact that retinex multiplies edge ratios across space. Multiplying ratios translates into weighted summation with edge weights equal to 1 when luminance is converted to log units (refer to Equations 1 and 2 above). Contrary to retinex, we had to assume that the weight w2 associated with the outer edge of the surround annulus is smaller than the weight w1 associated with the inner edge in order to fit our lightness matching data. 
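A minimal numerical sketch of Equations 3 and 4 (the luminances and weights below are illustrative, not fitted values) shows why this choice matters: with the retinex weights w1 = w2 = 1, the annulus luminance cancels out of the computation, whereas with w2 < w1 the computed lightness of a decremental disk falls as the annulus luminance rises, as observed psychophysically:

```python
import math

def disk_lightness(D, A, B, w1, w2):
    """Weighted sum of steps in log luminance from background to annulus to disk (Equation 4).

    D, A, B: luminances of the disk, annulus, and background field.
    w1, w2: weights on the disk/annulus and annulus/background edges.
    """
    return w1 * (math.log10(D) - math.log10(A)) + w2 * (math.log10(A) - math.log10(B))

# Retinex-style equal unit weights: the annulus term cancels, so doubling the annulus
# luminance leaves the computed disk lightness unchanged.
print(disk_lightness(D=10, A=20, B=5, w1=1.0, w2=1.0))  # 0.301
print(disk_lightness(D=10, A=40, B=5, w1=1.0, w2=1.0))  # 0.301

# Smaller weight on the remote edge: the same doubling now darkens the decremental disk.
print(disk_lightness(D=10, A=20, B=5, w1=1.0, w2=0.3))  # -0.120
print(disk_lightness(D=10, A=40, B=5, w1=1.0, w2=0.3))  # -0.331
```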
This modification represents a significant departure from retinex theory, both in terms of the original purpose of the theory (i.e., the accurate reconstruction of the physical reflectance scale) and in terms of the theory's ability to mimic human performance. If the theoretical goal is to achieve color constancy, that is, to perfectly reconstruct the pattern of reflectances in the image, then retinex's assumption that equal weights should be applied to all edges is necessary. However, retinex's assumption of equal weights is also responsible for its inability to account for observed failures of Type II constancy, including Chevreul's simultaneous contrast illusion and the results of our experiments with disk-annulus stimuli. If the goal is to model human vision, then the assumption that the edge weights decline with distance improves the model's performance and is in fact necessary to achieve a good fit to the data. 
The idea that lightness is computed from a weighted spatial sum of neural edge-induction signals was first suggested by Reid and Shapley (1988), who modeled lightness as a spatial sum of Michelson contrasts, rather than as a spatial sum of steps in log luminance. In our 2004 study, we tested the Michelson contrast model against our algorithm and found that our model provided a better fit to data. I therefore adopt the “steps in log luminance” formalism here. But, conceptually, the current model is like the Reid and Shapley model in that both models envision lightness computation as the result of a distance-dependent edge integration mechanism. This idea is consistent with the hypothesis that the brain has a built-in assumption that edges in close proximity should be spatially grouped for the purpose of computing lightness, all other things being equal (Bressan, 2006b). 
In our 2004 study using decremental disks, we found a substantial monotonic falloff in the magnitude of the weight ratio w2/w1 as the annulus width increased from 0.06° to 2.48°. This falloff is explained in the model by the tendency of edge weights to decline with distance. Extrapolating the observed falloff in induction strength to annulus widths beyond the range that we investigated leads to an estimate of about 10° for the range over which edge integration operates (Rudd & Zemach, 2004). In natural images, distances between local contrast elements do not typically exceed 10° (e.g., van Hateren & Ruderman, 1998; van Hateren & van der Schaaf, 1998). This suggests that the reliance on spatial integration of local steps in log luminance to compute lightness is likely the norm in human vision. 
Distance-dependent edge integration meets the lightness–darkness asymmetry
Rudd and Zemach (2004, 2005) showed that the relative contributions of the disk/annulus and annulus/background edges to the perceptual computation of disk lightness can be estimated from the slopes of lightness matching plots using the formula w = 1 + β, where w = w2/w1 and β is the slope of the matching plot. Applying this formula to the plots in Figure 3 yields the weight ratio estimates wdec = 0.30 (decremental disks) and winc = 0.79 (incremental disks). That is, the perceptual weight given to the luminance step at the remote edge in computing the disk lightness is about 30% as large as the weight given to the inner edge in the case of decrements; and about 79% as large in the case of increments. The different quantitative results for increments and decrements contradict the tenet of retinex that all edge weights should be equal, in addition to other commonly accepted ideas about lightness processing. For example, Wallach's ratio principle asserts that the lightness of decremental disks depends solely on the luminance ratio between the disk and its annular surround (Wallach, 1948, 1963, 1976). But if the ratio principle held for our decremental displays, we would expect to find that wdec = 0. Gilchrist's anchoring theory (Gilchrist, 2006; Gilchrist et al., 1999) adopts the ratio rule and makes the additional claim that the appearance of incremental disks should be unaffected by surround luminance (see Figure 3 caption and further discussion of anchoring theory below). But if the lightness of incremental disks was unaffected by changes in surround luminance, we would expect to find that winc = 1. Neither prediction is verified by our analyses. These violations have been discussed in more detail in earlier papers (Rudd, 2010; Rudd & Popa, 2007; Rudd & Zemach, 2004, 2005, 2007). 
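The relation w = 1 + β used above follows from the matching condition itself. A brief sketch (assuming, as in these experiments, that the same weight ratio w = w2/w1 applies on the match and target sides and that only the luminance of the annulus surrounding the target disk is varied; D and A denote disk and annulus luminances, with subscripts M and T for the match and target sides, and B is the common background):

```latex
w_1\log\frac{D_M}{A_M} + w_2\log\frac{A_M}{B}
  \;=\; w_1\log\frac{D_T}{A_T} + w_2\log\frac{A_T}{B}
\;\Longrightarrow\;
\log D_M \;=\; \log D_T + (w-1)\log A_T + (1-w)\log A_M ,
\qquad w \equiv \frac{w_2}{w_1}.
```

Since only the target annulus luminance A_T varies, the slope of the log-log matching plot is β = w − 1; rearranging gives w = 1 + β.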
In what follows, I argue that the difference in the weight ratios estimated from the data associated with incremental versus decremental disks has a natural interpretation in terms of a neural mechanism that assigns different weights to edges based on the edge contrast polarity (the direction of the luminance step relative to the target location) and then spatially integrates the weighted edge signals to compute lightness. In the simplest version of such a model, the dependence of the edge weights on the edge contrast polarity and the spatial falloff in edge weights with distance from the target would combine as independent factors to determine the overall weight assigned to an edge. According to this idea, the weight w(z, p) assigned to an edge is the product of two factors: one of which depends on the distance z of the edge from the target, and one of which depends on the edge contrast polarity p. That is,
\[
w(z, p) \;=\; \lambda_p\, \nu(z), \tag{5}
\]
where λp is a number that depends only on the edge contrast polarity and ν(z) is a monotonically decreasing function of distance. 
If the independent factors model (Equation 5) is correct, then the average weight w2 associated with the outer edge of the annulus should have been equal in our studies with decremental (Rudd & Zemach, 2004, Experiment 1) and incremental (Rudd & Zemach, 2005) disk-annulus stimuli because the outer edges in the two studies were the same distance from the target (z2 = 0.35°) and had the same contrast polarity. This contrast polarity will be denoted by the symbol p2 = +, where the subscript 2 signifies that it is the contrast polarity associated with the outer annulus edge and the + sign is shorthand for “the light side of the edge points towards the target.” 
The inner edges of the annuli were also the same distance from the target (z1 = 0°) in the two experiments, but the contrast polarities of the inner edges differed. If the independent factors model (Equation 5) is correct, then any difference in the inner edge weight w1 in the two studies must have been due to a difference in the contrast polarity p1 of the disk/annulus edge, which assumes the value + when the disk is a luminance increment vis-à-vis the annulus, and − when the disk is a luminance decrement. This would, in turn, imply that there must be a difference in the magnitudes of the contrast polarity-dependent factors λ+ and λ− associated with edges whose light and dark sides point towards the target. 
On the basis of this independent-factors model, we compute
\[
\frac{w_{\mathrm{dec}}}{w_{\mathrm{inc}}} \;=\; \frac{w_2/w_1^{\mathrm{dec}}}{w_2/w_1^{\mathrm{inc}}} \;=\; \frac{w_1^{\mathrm{inc}}}{w_1^{\mathrm{dec}}} \;=\; \frac{\lambda_+\,\nu(0)}{\lambda_-\,\nu(0)} \;=\; \frac{\lambda_+}{\lambda_-}. \tag{6}
\]
Equation 6 provides us with a means to estimate the relative strength of edge-based lightness and darkness induction from the lightness matches made in our experiments with incremental and decremental disk-annulus displays, as follows:
\[
\frac{\lambda_+}{\lambda_-} \;=\; \frac{w_{\mathrm{dec}}}{w_{\mathrm{inc}}} \;=\; \frac{0.30}{0.79} \;\approx\; 0.38. \tag{7}
\]
Equation 7 tells us that the weight associated with an edge whose light side points towards the target is only about 38% as large as the weight associated with an edge whose dark side points towards the target, after the effects of distance have been factored out. 
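The two factors of Equation 5 can be combined in a small computational sketch. The exponential form of the distance falloff ν(z), its 10° space constant, and the absolute scale of the polarity factors are illustrative assumptions (only the ratio λ+/λ− ≈ 0.38 and the monotonic decline of ν(z) over roughly a 10° range are constrained by the data discussed here):

```python
import math

def edge_weight(z_deg, polarity, lam_plus=0.38, lam_minus=1.0, space_const_deg=10.0):
    """Independent-factors edge weight (Equation 5): w(z, p) = lambda_p * nu(z).

    z_deg: distance of the edge from the target, in degrees of visual angle.
    polarity: '+' if the light side of the edge points towards the target, '-' otherwise.
    The exponential falloff and its space constant are assumptions for illustration only.
    """
    lam = lam_plus if polarity == "+" else lam_minus
    nu = math.exp(-z_deg / space_const_deg)  # assumed monotonically decreasing with distance
    return lam * nu

# At the same distance, a dark-inside edge outweighs a light-inside edge by about 1/0.38.
print(edge_weight(0.35, "-") / edge_weight(0.35, "+"))  # ~2.6
```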
It should be emphasized that the estimate of 38% was derived from matching data that has been averaged across observers. Thus, the appropriate conclusion to draw from Equation 7 is that edge-based lightness induction is only about 38% as strong as edge-based darkness induction, on average. The matching slopes obtained from our experiments with incremental disk-annulus stimuli, in particular, exhibited a high degree of intersubject variability (Rudd, 2010; Rudd & Zemach, 2005), so the estimate of the relative edge weights associated with light-inside versus dark-inside edges (Equation 7) is subject to error due to individual differences, in addition to any other sources of noise that may contribute to the measurement. The individual slopes for decrements were much less variable than the slopes for increments (Rudd & Zemach, 2004). 
The problem of intersubject variability has been addressed in an earlier paper in which I showed that some of the intersubject differences could be explained on the basis of cognitive factors that can be manipulated by instructions (Rudd, 2010). There I showed that it is possible to incorporate the influence of cognitive factors into the edge integration model by allowing for top-down control of the edge weights. In what follows, I will ignore the issue of intersubject variability and focus on the average data. The modeling carried out in the present paper should be viewed as complementary to the modeling presented in the earlier paper. The current version of the edge integration model can be understood in its entirety only from the content of both papers. 
Although the quantitative conclusion that the inherent strength of edge-based lightness induction is only about 38% as large as that of edge-based darkness induction applies only to data averaged across observers, the qualitative conclusion that lightness induction is less strong than darkness induction holds even in the case of individual observers. This conclusion follows from the fact that the steepest matching plot slope for any of the three observers in our experiment using incremental disk-annulus stimuli was only about half as steep as the slopes obtained for the four observers in our experiment with decremental disk-annulus stimuli (Rudd & Zemach, 2004, 2005), which did not vary substantially across observers. Thus, the weights associated with light-inside disk/annulus edges were always less than the weights associated with dark-inside disk/annulus edges, even allowing for any top-down influences on the edge weights that might have been operating in these experiments. 
My working hypothesis is that both the strong quantitative conclusion that edge-based lightness induction is only about 38% as strong as edge-based darkness induction, on average, and the weaker qualitative conclusion that edge-based lightness induction is generally weaker than edge-based darkness induction should hold for stimuli other than disk-annulus patterns and thus may form part of a more general quantitative model of lightness computation. The following experiment was conducted to test these theoretical predictions. 
Experiment: Lightness induction is weaker than darkness induction
Method
The experiment was conducted in a dimly lit room, the walls of which were covered in matte black material. Stimuli were presented on a 22-inch LCD flat panel monitor (Apple Cinema Display, Apple, Inc., Cupertino, CA) under the control of an experimental program written in the MATLAB programming language (MathWorks, Natick, MA) using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). 
The visual stimulus is diagrammed in Figure 4 (not to scale). Two identical squares (3.16 cd/m2, 1.17° wide) were presented simultaneously on the left and right sides of the monitor, each surrounded by a higher-luminance frame. The luminances of the two frames were equal (10 cd/m2), but the frame surrounding the target square (right side) was considerably narrower than the frame surrounding the match square (left side) (0.19° compared to 1.78°). The center-to-center distance between the squares was 8.5°. The observer viewed the display binocularly at a distance of 0.87 m. The position of the observer's head was fixed by the use of a chin rest; otherwise, free viewing was allowed. 
Figure 4
 
Matching display used in the current experiment. Two identical decremental square patches were surrounded by solid frames having identical luminances, but different widths. The width of the frame surrounding the right (target) square was 0.19°; the width of the frame surrounding the left (match) square was 1.78°. The luminance of the background field was varied over a range that included background luminances lower than the luminances of both squares; background luminances in between the luminances of the squares and frames; and background luminances greater than the frames' luminance. At each of 12 background luminance values, the observer adjusted the match square luminance to match the two squares in lightness.
The luminance of the background field, against which the squares and frames were presented, was varied over the range 1.7–31.6 cd/m2 in 12 steps of equal RGB units. At each background luminance, the observer matched the two squares in lightness by adjusting the match square luminance using a control device designed for digital film editing (ShuttlePRO Model SP-JNS, Contour Design, Inc., Windham, NH). 
Two observers ran the experiment in separate sessions. Each session took about 1 hour, but the observers were free to work at their own pace. Within each block of trials, each of the 12 background luminances was presented three times, for a total of 36 trials per block. The order of the 36 conditions was randomized independently within each block. Each observer completed six blocks of trials. Thus, in total, each observer completed 18 trials at each of the 12 background luminances. 
The study protocols were approved by the Institutional Review Board at the University of Washington and were in compliance with the Declaration of Helsinki. 
Model predictions
In order for changes in the background field luminance to influence the lightness matches made between the match and target squares, it was necessary that there be an asymmetry in the squares or surround frames on the two sides of the display. Otherwise, any change in the background luminance would have affected the target and match square lightnesses equally. The required asymmetry was produced by making the frame widths on the two sides of the display different. 
According to the distance-dependent edge integration theory, the condition that must hold in order for the squares to match in lightness is that the weighted sums of the directed steps in log luminance at the inner and outer frame edges are the same on the two sides of the display. This condition is expressed by the following equation:
\[
w_1\!\left(\log S_M - \log F_M\right) + w_{2M}\!\left(\log F_M - \log B\right) \;=\; w_1\!\left(\log S_T - \log F_T\right) + w_{2T}\!\left(\log F_T - \log B\right), \tag{8}
\]
where SM and ST are the luminances of the match and target squares; FM and FT are the luminances of the frames surrounding the match and target squares; B is the luminance of the background field; w1 is the weight associated with the square/frame edge (identical on the two sides, because the squares and frame luminances are identical); and w2M and w2T denote the weights associated with the outer frame edge on the match and target sides of the display, respectively. 
Equation 8 was solved to obtain the following expression for the model observer's match square setting in log units:
\[
\log S_M \;=\; \log S_T \;+\; \left(w_T - 1\right)\log F_T \;+\; \left(1 - w_M\right)\log F_M \;-\; \left(w_T - w_M\right)\log B, \tag{9}
\]
where the symbols wM and wT denote the weight ratios w2/w1 on the match and target sides of the display. 
Equation 9 predicts that a log-log plot of the observer's match square settings versus the background luminance will fall on a straight line having the slope −(wT − wM). The results of previous experiments lead us to expect that the edge weights will decrease in magnitude as a function of the distance of the edge from the square whose lightness is being computed. This leads to the further prediction that wT > wM, since the outer edge of the frame is closer to the square on the target side of the display. Thus, distance-dependent edge integration theory alone, without any supplementary assumptions about the relative strengths of lightness and darkness induction, predicts that the log-log plot of match square settings versus background field luminance will be a straight line having a negative slope. 
The independent factors model assumes, more specifically, that the distance of an edge from the target and the edge contrast polarity combine as independent factors to determine the contribution of that edge to the target lightness. The results obtained in the previous section by applying the independent factors model to the data from our disk-annulus experiments suggest that the weights assigned to edges are larger when the dark side of the edge points towards the target, once the effects of distance are controlled. On this assumption, the weight ratios wM and wT in Equation 9 are predicted to be larger in the present experiment when the background luminance is greater than the frames' luminance than when the background luminance is less than the frames' luminance. Thus the independent factors model, in particular, predicts that the absolute value of the slope of the matching plot will be greater when the background field luminance exceeds the common luminance of the target and match frames than when the background field luminance is less than the luminance of the frames. In other words, the slope of the data plot should suddenly increase when the contrast polarity of the outer edges of the match and target frames flips from light-inside to dark-inside. 
Figure 5 illustrates the model predictions. On a log-log plot, the match square settings are expected to fall on a straight line having a relatively shallow slope for background luminances that are less than the frames' luminance, and on another straight line having a somewhat steeper slope for background luminances that are greater than the frames' luminance. A discontinuity in the plot is predicted to occur at the point where the background luminance transitions from being less than, to greater than, the frames' luminance. This prediction is expected to hold for each experimental observer individually. 
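The polarity-dependent prediction can be illustrated numerically with Equation 9 (the weight ratios below are illustrative choices, not fitted parameters; they are simply scaled by the assumed dark-inside/light-inside factor of 1/0.38 when the background becomes brighter than the frames):

```python
# Illustrative weight ratios w = w2/w1. The target-side outer edge is closer to its square,
# so w_T > w_M; when the background exceeds the frames' luminance, both outer edges flip
# to dark-inside, which scales both ratios up by the same assumed factor of 1/0.38.
w_M_light, w_T_light = 0.10, 0.30            # background dimmer than the frames (light-inside)
scale = 1.0 / 0.38                           # assumed dark-inside vs. light-inside weight factor
w_M_dark, w_T_dark = w_M_light * scale, w_T_light * scale

def matching_slope(w_T, w_M):
    """Slope of log match luminance versus log background luminance predicted by Equation 9."""
    return -(w_T - w_M)

slope_light = matching_slope(w_T_light, w_M_light)  # shallow negative slope
slope_dark = matching_slope(w_T_dark, w_M_dark)     # steeper negative slope
print(slope_light, slope_dark, slope_light / slope_dark)  # slope ratio = 0.38 by construction
```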
Figure 5
 
Lightness matching predictions of the distance-dependent edge integration model supplemented by the rule that edge weights are larger for edges whose dark sides point towards the target than for edges whose light sides point towards the target. The model predicts (1) that the match settings will decrease linearly as a function of background luminance on a log-log scale, and (2) that the absolute value of the slope of the linear function will be larger when the background luminance exceeds the common luminance of the target and match frames than when it is less than the frames' luminance.
An additional prediction regarding the relative values of the matching plot slopes for background luminances that are either less than or greater than the frames' luminance can be made on the basis of the quantitative analysis presented in the previous section. That analysis suggested that a properly parameterized version of the independent factors model would assign edge weights such that edges whose light sides point towards the target receive weights only about 38% as large as the weights assigned to edges whose dark sides point towards the target, after controlling for distance. A version of the independent factors model in which edge weights are parameterized according to this rule predicts that the matching plot slope for background luminances less than the common frames' luminance should be about 38% as large as the matching plot slope for background luminances greater than the frames' luminance. It should be noted that this prediction applies only to average data because the statistical estimate of 38% for the strength of lightness induction relative to the strength of darkness induction was obtained by averaging the disk-annulus matching data across observers. 
The actual experimental results are shown in Figure 6. Two of the model predictions are clearly verified for both subjects. First, the log-log plots of the matches versus the background luminance are approximately linear both for background intensities less than the frames' luminance and for background intensities greater than the frames' luminance. Second, the slope is steeper for background intensities greater than the frames' luminance. The success of the third, quantitative, prediction is less clear because the ratio of the least-squares slope estimates for low versus high backgrounds differs for the two subjects. For AH, the slope calculated over backgrounds whose luminances are less than the frames' luminance is about 0.31 times as large as the slope calculated over backgrounds whose luminances are greater than the frames' luminance. For JA, it is about 0.21 times as large. The average ratio is 0.26. The average results of the present experiment are thus consistent with a version of the independent factors model in which the weights assigned to light-inside edges are about 26% as large as the weights assigned to dark-inside edges, rather than 38%. 
Figure 6
 
Dependence of lightness matches on the luminance of the remote background field. The data from each observer have been fitted with least-squares linear regression models, separately for background luminances above and below the frames' luminance. The linear model accounts for 86% of the variance in the lightness matches, averaged across observers and background luminance ranges. The model prediction that the absolute slope of the matching plot should be larger for backgrounds exceeding the frames' luminance is confirmed, independently for each observer.
Given the high degree of intersubject variability both in the current experiment and in our earlier experiment with incremental disk-annulus stimuli, on which the quantitative prediction depends, the observed slope ratio of 26% is perhaps not too far off the predicted ratio of 38%. Nevertheless, there are potential explanations of the discrepancy other than between-subjects variability. Another possibility is that the hypothesis that distance and contrast polarity combine independently to control the edge weights is wrong. Perhaps these two factors instead interact to determine the edge weights. In that case, the logic used to make the quantitative predictions for the current experiment is flawed. Yet another possibility is that the lightness parameters that hold for disk-annulus stimuli are not applicable to squares and frames. Further experiments would be required to rigorously test these alternative hypotheses. 
Despite these caveats, I review in the following sections results from other experimental paradigms that independently support the hypothesis that the average strength of edge-based lightness induction is approximately 26%–38% that of edge-based darkness induction. Overall, the data to be reviewed are consistent with a version of edge integration theory in which lightness induction is about 1/3 as strong as darkness induction—in other words, about in the middle of the range 26%–38% estimated from the experimental results discussed to this point. 
Edge integration theory and Stevens' brightness law
In this section I establish a theoretical link between S. S. Stevens' power law model of brightness (perceived luminance) and the edge integration model of lightness computation described above. Stevens' law asserts that brightness is related to physical intensity (luminance) by a power law of the form
\[
\Psi \;=\; k\,L^{\gamma}, \tag{10}
\]
where Ψ denotes brightness; L is luminance; and k and γ are constants. The parameter γ is the brightness law exponent. A large body of previous work indicates that the brightness law exponent for a small light source in an otherwise dark room is 1/3 (Marks, 1974, 1977; Onley, 1961; J. C. Stevens, 1967; J. C. Stevens & Marks, 1999; S. S. Stevens, 1953, 1961, 1967, 1972, 1975). That is, brightness varies in proportion to the cube root of physical intensity under such conditions. 
Of course, matching two stimuli on the basis of their brightness is the same thing as matching them on the logarithm of brightness. If brightness is related to luminance by the power law (Equation 10), then the logarithm of brightness is related to the logarithm of luminance by the linear function
\[
\log \Psi \;=\; \log k \;+\; \gamma \log L. \tag{11}
\]
Note that Equation 11 is similar in form to the expression that has previously been used to model the lightness of a disk surrounded by an annulus, both of which are presented against a larger background field:
\[
\Lambda_D \;=\; w_1 \log\!\left(\frac{D}{A}\right) \;+\; w_2 \log\!\left(\frac{A}{B}\right) \;+\; \Omega, \tag{12}
\]
where the disk lightness ΛD is defined by a luminance match that will be influenced by the context in which the match stimulus is viewed. The effects of spatial context are represented by the unknown factor Ω in Equation 12. 
The special case in which an incremental disk is presented against a homogeneous background field (e.g., in a dark room) is modeled by setting A = B. In that case, Equation 12 simplifies to
\[
\Lambda_D \;=\; \left(\Omega - w_1 \log B\right) \;+\; w_1 \log D, \tag{13}
\]
where terms have been rearranged to show that Equation 13 has the same mathematical form as Equation 11, the expression that describes the disk brightness under the same experimental conditions. Since the constant k in Equation 11 is a free parameter, Equations 11 and 13 can be set equal by a proper choice of constants, which raises the question: Do these two expressions model the same perceptual phenomenon? 
I propose that the answer must be “yes” because there is no way to distinguish lightness from brightness in the special case of a small incremental disk viewed in an otherwise dark room. While it is true that the disk may appear self-luminous—and therefore bright—both light sources and reflective surfaces can appear self-luminous. Bonato and Gilchrist (1994, 1999) studied self-luminosity and concluded that luminosity—or at least the luminosity threshold—belongs to the same perceptual scale as lightness. So perceived self-luminosity is not the same thing as brightness. 
To put it another way, although lightness and brightness have different definitions in the mind of the experimenter (the first is perceived reflectance and the second is perceived luminance), the two percepts may be phenomenologically indistinguishable to the observer in the special case of a small incremental spot viewed in the dark. If it is granted that Equation 13 models the same phenomenon as Equation 11, then the exponent of Stevens' brightness law for the small spot must be the same thing as the weight w1 assigned to the disk edge in the lightness model when the disk is a luminance increment. The equivalence of Equations 11 and 13 under these special (admittedly rarefied) experimental conditions is thus a kind of “Rosetta Stone” that allows us to begin to fix the parameters for the edge integration model from our knowledge of the exponent of Stevens' brightness law. 
To make further progress in this direction, we note that the equivalence of the brightness law exponent and the disk edge weight in the edge integration model applies only to the case of incremental disks. That is, the weight w1 in the lightness model is 1/3 only when the disk edge has a light-inside contrast polarity. The data presented in previous sections suggest that the weight associated with dark-inside edges is roughly 2.6–3.8 times as large as the weight associated with light-inside edges (these values are the inverses of 38% and 26%). Therefore, the same weight cannot apply to both light-inside and dark-inside edges. If the weight associated with light-inside edges is 1/3, then the weight associated with dark-inside edges must be about 1. Thus, in order for the theory developed to this point to be self-consistent, the perceived darkness of a small, decremental spot viewed against a sufficiently large white background should be linearly related to the absolute value of the spot's decremental luminance relative to the background luminance. I am currently testing this model prediction. 
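The arithmetic behind the figure of “about 1” for dark-inside edges, taking the light-inside weight of 1/3 and the 2.6–3.8 ratio as given, is simply:

```latex
w_1^{\mathrm{dark}} \;=\; \frac{\lambda_-}{\lambda_+}\, w_1^{\mathrm{light}}
\;\approx\; (2.6\ \text{to}\ 3.8)\times\frac{1}{3}
\;\approx\; 0.87\ \text{to}\ 1.27 \;\approx\; 1 .
```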
Insulation and compression in the staircase Gelb effect explained by edge integration theory and the lightness–darkness asymmetry
In this section, I show how the edge integration model can be extended to account for lightness judgments in the staircase Gelb paradigm (Cataliotti & Gilchrist, 1995; Gilchrist & Cataliotti, 1994; Gilchrist et al., 1999). The material presented in this section has previously been published in brief abstract form (Rudd, 2009). 
In a classic study, Gelb (1929) demonstrated that a low reflectance paper—one that appears black under normal viewing conditions—appears white when viewed in isolation in a bright spotlight within an otherwise dimly lit room. But when an actual white paper is introduced into the spotlight next to the black paper, the white paper appears white and the black paper is seen to be darker by comparison. Cataliotti and Gilchrist (1995; see also Gilchrist & Cataliotti, 1994) replicated Gelb's work and performed several novel experimental variations on it. Their main goal was to determine whether the darkening of the black paper that occurs when the white paper is added can be explained by local contrast alone, or whether the effect is produced by a global perceptual mechanism that compares the luminance of each paper to the highest luminance paper within the spotlight. 
They began by placing a series of papers having different reflectances in a spotlight, arranged in order from highest to lowest reflectance, and thus from highest to lowest luminance. Observers made Munsell matches to each of the Gelb papers to measure the paper lightnesses. Figure 7 shows the average matches for a version of the experiment in which the series contained five papers. The values on the x-axis are the paper luminances expressed as ratios relative to that of the highest luminance paper. The latter had a reflectance of 90% and a luminance of 124.2 cd/m2. The data are plotted on a log-log scale. 
Figure 7
 
Lightness matches made to the papers in a five-paper staircase Gelb display using a Munsell chart. The range of matches made when the papers are viewed in a spotlight is compressed relative to the actual luminance range of the papers. Luminance values on the x-axis are reported as luminance ratios relative to the luminance of the highest reflectance paper (124 cd/m2; 90% reflectance). Data from Cataliotti and Gilchrist (1995); figure from Gilchrist et al. (1999), used with permission.
Two aspects of these results are of special relevance here. First, the paper with the highest reflectance was judged to be white, consistent with the earlier findings of Gelb. Second, the other Munsell matches exhibited a pattern of lightness errors such that the range of judged reflectances was substantially reduced relative to the range of actual reflectances. This second result was a new finding, which the authors referred to as “lightness compression.” 
Munsell papers are constructed to be equidistant on the scale of perceived lightness (Munsell, 1905; Newhall, Nickerson, & Judd, 1943). So the fact that the Munsell matches in Figure 7 fall on a straight line on a log-log plot implies that steps of equal reflectance ratios correspond to equal lightness ratios. This means that lightness is related to luminance by a power law. If the size of the steps in log lightness accurately mirrored the corresponding physical steps in log luminance, the exponent of the power law would be 1 and lightness would be linearly related to both luminance and reflectance. Instead, a step in log reflectance (or log luminance) produced a smaller step in log lightness, implying that the power law relating lightness to luminance has an exponent smaller than 1. 
In Figure 8, I have replotted the Munsell matches on a linear luminance scale and fitted the data with a power law regression model. The exponent of the least-squares model is 1/3: the same exponent that governs Stevens' brightness law. This striking correspondence between the power laws governing brightness and the lightness of the Gelb papers seems unlikely to be a coincidence. Again, it suggests that brightness and lightness share a common underlying neural mechanism whose response magnitude varies as the cube root of luminance. 
Figure 8. Replot on a linear scale of the compression data shown in Figure 7. The lightness matches have been fitted with a power law regression model. The power law exponent of the least-squares model is 1/3, the same exponent that characterizes Stevens' power law model of brightness. Data from Cataliotti and Gilchrist (1995).
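A minimal sketch of this kind of power-law fit is given below. The luminance ratios and matches are hypothetical stand-ins, not the published data; the point is only that a power law is linear in log-log coordinates, so its exponent can be recovered as a log-log regression slope.

```python
import numpy as np

# Hypothetical luminance ratios and matches (NOT the Cataliotti & Gilchrist data).
# A power law M = k * R**a is linear in log-log coordinates:
#   log M = a * log R + log k, so the exponent a is the log-log slope.
lum_ratio = np.array([0.03, 0.10, 0.30, 0.60, 1.00])      # paper luminance / highest luminance
match = lum_ratio ** (1.0 / 3.0)                          # matches following a 1/3 power law

slope, intercept = np.polyfit(np.log(lum_ratio), np.log(match), 1)
print(f"estimated exponent = {slope:.3f}, scale factor = {np.exp(intercept):.3f}")
# For these illustrative values the recovered exponent is ~1/3: equal steps in
# log luminance map onto smaller, but still equal, steps in log lightness.
```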
Recall that the original goal of Cataliotti and Gilchrist's (1995) study was to pit a local contrast explanation of the Gelb effect against a theory based on a global comparison of each paper's luminance to the highest luminance paper. To this end, Cataliotti and Gilchrist reordered the papers to see whether the proximity of a target paper to the highest luminance paper affected the paper's lightness. It did not, which argues against the local contrast explanation. 
The authors specifically considered edge integration as a possible explanation of their results. Since Reid and Shapley's (1988) results had indicated that the weights given to edges decline with distance from the target, Cataliotti and Gilchrist reasoned that edge integration would predict that lightness should be affected by the ordering of the papers, an effect that was not found. Thus, they concluded that lightness depends on a global process that compares each paper's luminance to that of the highest luminance paper. The theory that they put forth to explain their results—"anchoring theory"—is described in more detail and critiqued below. Here, I simply note that all of the other experimental findings reviewed to this point are consistent with an edge integration theory in which the edge weights decline with distance. 
In another experiment, Cataliotti and Gilchrist surrounded the five Gelb papers with a single white frame. The frame greatly reduced the magnitude of the compression effect, bringing the lightness scale roughly in line with the physical reflectance scale (Gilchrist & Cataliotti, 1994). The authors speculated that the frame served to insulate the papers from being perceptually compared to other surfaces in the larger illumination framework corresponding to the room outside of the spotlight. In the absence of the white frame, the authors reasoned, such comparisons might influence the paper lightnesses and thus potentially explain the compression effect. However, they did not offer an explanation of why lightness compression, in particular, should result when the papers were compared to surfaces within the larger illumination framework of the room. 
The Gelb results compose an important data set that any candidate theory of lightness perception should be able to explain. In what follows, I propose an account of compression and insulation that can explain them and that differs from the account given by anchoring theory. The theory is based on distance-dependent edge integration, the class of explanation that Cataliotti and Gilchrist (1995) believed their experiments to have ruled out. I show how a two-factor distance-dependent integration theory, in which the strengths of lightness and darkness induction differ quantitatively, can account for the staircase Gelb data—including the compression and insulation effects—in a quantitatively exact way. 
In the version of edge integration theory that Cataliotti and Gilchrist considered and rejected, the path of edge integration was assumed to originate at the location of the highest luminance paper and propagate from there to the location of each paper in the Gelb series. This is consistent with Land and McCann's assumption that the only edge integration paths that contribute to lightness computation are ones that begin on a highest luminance mondrian patch. The edge integration theory proposed here instead assumes that the edge integration path begins within the region of the common background that surrounds all five Gelb papers and propagates from there to the location of each individual paper by the shortest route. 
When the white frame is absent, only a single edge is encountered along the path from the background to any given Gelb paper. Therefore, the only edges that contribute to the lightness computation are the edges that form the border between the paper whose lightness is being computed and the dark background against which the papers are presented. Despite the fact that each edge integration path from the background to a paper crosses only a single edge, the edge integration model proposed here—by assumption—integrates any steps in log luminance that are encountered along the edge integration path. When no white frame surrounds the Gelb papers, there is only one step in log luminance to "integrate." On the other hand, when the Gelb papers are surrounded by the white frame, there are two steps to integrate: the steps at the inner and outer edges of the white frame. In the case of the disk-annulus stimuli discussed previously, there are also two edges to integrate along the path from the common background to each disk in the display: namely, the inner and outer edges of the disk's surround annulus. The paths over which edge integration is presumed to occur in the theory proposed here are illustrated in Figure 9 for each of the two display types. 
Figure 9. Integration paths used to calculate target lightness in two display configurations. (a) Disk-annulus stimulus. (b) Staircase Gelb stimulus. For each configuration, edge integration is carried out only along paths leading from the common background or surround to the target, as indicated by the red arrows. Red lines indicate edges that take part in the edge integration calculation. Edges marked with red X's do not take part in the edge integration calculation.
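The path-based rule illustrated in Figure 9 can be summarized as a weighted sum of the signed log-luminance steps crossed on the way from the common background to the target. The sketch below, with hypothetical polarity-dependent weights of 1/3 and 1 and the distance dependence of the weights left out for simplicity, is one way to express that rule.

```python
import math

def log_lightness(luminances_along_path, w_light=1.0/3.0, w_dark=1.0):
    """Weighted sum of signed log-luminance steps along a path that starts on
    the common background and ends on the target region. Steps whose light
    side points toward the target get weight w_light; steps whose dark side
    points toward the target get weight w_dark. Distance effects are ignored."""
    total = 0.0
    for prev, curr in zip(luminances_along_path, luminances_along_path[1:]):
        step = math.log(curr / prev)                  # positive: luminance rises toward target
        weight = w_light if step > 0 else w_dark
        total += weight * step
    return total

# No white frame: one step, from the dark common background straight to a paper.
print(log_lightness([1.0, 30.0]))
# With a white frame: two steps, background -> white frame -> paper.
print(log_lightness([1.0, 100.0, 30.0]))
```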
The proposed edge integration theory accounts for the compression and insulation effects by taking into consideration the different weights associated with edges whose contrast polarities are light-inside or dark-inside. When the white frame is absent, the common surround has a luminance that is less than the luminance of each Gelb paper. The direction of the step in log luminance in going from the dark surround to each paper is from dark to light. When the white frame is present, the step in log luminance from the white frame to each Gelb paper is from light to dark. In particular, I argued that the weight given to each light-inside step is 1/3 and the weight given to each dark-inside step is 1. 
In previous sections, I argued that edge weights differ depending on whether the light or dark side of the edge points towards the target. Therefore, when the white frame is present, the weight associated with the edge between the white frame and each Gelb paper is 1. A weight of 1 means that the visual system accurately represents the magnitude of the step in log luminance in going from the white frame to each paper. This is the same thing as saying that the reflectance ratio between each paper and the common white surround is represented veridically. It follows that the lightnesses of the five papers should also be related to one another by ratios that are the same as the actual physical reflectance ratios of the papers. This prediction—the prediction of a veridical lightness scaling—is consistent with the results obtained by Gilchrist and Cataliotti (1995) when the white frame was present. 
Contrary to the "insulation" theory presented by Cataliotti and Gilchrist (1995) to explain the effect of the white border, the model proposed here assumes that the white frame also serves to perceptually integrate the papers with the larger illumination framework of the room. Edge integration is assumed to occur across both the inner and outer borders of the frame, just as it occurs across the inner and outer borders of each annulus in a disk-annulus display. Nevertheless, the perceptual integration of the papers with the larger framework of the room is expected to be weaker than the integration of the papers with the local white surround, simply because edge weights decay with distance. However, the fact that the outer frame edge contributes to the edge integration computation does not alter the conclusion that the lightnesses of the Gelb papers scale veridically with respect to one another. The outer frame edge simply adds a pedestal effect to the lightness computation that increases each of the paper lightnesses by the same common factor. 
When the white frame is absent, the weight associated with the step in log luminance going from the dark surround to each Gelb paper is 1/3. The lightness of each paper is therefore related to its luminance ratio with respect to the common surround by a power law having an exponent equal to 1/3. As a result, the lightnesses of any two papers relate to one another by the 1/3 power of their luminance ratio. The model thus explains why the lightnesses of the Gelb papers are related to one another by a 1/3 power law of luminance when the white frame is absent, but scale veridically when the white frame is present. The same edge integration model that accounts for these compression and insulation effects also explains the lightness matching results from the disk-annulus and square-frame experiments discussed earlier, and it does this in a quantitatively exact way using a consistent set of assumptions about the contrast polarity dependence of the edge weights and the rules governing the paths over which edge integration is carried out. 
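A short worked example with hypothetical luminances shows how the same polarity-dependent weights produce compression without the frame and veridical scaling with it.

```python
import math

# Worked example of the compression/insulation account above (hypothetical numbers).
# Two Gelb papers whose luminances differ by a factor of 8.
L_paper_hi, L_paper_lo = 80.0, 10.0
L_background, L_frame = 1.0, 100.0
w_light, w_dark = 1.0 / 3.0, 1.0      # assumed light-inside and dark-inside weights

# No frame: a single light-inside step from the dark background onto each paper.
no_frame_hi = w_light * math.log(L_paper_hi / L_background)
no_frame_lo = w_light * math.log(L_paper_lo / L_background)

# White frame: a light-inside step onto the frame, then a dark-inside step onto the paper.
frame_hi = w_light * math.log(L_frame / L_background) + w_dark * math.log(L_paper_hi / L_frame)
frame_lo = w_light * math.log(L_frame / L_background) + w_dark * math.log(L_paper_lo / L_frame)

print("log-lightness difference, no frame:  ", no_frame_hi - no_frame_lo)   # (1/3) * ln 8: compressed
print("log-lightness difference, with frame:", frame_hi - frame_lo)         # ln 8: veridical
# The shared frame-edge term acts as a common pedestal and cancels in the difference.
```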
A comparison with—and critique of—anchoring theory
The fact that increments and decrements are processed differently by the visual system has been known since at least the work of Wallach (1948, 1963, 1976) and Heinemann (1955, 1972), each of whom used a matching technique to quantify the lightness of a disk surrounded by an annulus. Wallach studied the case of disks that were luminance decrements with respect to the annulus; Heinemann studied the case in which they were luminance increments. 
Heinemann found that the lightness of incremental targets was little affected by changes in surround luminance. This outcome has also been reported by several other authors (Agostini & Bruno, 1996; Diamond, 1953; Economou, Zdravković, & Gilchrist, 2007; Gilchrist, 1988; Jacobsen & Gilchrist, 1988; Kozaki, 1963, 1965). Wallach found that decremental disks looked alike whenever the ratio of the disk luminance to the annulus luminance was the same on the two sides of the display. Thus, for decrements, the surround had a very significant effect on the target lightness. 
A theory that reconciles these different patterns of results was proposed by Wallach (1948, 1963, 1976), following Koffka (1935) (Gilchrist, 2006). According to Wallach's theory, the highest luminance within a uniformly illuminated region is always seen as white (i.e., it is interpreted as a surface of high reflectance) and the lightnesses of all other surfaces appearing within that same framework of illumination are determined by their luminance ratios with respect to the white surface. In applying these two rules to the disk-annulus paradigm, Wallach proposed that the disk-annulus stimuli on the two sides of the display are interpreted as belonging to independently illuminated regions of the scene. Historically, Wallach's ideas were taken over into anchoring theory, where they have been applied to a larger set of lightness phenomena and supplemented with new theoretical principles (Gilchrist, 2006; Gilchrist et al., 1999; Gilchrist & Radonjić, 2009, 2010). 
The idea that the highest luminance is seen as white and the lightnesses of other surfaces are computed from the surface's luminance ratio with respect to the white point has also been adopted in other theories of lightness perception, including retinex (Land, 1986b), where it takes the form of an edge-based computation that calculates luminance ratios with respect to the white point for all of the surfaces in a mondrian. 
I will argue in what follows that neither the idea that the highest luminance always appears white, nor the idea that the lightnesses of the other surfaces within a framework of illumination are determined by their luminance ratios with respect to the white surface, holds up under scrutiny. Instead, I assert that the distance-dependent edge integration model proposed in the earlier sections of this paper gives a better account of the results that Wallach's theory was proposed to explain. To make this argument, I will first critique the evidence for highest luminance anchoring, then I will critique the evidence for ratio scaling. 
In support of anchoring theory, Gilchrist and his colleagues have presented evidence from three main classes of experiments. In the first class of experiments, an observer's head is placed in a dome, the inside of which is painted in two achromatic shades. The observer judges the lightness of each of the two achromatic surfaces by making Munsell matches. The experimental results indicate that the lighter of the two surfaces—the one with the highest luminance—always appears white. Importantly, this outcome applies regardless of whether the paint is actually white or very different from white; for example, dark gray (Gilchrist, 2006; Gilchrist & Cataliotti, 1994; Gilchrist et al., 1999; Gilchrist & Radonjić, 2009; Li & Gilchrist, 1999). The second class of experiments consists of the staircase Gelb experiments reviewed in the previous section (Cataliotti & Gilchrist, 1995; Gelb, 1929; Gilchrist, 2006). Here again, the surface having the highest luminance among those appearing within the illumination framework produced by the spotlight appears white, regardless of its actual reflectance. 
However, despite the fact that the results of the Gilchrist dome and staircase Gelb experiments are consistent with highest luminance anchoring, they are equally consistent with an alternative anchoring rule, which says that it is not the highest luminance, but rather the highest lightness, that appears white (Kingdom, 2011; Rudd & Zemach, 2005). Like one of the reviewers of this paper, the reader may wonder if I am proposing to explain lightness with lightness. The reasoning may sound circular, but the statement that it is actually the surface with the highest lightness that always appears white does in fact hold up to scrutiny. Hypothetically, the perceived gray level of the surface with the highest lightness in a dome or staircase Gelb display could depend on the other surfaces in the display. In that case, the highest lightness would not necessarily appear white because its appearance would change with context. Or if lightness perception were veridical, then the surface with the highest lightness would appear to be the shade of gray that it actually is. Again, this would violate the highest lightness rule. 
These examples should make it clear that the highest lightness anchoring rule is not self-evident. Nevertheless, neither the dome experiments nor the staircase Gelb results can decide whether it is the surface with the highest luminance or the surface with the highest lightness that always appears white because the surfaces having the highest lightness are always the same as the highest luminance surfaces in these paradigms. To decide between the two anchoring rules, we need to examine evidence from the third class of experiments cited by Gilchrist and colleagues in support of anchoring theory: namely, experiments based on disk-annulus stimuli. Such experiments have already been discussed in some detail above. Here we focus on what these experiments tell us about highest luminance versus highest lightness anchoring and why they actually argue against anchoring theory. 
In our 2005 experiment using incremental disk-annulus displays, Zemach and I found evidence that contradicts highest luminance anchoring but is nevertheless consistent with highest lightness anchoring. The average matching data from this study are presented in Figure 6, where they have been fit with a least-squares linear regression model. The model indicates that the slope of the plot relating target disk lightness (in log units) to the luminance of the annulus surrounding the target disk (also in log units) is −0.2. This small, but significantly negative, slope indicates the existence of a small lightness induction effect from the surround (Rudd & Zemach, 2005). 
Figure 10 illustrates this induction effect in the form of a simple perceptual demonstration, which is included here to convince the reader that the surround induction for increments is both real and perceptually salient. The two disks in the figure have the same luminance, and this luminance is the highest luminance in the figure as a whole, as well as the highest luminance within each individual disk-annulus configuration. According to highest luminance anchoring, these two disks should both appear white and thus equally light. But the left disk appears lighter. 
Figure 10. Contrast induction in an incremental disk-annulus stimulus, also known as a "wedding cake" stimulus (Rudd & Zemach, 2005). The two incremental disks in the figure have the same luminance, but the disk with the lower surrounding annulus luminance (left side) appears lighter. The distance-dependent edge integration model explains the contrast effect on the basis of the hypothesis that the perceptual weight given to the oriented luminance step at the annulus/background edge is smaller than the weight given to the luminance step at the disk/annulus edge. The total luminance step from the background to the disk (in log units) is the same for both disk-annulus stimuli, but the total step in lightness is smaller when the edge that gets the smaller perceptual weight comprises a larger portion of the total luminance step. In general, the model tends to squash the first tier of the wedding cake stimulus profile relative to the second tier in the process of mapping luminance to lightness.
Distance-dependent edge integration theory explains this result on the basis of Equations 3 and 4 (Rudd & Zemach, 2005), which are equivalent from the purely mathematical standpoint. Equation 4 perhaps best expresses the key principle of the model: namely, that the lightness-inducing effect of a step in log luminance, pointed in the direction of the target disk, is larger when the edge is located closer to the disk than when a step of the same size and contrast polarity is located farther away. The disks in Figure 10 both have the same total step in log luminance from the background field to the disk, but the spatial distribution of this step is different for the left and right disks. The left disk gets the larger part of the total step at the disk/annulus border, where the perceptual influence of edge-based lightness induction is largest. Thus the left disk is predicted by the edge integration model to appear lighter than the right disk, which it does. 
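The prediction for Figure 10 can be illustrated with a short numerical sketch. The weights used here (1/3 for the near disk/annulus edge and 0.13 for the far annulus/background edge) are illustrative values chosen to be roughly consistent with the incremental disk-edge weight of 1/3 and the −0.2 surround slope cited above; both disks share the same disk luminance and background.

```python
import math

w_near, w_far = 1.0 / 3.0, 0.13      # illustrative weights; the model only requires w_near > w_far
L_background, L_disk = 10.0, 100.0   # same background and disk luminance on both sides

for side, L_annulus in (("left (darker annulus)", 20.0), ("right (lighter annulus)", 60.0)):
    # Weighted sum of the two log-luminance steps along the path background -> annulus -> disk.
    lightness = (w_near * math.log(L_disk / L_annulus) +
                 w_far * math.log(L_annulus / L_background))
    print(f"{side}: predicted log lightness = {lightness:.3f}")

# The left disk places more of the shared total step at the heavily weighted
# near edge, so it is predicted to appear lighter, as observed.
```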
According to the model, such failures of the highest-luminance-as-white rule are intimately related to failures of Wallach's idea that the surface appearance is determined by the surface's luminance ratio with respect to the white surface. In the edge integration model proposed here, the ultimate reference point for lightness scaling is not the white point, but rather the common background—or greater spatial surround—against which the figures are viewed. This follows from the fact that the common background is assumed in the model to be the starting point for the edge integration process. Since scaling depends on computations in which the weights given to edges decline with distance from the target, spatial integration of edge information is imperfect. Different lightness values can be assigned to different highest luminance surfaces within the same image, depending on the scene geometry and the luminances of the surfaces lying between the common background and a given highest luminance surface. The lightness percepts generated by Figure 10 are consistent with this model property. 
The edge integration computation uniquely determines the relative lightnesses of all of the surfaces in the scene. But only after an anchoring principle is applied do actual (perceived) lightness values get assigned to surfaces. The model assumes that the value “white” is assigned to the surface with the highest relative lightness, as determined by the output of the edge integration computation. The value “white” is not assigned to the region with the highest luminance because anchoring occurs at a stage of neural processing at which information about raw luminance has been lost. 
In retinex, the relative lightness scale is the same as the ratio scale of the actual physical reflectances in the mondrian image. This is true because retinex gives weights of 1 to all edges in the edge integration process. As a result of this fact, the lightness assigned to a particular surface is independent of the spatial context in which the surface is viewed. Therefore, a surface will always match in lightness any other surface having the same luminance. An anchoring rule that assigns the value “white” to the surface or surfaces associated with the highest output of the edge integration process will automatically assign the value “white” to the highest luminance surface. This follows from the fact that, in retinex, a surface that produces the highest edge integration output is always a highest luminance surface. 
When retinex's edge integration algorithm—which gives every edge in the image a weight equal to 1—is replaced with an algorithm based on distance-dependent edge integration, the one-to-one correspondence between the highest value of the edge integrator output (i.e., the highest lightness) and the highest image luminance is broken. A given surface's relative positioning on the lightness scale is no longer predicted by the surface's relative positioning on the luminance scale; lightness now also depends on the scene geometry. This is how distance-dependent edge integration explains the observed violations of Type II constancy that include simultaneous contrast and the illusion shown in Figure 10. 
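The contrast between the two schemes can be made concrete with a small sketch. The weights and luminances below are hypothetical; the only claim carried over from the text is that retinex-style integration uses weights of 1, whereas the distance-dependent model down-weights the more remote edge.

```python
import math

def integrate(L_background, L_annulus, L_disk, w_near, w_far):
    """Sum of weighted log-luminance steps along the path background -> annulus -> disk."""
    return (w_near * math.log(L_disk / L_annulus) +
            w_far * math.log(L_annulus / L_background))

L_bg = 10.0
configs = {"left (dark annulus, disk 95)": (20.0, 95.0),
           "right (light annulus, disk 100)": (60.0, 100.0)}

for name, (L_ann, L_disk) in configs.items():
    retinex_like = integrate(L_bg, L_ann, L_disk, 1.0, 1.0)          # all weights 1: tracks luminance
    distance_wtd = integrate(L_bg, L_ann, L_disk, 1.0 / 3.0, 0.13)   # near edge weighted more heavily
    print(f"{name}: retinex-like = {retinex_like:.3f}, distance-weighted = {distance_wtd:.3f}")

# With weights of 1 the higher-luminance disk always gets the higher output; with
# distance-dependent weights the ordering can reverse, depending on scene geometry.
```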
It may be important to restate here, for emphasis, that the explanation of failures of Type II constancy given by distance-dependent edge integration theory is not identical to the classical explanation based on contrast of the target with respect to its surround. According to distance-dependent edge integration, the target lightness is computed from a weighted spatial sum of log luminance ratios, only the most significant of which corresponds to the local edge between the target and its immediate surround. Gilchrist (1988) proposed an explanation of simultaneous contrast that is qualitatively consistent with this idea before he abandoned the edge integration approach in favor of anchoring theory. The present model places his idea on a firmer mathematical footing. 
The equations that determine the relative lightness scaling in the edge integration model are considerably more complex than the simple ratio scaling rule proposed by Wallach. Given the fact that these equations depend on geometry and image segmentation rules, in addition to image luminances, it seems likely that the lightness scaling rules that apply to real world images—or even to arbitrary gray-level images—will appear even more complex when written in algorithmic form. But the fact that a more complex approach is needed is made clear by an accumulating number of results in the lightness literature that demonstrate clear violations of ratio scaling. The demonstration shown in Figure 10, simultaneous contrast, and Cataliotti and Gilchrist's (1994) compression effect are just a few examples of such ratio scaling violations. 
The current version of the edge integration model incorporates—in addition to the basic assumption of a spatial falloff in the magnitudes of edge weights—ancillary assumptions about the dependence of edge weights on edge contrast polarity and edge integration paths that do and do not contribute to the lightness computation. The important model assumption that only integration paths beginning on the common background make a contribution to edge integration implies that some knowledge of what part of the image constitutes the figure, and what part constitutes ground, must enter into the model calculations. If the theory is correct, then this knowledge must be somehow utilized by the brain to control the weights applied to edges prior to their spatial integration. 
The fact that the model's scaling calculations depend on figure-ground organization is another way in which the edge integration model differs from both retinex and anchoring theory. However, the model in its present form cannot account for several previously demonstrated effects of image segmentation on lightness (e.g., Agostini & Galmonte, 1999, 2002; Agostini & Proffitt, 1993; Benary, 1924; Bressan, 2001; Bressan & Kramer, 2008; Gilchrist, 1977; Laurinen, Olzak, & Peromaa, 1997; Sawayama & Kimura, 2013). So further rules based on image segmentation will likely be needed to specify the way in which the visual system assigns weights to edges. For example, the visual system is likely to also consider depth cues in the process of assigning weights to edges in real world scenes. 
Figure 11 illustrates several additional visual displays that my colleagues and I utilized in previous work to pit edge integration theory against anchoring theory. For each of the displays illustrated, our results supported edge integration theory and failed to support one or more critical predictions of anchoring theory. Some of these results depend on developments of edge integration theory that have been glossed over in the present work in order to simplify the presentation. These include effects of contrast gain control between edges (Rudd, 2010; Rudd & Popa, 2007), the importance of edge co-alignment to lightness induction (Zemach & Rudd, 2007), and top-down modulation of edge weights (Rudd, 2010). The reader is referred to the original papers for details. 
Figure 11. Some displays used in previous studies to generate lightness matches at odds with the predictions of anchoring theory. (a) When a decremental disk is surrounded by two concentric annuli and the outer annulus has a lower luminance than the inner annulus, the luminance of the outer annulus affects the disk lightness (Rudd & Zemach, 2004). This result is contrary to the anchoring theory prediction that a lower luminance surround should not affect the lightness of a target. It furthermore shows that the prediction is violated even when the local contrast at the disk/inner annulus border is held constant. (b) When a decremental target disk is surrounded by two concentric annuli and the outer annulus has the highest luminance in the display, the luminance of the inner annulus affects the target lightness even when the luminance of the outer annulus is held constant. A change in the contrast of the disk/inner annulus border modulates the strength of the lightness induction produced in the disk by the outer annulus, an effect known as "blockage" (Rudd, 2001; Rudd & Arrington, 2001). The first result demonstrates that it is not simply the luminance ratio of the disk relative to the highest luminance that determines the disk lightness, contrary to anchoring theory. An explanation of the second effect in terms of edge integration and contrast gain control between edges was proposed by Rudd and Popa (2007). The blockage effect is not predicted by anchoring theory. (c) The effects demonstrated with the stimulus in (b) are also seen when the outer annulus and dark background are replaced by a white background field (Rudd & Zemach, 2007), which is consistent with the explanation in terms of edge integration and contrast gain control proposed by Rudd and Popa and inconsistent with anchoring theory. The strength of the blockage effect is increased when the annulus width is reduced (Rudd, 2010), as predicted by the Rudd-Popa model. (d) Relative to the case of a disk embedded in a homogeneous surround, the disk lightness is not affected by surround articulation if the surround is articulated by carving it up into sectors comprising alternating light and dark wedges, as long as the average luminance of the surround is kept constant. However, if the surround is carved up to create a radial checkerboard pattern having the same average luminance as the original homogeneous surround, then the disk lightness is affected (Zemach & Rudd, 2007). The results are consistent with the idea that lightness varies depending on the proximity of contextual edges that are co-aligned with the target edges. But the results are not consistent with the account of anchoring theory based on a global comparison of the target lightness with the highest luminance. The stimuli in (a), (b), and (c) have been drawn to scale. Panel (d) adapted from Zemach and Rudd (2007); used with permission.
How the model views the relationship between lightness and brightness
Since I have suggested that Stevens' brightness law provides the “Rosetta Stone” for parameterizing the edge integration model, it may be important to further clarify here how the model views the theoretical relationship between lightness and brightness. By definition, brightness is perceived luminance, and lightness is perceived reflectance. Under some circumstances, however, the perceptual distinction of these two different qualia may be subtle, or even nonexistent. This is particularly true in the case of simple nonecological stimuli. For example, simultaneous contrast is sometimes described as a brightness illusion and sometimes as a lightness illusion. 
Arend and Spehar (1993a, 1993b) showed that brightness and lightness judgments can be made independently with simple square-frame stimuli and that, in general, different matches are obtained for the two types of qualia. In a previous paper (Rudd, 2010), I reported the results of experiments in which I had subjects perform both types of judgments with disk-annulus stimuli. I modeled the experimental results with a neural edge integration model. Both types of judgments were explained by a single unitary edge integration mechanism in which the weights associated with the inner and outer edges of the annulus are subject to top-down modulation that the observer intentionally controls to carry out different types of matching tasks. That model is consistent with the one proposed here. But here, the top-down edge weight modulation has been ignored in order to emphasize how the quantitative lightness–darkness asymmetry is explained in the context of edge integration theory. 
In keeping with the model presented in Rudd (2010), the view of the present work is that the quantitative laws governing brightness and lightness have some degree of overlap because the two qualia are computed by the same underlying neural machinery. If the retina exhibits a cube root response to increments but responds linearly to decrements, then brightness and lightness will both exhibit signatures of these different neural responses to luminance increments and decrements. 
However, this does not imply that brightness and lightness are the same thing. The retinal output may be transformed in different ways at the cortical level in the process of computing brightness and lightness percepts, and cognitive factors may enter into the interpretation of the neural image produced by the edge integration mechanism. The model presented in Rudd (2010) proposed that this is accomplished by cognitively informed adjustments of the edge weights in a neural edge integration process. This adjustment was presumed to occur prior to edge integration and before lightness or brightness is computed by task-relevant spatial integration of edge information. 
Perceptually, a small incremental spot viewed in an otherwise dark room is always interpreted as a bright object. But the same spot embedded in the spatial context of a larger scene may be interpreted as a surface, which may appear either white, nonwhite, or self-luminous. In the special case of an isolated light presented in the context of an otherwise dark void, we know that brightness is described by Stevens' power law with an exponent of 1/3 when brightness is measured by the technique of magnitude estimation (Equation 10). With brightness expressed in log units, Stevens' law takes the form of the linear Equation 11, in which the brightness law exponent is expressed as a weight applied to the disk edge. 
Rudd and Popa (2007) proposed an edge integration-based lightness model in which the exponentiation implied in going from Equation 11—the logarithmic version of Stevens' brightness law—to Equation 10—the power law version—is assumed to generalize to the domain of lightness perception. The model predicts that the lightness Φ of an annulus-embedded disk, if measured by the technique of magnitude estimation, will have the exponential form

Φ = (L_D / L_A)^w1 (L_A / L_B)^w2,  (14)

where L_D, L_A, and L_B denote the disk, annulus, and background luminances and w1 and w2 are the weights applied to the disk/annulus and annulus/background edges. Equation 14 is the exponentiated version of Equation 4. It appropriately reduces to Stevens' law (Equation 10) in the special case of a small incremental spot presented in the dark. I am currently performing experiments to test this prediction. 
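A minimal sketch of this reduction, using the exponentiated form shown above (Equation 14) with the weight values treated as assumptions, is given below: when the annulus luminance equals the background luminance (an isolated spot), the second factor drops out and the output follows a pure 1/3 power law.

```python
# Sketch of the reduction of the exponentiated model to Stevens' law.
# Assumed form: lightness = (L_disk/L_annulus)**w1 * (L_annulus/L_background)**w2.

def lightness(L_disk, L_annulus, L_background, w1=1.0/3.0, w2=0.13):
    return (L_disk / L_annulus) ** w1 * (L_annulus / L_background) ** w2

L_bg = 1.0
for L_disk in (10.0, 80.0, 640.0):
    # With no visible annulus (L_annulus == L_background), the second factor equals 1.
    print(L_disk, lightness(L_disk, L_annulus=L_bg, L_background=L_bg))

# Each 8-fold increase in disk luminance doubles the output (8**(1/3) == 2),
# which is the signature of Stevens' cube-root brightness law.
```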
Rudd and Popa further elaborated Equation 14 to include additional effects of contrast gain control acting between edges (Bindman & Chubb, 2004a, 2004b). Like the top-down modulation of edge weights proposed in my 2010 paper, the contrast gain control mechanism has been ignored here to keep things simple. Nevertheless, it has been shown in a number of previous studies that the model that includes contrast gain control provides a better quantitative fit to the data from experiments performed with disk-annulus stimuli (Rudd, 2001, 2003, 2010; Rudd & Arrington, 2001; Rudd & Popa, 2007; Rudd & Zemach, 2004, 2005, 2007; Vladusich, Lucassen, & Cornelissen, 2007). The results of those studies demonstrated that match disk luminance is actually a parabolic function (i.e., a second order polynomial) of the luminance of the annulus surrounding the target disk, when all display luminances are expressed in log units. For this reason, matching plots such as the ones shown in Figure 3 invariably exhibit some curvature, and this curvature is accounted for by contrast gain control between nearby edges in the full edge integration theory. 
For the matching plots presented in Figure 3, the degree of curvature is quite small. Nevertheless, it is statistically significant (Rudd & Zemach, 2004, 2005). This is interpreted in the model to mean that the effects of contrast gain control are weak under the conditions of these experiments and can therefore be ignored for present purposes. 
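The parabolic dependence described above can be illustrated with a short fitting sketch. The matching values below are hypothetical and constructed to have a small quadratic component; the point is only to show how curvature in a log-log matching plot is captured by adding a quadratic term in log annulus luminance.

```python
import numpy as np

# Hypothetical log-log matching data with a weak quadratic component
# (NOT the published results; for illustration only).
log_annulus = np.log(np.array([5.0, 10.0, 20.0, 40.0, 80.0]))
log_match = 2.0 - 0.20 * log_annulus - 0.01 * log_annulus ** 2

linear_fit = np.polyfit(log_annulus, log_match, 1)       # straight-line model
quadratic_fit = np.polyfit(log_annulus, log_match, 2)    # parabolic (second-order) model
print("linear coefficients:   ", linear_fit)
print("quadratic coefficients:", quadratic_fit)

# The quadratic coefficient is small relative to the linear one, mirroring the
# weak but reliable curvature attributed to contrast gain control between edges.
```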
When lightness is measured with a magnitude estimation technique instead of a matching technique, the complete edge integration theory—that is, the model that includes effects of contrast gain control and top-down modulation of edge weights in addition to the model properties discussed in the previous sections of the present work—predicts a lightness law exponent that is a second-order polynomial function of the contextual luminances (in log units) that surround the target disk in a disk-annulus display (Rudd & Popa, 2007). By extension, it is expected that the lightness function associated with more complex stimuli—what Adelson (2000) referred to as the lightness transfer function—should have an exponential form in which the exponent consists of a sum of terms involving weighted sums of the log luminances, and the squares of the log luminances, of the various image regions within the target's extended spatial surround. Based on estimates of the spatial range of edge integration derived from the matching data of Rudd and Zemach (2004), it is anticipated that weights associated with the various luminances within the surround should fall off with distance from the target, over a span of about 10°. Again, the exponential form of this function is expected to be revealed only when lightness is measured by the magnitude estimation technique. 
Because brightness and lightness are assumed in the model to be computed by the same underlying neural edge integration mechanism, albeit with different choices of top-down edge weights, the model rejects the common assumption that brightness is computed from a neural luminance signal and lightness from a neural contrast signal. According to the model, both brightness and lightness represent the outputs of “high-level” neural representations produced by a physiological process that first analyzes the retinal image by looking for local edge structure, then combines the information from local edges across space to create what is fundamentally an attempt on the part of the visual system to reconstruct, or model, the observer's visual environment. 
Possible neural basis of the computational model
Beginning in the retina, the neural signals supporting vision are segregated into parallel ON and OFF pathways. The potential thus exists for neural responses to increments and decrements to differ in their response gain. A common model of the response of an individual neuron to an input of intensity I is the Naka-Rushton function (Naka & Rushton, 1966):

R(I) = R_max I^n / (I^n + σ^n).

This function saturates at large values of I and is well-approximated by a power function for values of I sufficiently smaller than the parameter σ. Because the Naka-Rushton function takes the form of a power law for nonsaturating values, it could provide a neural basis for Stevens' law, provided that n = 1/3. To explain the lightness–darkness asymmetry, it would be necessary to assume that the exponent 1/3 applies only to ON cells and that n = 1 in the case of OFF cells. 
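The sketch below implements the Naka-Rushton function as written above (parameter values are illustrative) and checks its power-law behavior in the nonsaturating regime.

```python
import numpy as np

def naka_rushton(I, n=1.0/3.0, sigma=1000.0, R_max=1.0):
    """Naka-Rushton response: R_max * I**n / (I**n + sigma**n)."""
    return R_max * I ** n / (I ** n + sigma ** n)

# Intensities far below sigma, spaced by factors of 10.
I = np.array([0.001, 0.01, 0.1, 1.0])
R = naka_rushton(I)
print(R[1:] / R[:-1])
# In this regime each 10-fold increase in intensity multiplies the response by
# roughly 10**(1/3) ~= 2.15, i.e., the function behaves like a cube-root power law.
```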
Donner (1989) proposed another retinal account of Stevens' law that explains the law on the basis of a cube-root relationship between the slope of the rising photoreceptor response and the intensity of the input that generates that response. He provided evidence for this cube-root relationship in the retina of the frog Rana temporaria and showed experimentally that the initial ON-ganglion cell spike rate in the frog follows a cube-root law as a consequence of the rising rod response. Donner's results raise the possibility that the lightness–darkness asymmetry in humans might originate at the very first stage of visual processing. To prove that idea, it would be necessary to establish that the photoreceptor response to a decrease in light intensity is linearly related to the intensity decrease in primates, and that the OFF ganglion cell response to decrements is consequently linearly related to decremental intensity. 
Converging evidence that the neural response to incremental intensity is proportional to the cube root of luminance comes from psychophysical experiments demonstrating that both mean reaction time (Luce, 1986; Pieron, 1914) and the critical duration of Bloch's law (Raab, 1962; Rudd, 1996) vary inversely with the cube root of luminance under photopic conditions. Both findings are consistent with the idea that a neural decision making process integrates a spike train whose rate varies in proportion to the cube root of luminance to a criterion spike count (Rudd, 1996). 
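A minimal sketch of this integrate-to-criterion idea, with hypothetical rate and criterion parameters, shows why a cube-root spike rate implies latencies that fall off with the inverse cube root of luminance.

```python
# Integrate-to-criterion sketch (hypothetical parameters).
criterion_count = 30.0    # spikes that must accumulate before a response is triggered
rate_constant = 10.0      # spikes per second per unit of luminance**(1/3)

for luminance in (1.0, 8.0, 64.0):
    rate = rate_constant * luminance ** (1.0 / 3.0)     # cube-root spike rate
    time_to_criterion = criterion_count / rate          # latency ~ luminance**(-1/3)
    print(f"luminance = {luminance:6.1f}  time to criterion = {time_to_criterion:.3f} s")

# Each 8-fold increase in luminance doubles the spike rate and therefore halves
# the time to reach the criterion count, as the inverse cube-root law requires.
```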
Even if the quantitative asymmetry between lightness and darkness induction begins in the retina, a cortical theory is still needed to account for long-range spatial context effects, anchoring, and perceptual organization. For example, Zemach and Rudd (2007) showed that articulation effects in lightness depend on the co-alignment of the target edges with edges within the target's surround. Cortical mechanisms are needed to explain this result because the first stage of visual processing at which oriented contrast is neurally represented is in the population of simple cells within area V1. 
Edges with opposite contrast polarities excite different subpopulations of simple cells whose peak orientation sensitivities differ by 180°. Thus, simple cell responses to edges of opposite contrast polarities could potentially have different gain factors. These gain factors could, in turn, be inherited from the neurons in the retina or lateral geniculate nucleus (LGN). The "standard" computational model of a simple cell consists of a linear spatial image filter (Daugman, 1985; Jones & Palmer, 1987a, 1987b; Ringach, 2002), or computation of local Michelson contrast (Dean, 1981; Tolhurst, 1983), followed by a threshold nonlinearity (Carandini, 2006; Movshon, Thompson, & Tolhurst, 1978). Spikes are elicited when the filter output reaches a criterion level. Thus, to a first approximation, simple cells act as half-wave rectifiers of their spatially filtered input. 
It is not clear if this standard model can be reconciled with the results of Donner (1989) indicating that the cube-root response to luminance is already present in the photoreceptor rising phase because a cube-root response to luminance is fundamentally at odds with the linear filter assumption of the standard simple cell model. Perhaps amphibians and primates differ in terms of the neural mechanisms by which they encode brightness, but the photoreceptor rising response in frogs was shown by Donner to provide a possible explanation of the psychophysical brightness response in humans. The difficulty in reconciling the retinal data with the cortical model provides a motive for considering ways in which the standard model might be wrong. An alternative possibility is that there might be two distinct simple cell subclasses that are driven to fire by an edge of a particular contrast polarity. One class responds to ON cell excitation on one side of the edge and the other to OFF excitation on the other side of the edge. The firing rate of the first class would be proportional to the cube-root of incremental luminance that causes the ON cells to fire, while the firing rate of the second class would be linearly proportional to the decremental luminance that causes the OFF cells to fire. 
Another nonstandard simple cell model—one that has already been proposed in the literature—assumes that simple cells respond linearly to spatial steps in log luminance across the receptive field (Kinoshita & Komatsu, 2001; Rudd, 2010; Vladusich, Lucassen, & Cornelissen, 2006a). This model could also be modified to account for the lightness–darkness asymmetry, in this case by assuming that there are two subpopulations of simple cells that fire either to ON or OFF cell stimulation. The gain factor of the first class would be 1/3 as large as that of the second class. This gain difference could originate from the fact that ON cell responses are proportional to the cube root of incremental luminance, whereas OFF cell responses are directly proportional to decremental luminance. A logarithmic transformation occurring somewhere along the pathway from the retina to simple cells would then transform the exponents of the power law responses of ON and OFF cells into multiplicative gain factors at the simple cell level. 
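A minimal sketch of this second nonstandard simple cell model is given below. The polarity-dependent gains of 1/3 and 1 follow the proposal in the text; everything else (the function names and the half-wave rectification) is an illustrative assumption.

```python
import math

def simple_cell_pair(L_inside, L_outside, gain_on=1.0/3.0, gain_off=1.0):
    """Responses of two hypothetical half-wave-rectified simple cell classes to
    the step in log luminance across the receptive field: an ON-driven class
    (gain 1/3, fires when the inside is lighter) and an OFF-driven class
    (gain 1, fires when the inside is darker)."""
    step = math.log(L_inside / L_outside)       # signed step in log luminance
    on_response = gain_on * max(step, 0.0)
    off_response = gain_off * max(-step, 0.0)
    return on_response, off_response

print(simple_cell_pair(L_inside=80.0, L_outside=10.0))   # incremental edge: ON class responds
print(simple_cell_pair(L_inside=10.0, L_outside=80.0))   # decremental edge: OFF class responds
```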
To this point, the discussion of lightness computation in the present work has focused exclusively on stimuli possessing hard edges. But the concept of an "edge" can be generalized to refer to any oriented image contrast element that elicits a response from a simple cell having an asymmetric oriented receptive field (Rudd, 2010; Rudd & Zemach, 2004, 2005). Since simple cell receptive fields exist at many different spatial scales, orientations, and retinal locations, the population of simple cells—considered as a whole—encodes graded "edges," thus defined, at many spatial frequencies, orientations, and locations. If it is true that simple cells respond linearly to spatial differences in log luminance, then the simple cell population computes steps in log luminance for image features spanning the entire range of orientations and frequencies represented by the population. Thus, the second nonstandard simple cell model described above is not limited to processing stimuli that contain only hard edges, but rather provides a basis for computing lightness from arbitrary gray-scale images. 
To neurally implement the edge integration model proposed in earlier sections of this paper, all that would be required at a stage of cortical processing subsequent to V1 would be a long-range spatial summation that appropriately combines across space the outputs of simple cells responding to edges of different contrast polarities at different locations. I have previously proposed that this spatial summation is carried out by separate populations of "lightness" and "darkness" neurons possessing large receptive fields (Rudd, 2010), as illustrated in Figure 12. 
Figure 12. Model of lightness computation in the ventral stream of visual cortex. The achromatic color-encoding properties of lightness and darkness neurons in area V4 are explained by a mechanism whereby the large-scale receptive fields of these neurons spatially integrate the outputs of V1 neurons (transmitted via V2) that encode oriented contrast and thus respond to edges having a particular contrast polarity. (a) Lightness neurons spatially integrate the outputs of V1 neurons that fire in response to edges whose light sides point in the direction of the V4 neuron's receptive field center. The V1 neurons, in turn, are driven to fire by a predominance of ON cell, versus OFF cell, stimulation in the retina and LGN. (b) Darkness neurons spatially integrate the outputs of V1 neurons that fire in response to edges whose dark sides point in the direction of the V4 neuron's receptive field center. The V1 neurons, in turn, are driven to fire by a predominance of OFF cell, versus ON cell, stimulation in the retina and LGN. A further elaborated version of this cortical edge integration model that also incorporates effects of contrast gain control between edges and top-down control of edge weights can be found in Rudd (2010).
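One way to picture the scheme in Figure 12 is as a pair of spatial sums over polarity-specific edge signals, with weights that fall off with distance from the receptive field center. The sketch below is only a caricature of that idea; the exponential weighting function, space constant, and input format are all assumptions.

```python
import math

def rf_weight(distance_deg, space_constant_deg=4.0):
    """Hypothetical weight that decays with distance from the receptive field center."""
    return math.exp(-distance_deg / space_constant_deg)

def v4_responses(edges):
    """edges: list of (distance from the receptive field center in degrees,
    signed log-luminance step toward the center; positive means the light side
    of the edge points toward the center). Returns (lightness, darkness) sums."""
    lightness = sum(rf_weight(d) * s for d, s in edges if s > 0)
    darkness = sum(rf_weight(d) * -s for d, s in edges if s < 0)
    return lightness, darkness

# A nearby edge whose light side points toward the center, plus a more distant
# edge whose dark side points toward the center:
print(v4_responses([(1.0, +0.8), (6.0, -0.5)]))
```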
Several lines of evidence suggest that a cortical mechanism having the right properties to instantiate this long-range edge integration operation might be located in cortical area V4. First, from perceptual experiments it is known that visual completion mechanisms associated with illusory contour formation and border ownership can influence lightness (Dresp, Lorenceau, & Bonnet, 1990), and area V2 is the first cortical stage at which these completion processes are robustly represented (Craft, Schutze, Niebur, & von der Heydt, 2007; Peterhans & von der Heydt, 1989, 1991; Peterhans, von der Heydt, & Baumgartner, 1986; von der Heydt & Peterhans, 1989; von der Heydt, Peterhans, & Baumgartner, 1984; von der Heydt, Zhou, & Friedman, 2003; Zhou, Friedman, & von der Heydt, 2000). Therefore, an edge integration mechanism of the type illustrated in Figure 12 would have to be located within or beyond V2. V4 receives direct input from both V2 and V1 (Zeki, 1993) and is thus the first cortical processing locus at which lightness could be computed in a strictly feedforward manner. 
Evidence that V4 is the site of a long-range process that is responsible for at least some effects of spatial context on lightness comes from patients with neurological damage to V4. Such patients exhibit color constancy deficits, while retaining their ability to make local wavelength discriminations (Bartels & Zeki, 2000; Clarke, Walsh, Schoppig, Assal, & Cowey, 1998; Kennard, Lawden, Morland, & Ruddock, 1995; Kentridge, Heywood, & Cowey, 2004; Smithson, 2005; Walsh, 1999; Zeki, Aglioti, McKeefry, & Berlucchi, 1999; Zeki & Marini, 1998). The nature of these clinical deficits led Zeki and Marini (1998) to speculate that V4 is the locus of a "retinex-like" color constancy computation. I propose here that this retinex-like computation is actually the lightness computation described in the earlier sections of this paper, and that the final, edge integration, step of the computation is performed by lightness and darkness encoding neurons in V4, according to the scheme illustrated in Figure 12. 
Bushnell et al. (2011) examined the color sensitivities of V4 neurons using single cell recordings and classified the responses into subtypes. Their subtypes included two neural subpopulations that the authors referred to as "brightness" and "darkness" neurons. Brightness neurons responded only to stimuli that were luminance increments with respect to the background on which they were presented, whereas darkness neurons responded only to stimuli that were luminance decrements. In principle, these "brightness" and "darkness" neurons could derive their properties by combining the outputs of polarity-tuned edge responses generated in V1 or V2, according to the scheme illustrated in Figure 12. In that case, they would also instantiate the edge integration computation required by the lightness computation model. Consistent with this idea, V4 receptive fields are about the right size to explain the fact that edge integration occurs perceptually over a range of about 10°, as discussed above. The neurons classified by Bushnell et al. as "equiluminant" cells could similarly integrate edges defined by chromatic differences. In that case, V4 would instantiate a retinex-like color computation within multiple dimensions of color space. 
The neural edge integration mechanism illustrated in Figure 12 also provides at least a partial explanation of lightness and—by extension to other color dimensions—color filling-in phenomena, in which induction effects produced by lightness or color contrast at borders spread spatially to determine the color of regions lying between borders. Well known examples include the Craik-O'Brien-Cornsweet (Cornsweet, 1970; Craik, 1966; O'Brien, 1958) and watercolor (Pinna, Brelstaff, & Spillmann, 2001) effects. The edge integration mechanism predicts the existence of such filling-in because the large receptive fields of the model lightness and darkness neurons will produce achromatic color responses at a distance from a lightness- or darkness-inducing edge. When two or more edges are properly aligned within the model neuronal receptive field, the region lying between the borders can even be homogeneously filled in with a particular shade of gray, provided that the receptive field shape of the lightness or darkness neuron is right (Rudd, 2010). 
It is important to note that the lightness and darkness neurons in the model do not act simply as large-scale spatial filters in the manner of, say, the lowest spatial frequency filters in the ODOG lightness model (e.g., Blakeslee, Reetz, & McCourt, 2009). For one thing, the large receptive fields do not filter the image directly; instead, they integrate polarity-sensitive edge signals generated at earlier neural processing stages. To account for the lightness–darkness asymmetry, the early-stage edge signals are assumed to be differentially weighted prior to long-range spatial integration in V4, according to whether the edge step to which the early-stage edge detectors respond is an increment or a decrement relative to the center of the lightness or darkness neuron's receptive field. The edge contrast polarities that are selected for integration by a given V4 neuron differ on the two sides of the neuron's receptive field. A luminance step having the wrong polarity would thus not be “seen” by the model V4 neuron. For these reasons, the lightness and darkness neurons in the model behave quite differently from the large-scale linear filters in models such as ODOG.
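To make the distinction concrete, the following sketch (in Python) illustrates the kind of polarity-gated, distance-weighted summation just described. It is not an implementation of the actual cortical circuit; the one-dimensional layout, the exponential distance weighting, the falloff rate, and all stimulus values are assumptions introduced here only for illustration.

from math import exp

def edge_integration_response(edges, rf_center, neuron_type, falloff=0.1):
    # edges: list of (position, step) pairs; `step` is the log-luminance
    # difference between the right and left sides of the edge located at
    # `position` (degrees along a one-dimensional slice through the image).
    # A 'lightness' neuron integrates only edges whose light sides point
    # toward its receptive field center; a 'darkness' neuron integrates the
    # opposite polarity. Wrong-polarity steps are simply not "seen".
    response = 0.0
    for position, step in edges:
        light_side_toward_center = (step > 0) == (position < rf_center)
        if neuron_type == 'lightness':
            selected = light_side_toward_center
        else:
            selected = not light_side_toward_center
        if selected:
            weight = exp(-falloff * abs(position - rf_center))  # distance-dependent gain
            response += weight * abs(step)
    return response

# Decremental target centered at 0 deg inside a brighter frame; the frame's
# outer borders with a dim background lie at +/-3 deg (all values hypothetical).
edges = [(-3.0, +0.6), (-1.0, -0.3), (+1.0, +0.3), (+3.0, -0.6)]
print(edge_integration_response(edges, rf_center=0.0, neuron_type='lightness'))
print(edge_integration_response(edges, rf_center=0.0, neuron_type='darkness'))

Unlike an ODOG-style filter, the luminance pattern inside the receptive field never enters the computation directly; only edge signals of the selected polarity do.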
An even more significant departure from low-level filter models of lightness computation arises from the fact that the lightness and darkness neurons are proposed to be part of a larger cortical circuit that instantiates the edge integration model proposed in earlier sections of this paper. That model assumes that figural organization plays a role in selecting edges for spatial integration. The only edges that get integrated are ones lying along a direct path from a common background to the target (see Figure 9). If V4 is the neural site of edge integration, then there must be some neural machinery within the cortical pathway prior to V4 that selects edges for spatial integration as required by the edge integration model. This additional machinery must be sophisticated enough to know about figure-ground organization and to perform edge-weighting computations informed by image segmentation cues. We know from the neurophysiological work of von der Heydt and colleagues (Craft, Schutze, Niebur, & von der Heydt, 2007; Peterhans & von der Heydt, 1989, 1991; Peterhans, von der Heydt, & Baumgartner, 1986; von der Heydt & Peterhans, 1989; von der Heydt, Peterhans, & Baumgartner, 1984; von der Heydt, Zhou, & Friedman, 2003; Zhou, Friedman, & von der Heydt, 2000) that at least some of the required computations (e.g., computations related to border ownership) are carried out by circuits in V2. It is thus plausible to imagine that these circuits are sophisticated enough to adjust the neural gains of edge signals, prior to their spatial integration in V4, in a manner that embodies the computational principles required by the edge integration model. Exactly how this is accomplished is a problem for future research.
In the proposed cortical model, the mechanisms responsible for lightness filling-in are the same mechanisms that perform edge integration and lightness computation more generally. Filling-in is thus as “high level” a phenomenon as color computation itself. Principles of figural organization embodied by neural processes in V2 play a role in determining how the regions lying between borders get filled in with achromatic or chromatic color. Only after such preprocessing of edge signals has occurred is lightness filling-in accomplished in V4 by the feedforward edge integration architecture illustrated in Figure 12.
The hypothesis that color filling-in is produced by a feedforward process that maps edge signals from V1 → V2 → V4 is thus a plausible alternative to previously proposed filling-in models in which color signals diffuse from edges to fill in regions between edges in a spatiotopic cortical map (e.g., Cohen & Grossberg, 1984; Fang & Grossberg, 2009; Grossberg, 1987a, 1987b, 2003; Grossberg & Mingolla, 1985a, 1985b; Grossberg & Todorović, 1988; see also Meng, Remus, & Tong, 2005; Sasaki & Watanabe, 2004). The current generation of diffusion-based filling-in models can explain a host of perceptual completion phenomena, as well as other aspects of image segmentation that are not addressed by the present model. Nevertheless, the present model incorporates quantitative principles and operations (e.g., edge integration) that have not been addressed by diffusive filling-in models. In fact, it may be difficult, if not impossible, to carry out some of the operations proposed here in the context of a diffusion-based model. In particular, edge integration requires that steps in log luminance be summed across space, whereas all extant diffusive filling-in theories assume that diffusing color signals stop when they encounter the next border.
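The contrast between the two schemes can be stated compactly. In the edge integration account, the quantity that orders the target's lightness prior to anchoring is a weighted sum of directed steps in log luminance taken along the selected path from the background B to the target T. In notation introduced here purely for exposition (it is not copied from the model equations given earlier),

\[
\Phi(T) = \sum_{j \in \mathrm{path}(B \rightarrow T)} w_j \, \Delta_j, \qquad
\Delta_j = \log L_{\mathrm{near},j} - \log L_{\mathrm{far},j},
\]

where L_near,j and L_far,j are the luminances on the target side and the background side of the jth edge along the path, and w_j is the product of a distance-dependent factor and a contrast-polarity factor. A diffusing color signal, by contrast, is halted at the first border it reaches, so no comparable running sum over successive borders is ever formed.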
Summary and conclusions: Neural lightness and color computation considered as a process of cortical surface representation
In previous work, lightness matches made with disk-annulus stimuli were shown to be consistent with a lightness computation model based on the principle of distance-dependent edge integration (Reid & Shapley, 1988; Rudd, 2001, 2003, 2007, 2010; Rudd & Popa, 2007; Rudd & Zemach, 2004, 2005, 2007; Shapley & Reid, 1985; Vladusich, Lucassen, & Cornelissen, 2006b, 2007; Zemach & Rudd, 2007). Here, the idea that lightness computation in human vision is produced by a neural mechanism involving distance-dependent edge integration has been further supported by the results of a new experiment in which the luminance of a remote background field was manipulated to influence the lightness of a target square. Reversing the contrast polarity of the edge between a frame that immediately surrounded the target and the remote background field altered the strength of lightness induction produced by the remote field.
This polarity-specific remote edge-based induction effect cannot be explained by any theory of lightness computation in which lightness depends on a direct, long-range, visual comparison of the target's luminance with the luminance of the noncontiguous field. If such a theory were correct, then lightness would depend on the luminance polarity of the target vis-à-vis the noncontiguous field—assuming that there were any effect of polarity on induction strength at all—but it does not. For this reason, the effect cannot be explained by either of two leading theories of lightness computation: Gilchrist's anchoring theory (Gilchrist et al., 1999) and Bressan's double-anchoring theory (Bressan, 2006a, 2006b). In other words, the new experimental results are powerful enough to rule out a large class of otherwise plausible lightness models.
On the other hand, the new results are consistent with a distance-dependent edge integration theory in which the distance between the target and an edge combines with the edge contrast polarity to determine the overall contribution of the luminance step at the edge to the target lightness. The two factors—edge distance and edge contrast polarity—have been assumed here to make independent contributions to the weight given to an edge in computing lightness, although this assumption needs to be more thoroughly tested. A version of this two-factor edge integration model in which the magnitude of the weight associated with an edge is 1/3 when the light side of the edge points towards the target, and 1 when the dark side of the edge points towards the target, was shown to give a good quantitative account of the results of several lightness matching studies, including studies using both disk-annulus and staircase Gelb stimuli as displays. 
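As a worked illustration of how the two factors combine, the sketch below (Python) applies the 1/3-versus-1 polarity rule, together with an arbitrary distance factor, to a display like the one used in the present experiment: a decremental target square inside a brighter frame on a variable remote background. The luminance values and the remote-edge distance factor of 0.5 are hypothetical; the sketch is meant only to show why the predicted matching slope is shallower when the background is dimmer than the frame than when it is brighter.

def target_lightness(log_target, log_frame, log_background, remote_factor=0.5):
    # Weighted sum of the two directed log-luminance steps on the path
    # background -> frame -> target. Each weight is the product of a distance
    # factor (1 for the adjacent target/frame edge, `remote_factor` for the
    # remote frame/background edge; 0.5 is a made-up value) and a polarity
    # factor (1 when the dark side of the edge points toward the target,
    # 1/3 when the light side does).
    def polarity_factor(step_toward_target):
        return 1.0 if step_toward_target < 0 else 1.0 / 3.0

    step_frame_to_target = log_target - log_frame            # adjacent edge
    step_background_to_frame = log_frame - log_background    # remote edge
    return (polarity_factor(step_frame_to_target) * step_frame_to_target
            + remote_factor * polarity_factor(step_background_to_frame)
            * step_background_to_frame)

# Hypothetical log luminances (log cd/m^2): a decremental target inside a frame.
log_target, log_frame = 1.0, 1.5
for log_background in (1.0, 1.25, 1.75, 2.0):
    print(log_background, round(target_lightness(log_target, log_frame, log_background), 3))

With these numbers the computed value falls by 0.042 per 0.25 log unit of background luminance when the background is below the frame luminance, and by 0.125 per 0.25 log unit when it is above, reproducing the qualitative slope asymmetry shown in Figures 5 and 6.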
It has been further hypothesized here that the asymmetry in the induction strengths associated with lightness- and darkness-inducing edges arises from the same neural mechanism responsible for the power law relationship between brightness and luminance (Stevens' brightness law). On the basis of this hypothesis, a similar power law was predicted to govern the relationship between the strength of induced darkness and decremental luminance. The exponent of the darkness law is predicted to be 1; that is, the relationship between induced darkness and decremental intensity should be linear. 
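Written in symbols chosen here only for exposition, the two laws are

\[
B \propto L_{+}^{\,1/3}, \qquad D \propto L_{-}^{\,1},
\]

where B is the brightness evoked by an incremental stimulus of luminance L+ (Stevens' brightness law, with its exponent of approximately 1/3) and D is the darkness induced by a decremental stimulus of luminance L−, which the present account predicts to grow linearly with decremental intensity.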
A two-factor edge integration theory in which lightness and darkness induction strengths are characterized by edge-based induction strengths of 1/3 and 1, prior to distance-dependent weighting, can account for lightness matches made to the papers in a staircase Gelb display (Cataliotti & Gilchrist, 1995) only if it is additionally assumed that the only edges contributing to lightness computation are ones lying along a direct path from the common background to the target paper whose lightness is computed. Thus, the model requires that figure-ground segregation precede the computation of lightness in the visual processing stream. This is a novel idea in the realm of edge integration models of lightness, including machine vision models descended from Land and McCann's original retinex theory (McCann & Rizzi, 2012), but it is consistent with experimental data demonstrating effects of figural organization on lightness and color (e.g., Benary, 1924), in addition to recent findings from the cortical color neurophysiology literature. 
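A minimal numerical sketch of the resulting compression, in Python: assume, as in the account just summarized, that the only integrated edge for each paper in the spotlight is its own border with the common dark surround, that this incremental edge carries a weight of 1/3, and that the highest-luminance paper is anchored to white (90% reflectance). Under a common spotlight, luminance ratios equal reflectance ratios. The five reflectance values below are illustrative; they are not the exact papers used by Cataliotti and Gilchrist (1995).

# Predicted matches for a staircase Gelb display under the 1/3 weighting rule.
reflectances = [0.03, 0.09, 0.27, 0.54, 0.90]   # hypothetical paper reflectances
r_white = max(reflectances)                     # highest-luminance paper (the anchor)
for r in reflectances:
    ratio = r / r_white                         # equals the luminance ratio under one spotlight
    predicted_match = 0.90 * ratio ** (1.0 / 3.0)
    print(f"true reflectance {r:.2f} -> predicted match {predicted_match:.2f}")

The predicted matches span roughly 0.29 to 0.90 while the true reflectances span 0.03 to 0.90, which is the kind of compressed range plotted in Figures 7 and 8.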
The computations that form the algorithmic basis of the lightness model are likely carried out in the brain by a feedforward cortical circuit in which processes related to figural organization occur early—involving, but not limited to, area V2—and color computation occurs late—within or beyond V4. A theory of how these computations are neurally instantiated has been outlined. According to the neural theory, the edge integration step of the computation is performed by large-scale receptive fields in V4. The same V4 neurons that perform edge integration in order to compute color are also hypothesized to be responsible for color filling-in phenomena such as the Craik-O'Brien-Cornsweet and watercolor effects (Rudd, 2010). Such filling-in phenomena have often been cited as evidence for the special importance of edges in color induction. But the connection made here between filling-in and the need to spatially integrate steps in log luminance in order to construct a cortical representation of surface reflectance is unique to the present theory.
It is an interesting and open question why edge integration may have evolved as the preferred biological mechanism for representing surface information. Several authors have proposed that filling-in is needed in order to reconstruct information that is lost from the visual image as a result of the retinal blind spot, retinal or cortical scotomas, or the obstruction of light by retinal blood vessels (Cohen & Grossberg, 1984; Gerrits & Vendrik, 1970; Komatsu, Kinoshita, & Murakami, 2000; Ramachandran & Gregory, 1991). This argument also applies to the current model, in which neural surface representations are constructed from information about luminance steps at surface borders. An additional evolutionary pressure on the system architecture may have been engendered by the need to conserve metabolic resources that would be wasted if pointwise image luminances were represented at many levels of the visual hierarchy. 
In principle, a scheme for representing surface reflectance that begins by encoding local steps in log luminance and then perfectly reintegrates these steps at a later stage is capable of representing the same information that would be represented by a pixel-by-pixel encoding scheme. The model of neural surface representation proposed here does not perfectly reintegrate all luminance steps present in the input image. Nevertheless, it may have functional advantages over alternative computational schemes that do perfectly reconstruct the input. In particular, because cortical computations related to figural organization are assumed in the model to occur prior to color computation, the model allows for neural pattern recognition mechanisms to play a role in constructing visual surface representations via shape- or border-defined gain modulations that, in turn, can influence the representations that get “filled in.” The model thus includes a mechanism by which past learning can inform lightness judgments—and object representations more generally—and help to disambiguate potentially sparse sensory data through the use of statistically reliable perceptual heuristics. Furthermore, by including a top-down mechanism that allows for intentional selection of the edge steps to be integrated in any given color computation, the model enables cortical resources to be utilized to best advantage for the visual task at hand (Rudd, 2010).
The view of the present theory is that these functions are carried out by a combination of conscious and unconscious mechanisms prior to the generation of the visual representation that appears to the mind's eye. This phenomenal representation expresses the brain's interpretation of the visual scene, formed from a combination of bottom-up sensory data and top-down task-dependent weightings of the sensory data.
Acknowledgments
This work was partially supported by a grant from the University of Washington Royalty Research Fund to Michael Rudd. The author wishes to thank Jonathan An, Karl Arrington, Erin Harley, Dorin Popa, Amanda Heredia-Montesinos, Heather Patterson, Davida Teller, and Iris Zemach for their contributions to and feedback on the earlier work from which the current theoretical synthesis is derived. 
Commercial relationships: none. 
Corresponding author: Michael E. Rudd. 
Email: mrudd@u.washington.edu. 
Address: Howard Hughes Medical Institute, Department of Physiology and Biophysics, University of Washington School of Medicine, Seattle, WA. 
References
Adelson E. H. (2000). Lightness perception and lightness illusions. In Gazzaniga M. (Ed.), The new cognitive neurosciences (2nd ed., pp. 339–351). Cambridge, MA: MIT Press.
Agostini T. Bruno N. (1996). Lightness contrast in CRT and paper-and-illuminant displays. Perception and Psychophysics, 58, 250–258.
Agostini T. Galmonte A. (2002). Perceptual organization overcomes the effects of local surround in determining simultaneous lightness contrast. Psychological Science, 13, 89–93.
Agostini T. Galmonte A. (1999). Spatial articulation affects lightness. Perception and Psychophysics, 61, 1345–1355.
Agostini T. Proffitt D. R. (1993). Perceptual organization evokes simultaneous lightness contrast. Perception, 22, 263–272.
Arend L. E. Spehar B. (1993a). Lightness, brightness and brightness contrast: I. Illumination variation. Perception and Psychophysics, 54, 446–456.
Arend L. E. Spehar B. (1993b). Lightness, brightness and brightness contrast: II. Reflectance variation. Perception and Psychophysics, 54, 457–468.
Barlow H. B. (1953). Summation and inhibition in the frog's retina. Journal of Physiology, 119, 69–88.
Bartels A. Zeki S. (2000). The architecture of the colour centre in the human visual brain: New results and a review. European Journal of Neuroscience, 12, 172–190.
Benary W. (1924). Beobachtungen zu einem Experiment über Helligkeitskontrast [Observations on a brightness contrast experiment]. Psychologische Forschung, 5, 131–142.
Bindman D. Chubb C. (2004a). Brightness assimilation in bullseye displays. Vision Research, 44, 309–319.
Bindman D. Chubb C. (2004b). Mechanisms of contrast induction in homogeneous displays. Vision Research, 44, 1601–1613.
Blakeslee B. Reetz D. McCourt M. E. (2009). Spatial filtering versus anchoring accounts of brightness/lightness in staircase and simultaneous brightness/lightness contrast stimuli. Journal of Vision, 9 (3): 22, 1–17, http://journalofvision.org/9/3/22/, doi:10.1167/9.3.22.
Bonato F. Gilchrist A. L. (1999). Perceived area and the luminosity threshold. Perception and Psychophysics, 61, 786–797.
Bonato F. Gilchrist A. L. (1994). The perception of luminosity on different backgrounds and in different illuminations. Perception, 23, 991–1006.
Brainard D. H. (2004). Color constancy. In Chalupa L. Werner J. (Eds.), The visual neurosciences (vol. 1, pp. 948–961). Cambridge, MA: MIT Press.
Brainard D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 233–236.
Bressan P. (2001). Explaining lightness illusions. Perception, 30, 1031–1046.
Bressan P. (2006a). Inhomogeneous surrounds, conflicting frameworks, and the double-anchoring theory of lightness. Psychonomic Bulletin and Review, 13, 22–32.
Bressan P. (2006b). The place of white in a world of grays: A double-anchoring theory of lightness perception. Psychological Review, 113, 526–553.
Bressan P. Kramer P. (2008). Gating of remote effects on lightness. Journal of Vision, 8 (2): 16, 1–8, http://journalofvision.org/8/2/16/, doi:10.1167/8.2.16.
Bushnell B. N. Harding P. J. Kosai Y. Bair W. Pasupathy A. (2011). Equiluminance cells in visual cortical area V4. Journal of Neuroscience, 31 (35), 12398–12412.
Carandini M. (2006). What simple and complex cells compute. Journal of Physiology (London), 577, 463–466.
Cataliotti J. Gilchrist A. L. (1995). Local and global processes in lightness perception. Perception and Psychophysics, 57, 125–135.
Chevreul M. E. (1839/1967). The principles of harmony and contrast of colors and their applications to the arts. New York: Van Nostrand Reinhold.
Clarke S. Walsh V. Schoppig A. Assal G. Cowey A. (1998). Colour constancy impairments in patients with lesions of the prestriate cortex. Experimental Brain Research, 123, 154–158.
Cohen M. A. Grossberg S. (1984). Neural dynamics of brightness perception: Features, boundaries, diffusion, and resonance. Perception and Psychophysics, 36, 428–456.
Cornsweet T. N. (1970). Visual perception. New York: Academic Press.
Craft E. Schutze H. Niebur E. von der Heydt R. (2007). A neural model of figure-ground organization. Journal of Neurophysiology, 97, 4310–4326.
Craik K. J. W. (1966). The nature of psychology: A selection of papers, essays, and other writings by the late Kenneth J. W. Craik. Sherwood S. L. (Ed.). Cambridge, UK: Cambridge University Press.
Daugman J. G. (1985). Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 1, 1160–1169.
Dean A. F. (1981). The variability in discharge of simple cells in cat visual cortex. Experimental Brain Research, 44, 437–450.
Diamond A. L. (1953). Foveal simultaneous brightness contrast as a function of inducing- and test-field luminances. Journal of Experimental Psychology, 45, 304–314.
Donner K. (1989). Visual latency and brightness: An interpretation based on the responses of rods and ganglion cells in the frog retina. Visual Neuroscience, 3, 39–51.
Dresp B. Lorenceau J. Bonnet C. (1990). Apparent brightness enhancement in the Kanizsa square with and without illusory contour formation. Perception, 19, 483–489.
Economou E. Zdravković S. Gilchrist A. (2007). Anchoring versus spatial filtering accounts of simultaneous lightness contrast. Journal of Vision, 7 (12): 2, 1–15, http://journalofvision.org/7/12/2/, doi:10.1167/7.12.2.
Fang L. Grossberg S. (2009). From stereogram to surface: How the brain sees the world in depth. Spatial Vision, 22, 45–82.
Gelb A. (1929). Die ‘Farbenkonstanz' der Sehdinge. In von Bethe W. A. (Ed.), Handbuch der Normal und Pathologische Psychologie (Vol. 12, pp. 594–678). Berlin: Springer.
Gerrits H. J. Vendrik A. J. (1970). Simultaneous contrast, filling-in process and information processing in man's visual system. Experimental Brain Research, 11, 411–430.
Gilchrist A. (2006). Seeing black and white. New York: Oxford University Press.
Gilchrist A. Cataliotti J. (1994). Anchoring of surface lightness with multiple illumination levels. Investigative Ophthalmology & Visual Science, 35, S2165.
Gilchrist A. Kossyfidis C. Bonato F. Agostini T. Cataliotti J. Li X. (1999). An anchoring theory of lightness perception. Psychological Review, 106, 795–834.
Gilchrist A. L. (1988). Lightness contrast and failures of contrast: A common explanation. Perception and Psychophysics, 43, 415–424.
Gilchrist A. L. (1977). Perceived lightness depends on perceived spatial arrangement. Science, 195, 185–187.
Gilchrist A. L. Radonjić A. (2009). Anchoring of lightness values by relative luminance and relative area. Journal of Vision, 9 (9): 13, 1–10, http://journalofvision.org/9/9/13/, doi:10.1167/9.9.13.
Gilchrist A. L. Radonjić A. (2010). Functional frameworks of illumination revealed by probe disk technique. Journal of Vision, 10 (5): 6, 1–12, http://journalofvision.org/10/5/6, doi:10.1167/10.5.6.
Grossberg S. (1987a). Cortical dynamics of three-dimensional form, color, and brightness perception, I: Monocular theory. Perception and Psychophysics, 41, 87–116.
Grossberg S. (1987b). Cortical dynamics of three-dimensional form, color, and brightness perception, II: Binocular theory. Perception and Psychophysics, 41, 117–158.
Grossberg S. (2003). Filling-in the forms: Surface and boundary interactions in visual cortex. In Pessoa L. DeWeerd P. (Eds.), Filling-in: From perceptual completion to skill learning (pp. 13–37). New York: Oxford University Press.
Grossberg S. Mingolla E. (1985a). Neural dynamics of form perception: Boundary completion, illusory figures, and neon color spreading. Psychological Review, 92, 173–211.
Grossberg S. Mingolla E. (1985b). Neural dynamics of perceptual grouping: Textures, boundaries, and emergent segmentations. Perception and Psychophysics, 38, 141–171.
Grossberg S. Todorović D. (1988). Neural dynamics of 1-D and 2-D brightness perception: A unified model of classical and recent phenomena. Perception and Psychophysics, 43, 241–277.
Heinemann E. G. (1972). Simultaneous brightness induction. In Jameson D. Hurvich L. (Eds.), Handbook of sensory physiology (vol. VII/4, pp. 146–169). Berlin: Springer.
Heinemann E. G. (1955). Simultaneous brightness induction as a function of inducing- and test-field luminances. Journal of Experimental Psychology, 50, 89–96.
Hubel D. H. Wiesel T. N. (1962). Receptive fields, binocular interaction, and functional architecture of monkey striate cortex. Journal of Physiology (London), 160, 106–154.
Hubel D. H. Wiesel T. N. (1959). Receptive fields of single neurons in the cat's striate cortex. Journal of Physiology (London), 148, 574–591.
Jacobsen A. Gilchrist A. (1988). Hess and Pretori revisited: Resolution of some old contradictions. Perception and Psychophysics, 43, 7–14.
Jones J. P. Palmer L. A. (1987a). An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58, 1233–1258.
Jones J. P. Palmer L. A. (1987b). The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58, 1187–1211.
Kennard C. Lawden M. Morland A. B. Ruddock K. H. (1995). Color discrimination and color constancy are impaired in a patient with incomplete achromatopsia associated with prestriate cortical lesions. Proceedings of the Royal Society (London) B, 206, 169–175.
Kentridge R. W. Heywood C. A. Cowey A. (2004). Chromatic edges, surfaces and constancies in cerebral achromatopsia. Neuropsychologia, 42, 821–830.
Kingdom F. A. A. (2011). Lightness, brightness and transparency: A quarter century of new ideas, captivating demonstrations and unrelenting controversy. Vision Research, 51, 652–673.
Kinoshita M. Komatsu H. (2001). Neural representation of the luminance and brightness of a uniform surface in the macaque primary visual cortex. Journal of Neurophysiology, 86, 2559–2570.
Koffka K. (1935). Principles of gestalt psychology. New York: Harcourt, Brace, and World.
Komatsu H. Kinoshita M. Murakami I. (2000). Neural responses in the retinotopic representation of the blind spot in the macaque V1 to stimuli for perceptual filling-in. Journal of Neuroscience, 20, 9310–9319.
Kozaki A. (1965). The effect of co-existent stimuli other than the test stimulus on brightness constancy. Japanese Psychological Research, 7, 138–147.
Kozaki A. (1963). A further study in the relationship between brightness constancy and contrast. Japanese Psychological Research, 5, 129–136.
Kuffler S. W. (1953). Discharge patterns and functional organization of mammalian retina. Journal of Neurophysiology, 16, 37–68.
Land E. H. (1986a). An alternative technique for the computation of the designator in the retinex theory of color vision. Proceedings of the National Academy of Sciences, USA, 83, 3078–3080.
Land E. H. (1986b). Recent advances in retinex theory. Vision Research, 26, 7–21.
Land E. H. (1983). Recent advances in retinex theory and some implications for cortical computations: Color vision and the natural image. Proceedings of the National Academy of Sciences, USA, 80, 5163–5169.
Land E. H. (1964). The retinex. American Scientist, 52, 247–253, 255–264.
Land E. H. (1977). The retinex theory of color vision. Scientific American, 237, 108–128.
Land E. H. McCann J. J. (1971). The retinex theory of vision. Journal of the Optical Society of America, 61, 1–11.
Laurinen P. I. Olzak L. A. Peromaa T. (1997). Early cortical influences in object segregation and the perception of surface lightness. Psychological Science, 8, 386–390.
Li X. Gilchrist A. (1999). Relative area and relative luminance combine to anchor surface lightness values. Perception and Psychophysics, 61, 771–785.
Luce R. D. (1986). Response times. New York: Oxford University Press.
Marks L. E. (1977). Scales of sensation: Prolegomena to any future psychophysics that will be able to come forth as science. Perception and Psychophysics, 16, 358–376.
Marks L. E. (1974). Sensory processes: The new psychophysics. New York: Academic Press.
McCann J. J. Rizzi A. (2012). The art and science of HDR imaging. West Sussex, UK: Wiley and Sons.
Meng M. Remus D. A. Tong F. (2005). Filling-in of visual phantoms in the human brain. Nature Neuroscience, 8 (9), 1248–1254.
Movshon J. A. Thompson I. D. Tolhurst D. J. (1978). Spatial summation in the receptive fields of simple cells in the cat's striate cortex. Journal of Physiology (London), 283, 53–77.
Munsell A. H. (1905). A color notation. Boston: G. H. Ellis Co.
Naka K. I. Rushton W. A. (1966). S-potentials from colour units in the retina of fish (Cyprinidae). Journal of Physiology, 185, 536–555.
Newhall S. M. Nickerson D. Judd D. B. (1943). Final report of the O.S.A. subcommittee on the spacing of the Munsell colors. Journal of the Optical Society of America, 33, 385–418.
O'Brien V. (1958). Contour perception, illusion, and reality. Journal of the Optical Society of America, 48, 112–119.
Onley J. W. (1961). Light adaptation and the brightness of brief foveal stimuli. Journal of the Optical Society of America, 51, 667–673.
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Peterhans E. von der Heydt R. (1989). Mechanisms of contour perception in monkey visual cortex. II. Contours bridging gaps. Journal of Neuroscience, 9, 1749–1763.
Peterhans E. von der Heydt R. (1991). Subjective contours: Bridging the gap between psychophysics and physiology. Trends in Neurosciences, 14, 112–119.
Peterhans E. von der Heydt R. Baumgartner G. (1986). Neuronal responses to illusory contour stimuli reveal stages of visual cortical processing. In Pettigrew J. D. Sanderson K. J. Levick W. R. (Eds.), Visual neuroscience (pp. 343–351). Cambridge, UK: Cambridge University Press.
Pieron H. (1914). Recherches sur les lois de variation des temps de latence sensorielle en fonction des intensités excitatrices. Année Psychologie, 20, 17–96.
Pinna B. Brelstaff G. Spillmann L. (2001). Surface color from boundaries: A new ‘watercolor' illusion. Vision Research, 41, 2669–2676.
Raab D. (1962). Magnitude estimation of the brightness of brief foveal stimuli. Science, 135, 42–43.
Ramachandran V. S. Gregory R. L. (1991). Perceptual filling in of artificially induced scotomas in human vision. Nature, 350, 699–702.
Reid R. C. Shapley R. (1988). Brightness induction by local contrast and the spatial dependence of assimilation. Vision Research, 28, 115–132.
Ringach D. L. (2002). Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88, 455–463.
Rudd M. E. (2007). Metacontrast masking and the cortical representation of surface color: Dynamical aspects of edge integration and contrast gain control. Advances in Cognitive Psychology, 3, 327–347.
Rudd M. E. (2009). An edge-based account of lightness compression and insulation in the staircase Gelb effect. Journal of Vision, 9 (8): 361, http://www.journalofvision.org/content/9/8/361, doi:10.1167/9.8.361.
Rudd M. E. (2010). How attention and contrast gain control interact to regulate lightness contrast and assimilation. Journal of Vision, 10 (14): 40, http://www.journalofvision.org/content/10/14/40, doi:10.1167/10.14.40.
Rudd M. E. (2001). Lightness computation by a neural filling-in mechanism. Proceedings of the Society of Photo-Optical Instrumentation Engineers, 4299, 400–413.
Rudd M. E. (1996). A neural timing model of visual threshold. Journal of Mathematical Psychology, 40, 1–29.
Rudd M. E. (2003). Progress on a computational model of human achromatic color processing. Proceedings of the Society of Photo-Optical Instrumentation Engineers, 5007, 170–181.
Rudd M. E. Arrington K. F. (2001). Darkness filling-in: A neural model of darkness induction. Vision Research, 41, 3649–3662.
Rudd M. E. Popa D. (2007). Stevens' brightness law, contrast gain control, and edge integration in achromatic color perception: A unified model. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 24, 2766–2782. Errata. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 24, 3335.
Rudd M. E. Zemach I. K. (2007). Contrast polarity and edge integration in achromatic color perception. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 24, 2134–2156.
Rudd M. E. Zemach I. K. (2005). The highest luminance rule in achromatic color perception: Some counterexamples and an alternative theory. Journal of Vision, 5 (11): 5, 983–1003, http://www.journalofvision.org/content/5/11/5, doi:10.1167/5.11.5.
Rudd M. E. Zemach I. K. (2004). Quantitative properties of achromatic color induction: An edge integration analysis. Vision Research, 44, 971–981.
Sasaki Y. Watanabe T. (2004). The primary visual cortex fills in color. Proceedings of the National Academy of Sciences, USA, 101, 18251–18256.
Sawayama M. Kimura E. (2013). Spatial organization affects lightness perception on articulated surrounds. Journal of Vision, 13 (5): 5, 1–14, http://www.journalofvision.org/content/13/5/5, doi:10.1167/13.5.5.
Shapley R. Reid R. C. (1985). Contrast and assimilation in the perception of brightness. Proceedings of the National Academy of Sciences, USA, 82, 5983–5986.
Smithson H. E. (2005). Sensory, computational, and cognitive components of human colour constancy. Philosophical Transactions of the Royal Society, 360, 1329–1346. doi:10.1098/rstb.2005.1633.
Stevens J. C. (1967). Brightness inhibition re size of surround. Perception and Psychophysics, 2, 189–192.
Stevens J. C. Marks L. E. (1999). Stevens power law in vision: Exponents, intercepts, and thresholds. In Killeen P. Uttal W. (Eds.), Fechner Day 99: Proceedings of the Fifteenth Annual Meeting of the International Society for Psychophysics (pp. 82–87). Tempe, AZ: ISP.
Stevens S. S. (1967). Intensity functions in neural systems. International Journal of Neurology, 6, 202–209.
Stevens S. S. (1953). On the brightness of lights and loudness of sounds [Abstract]. Science, 118, 576.
Stevens S. S. (1975). Psychophysics: Introduction to its perceptual, neural, and social prospects. New York: Wiley.
Stevens S. S. (1972). Sensory power functions and neural events. In Jameson D. Hurvich L. (Eds.), Handbook of sensory physiology (vol. VII/7, pp. 226–242). Berlin: Springer.
Stevens S. S. (1961). To honor Fechner and repeal his law. Science, 133, 80–86.
Tolhurst D. J. Movshon J. A. Dean A. F. (1983). The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Research, 23, 775–785.
van Hateren J. H. Ruderman D. L. (1998). Independent components analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex. Proceedings of the Royal Society (London) B, 265, 2315–2320.
van Hateren J. H. van der Schaaf A. (1998). Independent components filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society (London) B, 265, 359–366.
Vladusich T. Lucassen M. P. Cornelissen F. W. (2007). Brightness and darkness as perceptual dimensions. PLoS Computational Biology, 3, 1849–1858.
Vladusich T. Lucassen M. P. Cornelissen F. W. (2006a). Do cortical neurons process luminance or contrast to encode surface properties? Journal of Neurophysiology, 95, 2638–2649.
Vladusich T. Lucassen M. P. Cornelissen F. W. (2006b). Edge integration and the perception of brightness and darkness. Journal of Vision, 6 (10): 12, 1126–1145, http://journalofvision.org/6/10/12/, doi:10.1167/6.10.12.
von der Heydt R. Peterhans E. (1989). Mechanisms of contour perception in monkey visual cortex. I. Lines of pattern discontinuity. Journal of Neuroscience, 9, 1731–1748.
von der Heydt R. Peterhans E. Baumgartner G. (1984). Illusory contours and cortical neuron responses. Science, 224, 1260–1262.
von der Heydt R. Zhou H. Friedman H. S. (2003). Neural coding of border ownership: Implications for the theory of figure-ground perception. In Behrmann M. Kimchi R. Olson C. R. (Eds.), Perceptual organization in vision: Behavioral and neural perspectives (pp. 281–304). Mahwah, NJ: Lawrence Erlbaum.
von Helmholtz H. L. F. (1924). Treatise on physiological optics (Southall J. P. C., Trans.). Rochester, NY: Optical Society of America. (Original work published 1866).
Wallach H. (1948). Brightness constancy and the nature of achromatic colors. Journal of Experimental Psychology, 38, 310–324.
Wallach H. (1976). On perception. New York: Quadrangle Books.
Wallach H. (1963). The perception of neutral colors. Scientific American, 208, 107–116.
Walsh V. (1999). How does the cortex construct color? Proceedings of the National Academy of Sciences, USA, 96, 13594–13596.
Zeki S. (1993). A vision of the brain. Oxford: Blackwell.
Zeki S. Aglioti S. McKeefry D. Berlucchi G. (1999). The neurological basis of conscious color perception in a blind patient. Proceedings of the National Academy of Sciences, USA, 96, 14124–14129.
Zeki S. Marini L. (1998). Three cortical stages of colour processing in the human brain. Brain, 121, 1669–1686.
Zemach I. K. Rudd M. E. (2007). Effects of surround articulation on lightness depend on the spatial arrangement of the articulated region. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 24, 1830–1841.
Zhou H. Friedman H. S. von der Heydt R. (2000). Coding of border ownership in monkey visual cortex. Journal of Neuroscience, 20, 6594–6611.
Figure 1
 
Example of the effect of spatial context on surface color perception. (a) A white disk, which reflects light of all wavelengths equally, appears to have the color of a hidden monochromatic illuminant when viewed in an otherwise dark room. (b) Land's experiment: an array of papers of various colors is illuminated homogeneously by a combination of three narrowband short-, medium-, and long-wavelength lights. As long as all three lights are turned on, the amplitudes of the lights can be changed without greatly affecting the color appearance of the papers (color constancy).
Figure 2
 
How retinex works. The input is an achromatic Land “mondrian” pattern: a collection of papers having arbitrary shapes and a range of gray-scale reflectances, all lit by the same homogenous illuminant, and separated by hard edges. The goal is to assign lightness values (i.e., reflectance estimates) to all of the papers in the mondrian. (a) To compute the relative reflectances of any two papers in the mondrian—for example, the dark gray (9% reflectance) and white (90% reflectance) papers in the illustration—retinex first computes the local luminance ratio at an edge lying along a path between the two papers: for example, the edge between the light gray (45% reflectance) paper and the white paper. Because the mondrian papers are all viewed under the same illuminant, the luminance ratio at this edge is the same as the reflectance ratio of the two papers. (b) Other luminance ratios lying along the path connecting the target papers are similarly computed: for example, the luminance ratio between the light gray (45% reflectance) and dark gray (9% reflectance) papers. The relative reflectance of the two target papers is computed by multiplying the local luminance ratios encountered along the path connecting the two papers, whether or not the papers are contiguous. Absolute lightness judgments are arrived at by arbitrarily assigning the value “white” (a presumed reflectance value of 90%) to the mondrian paper or papers with the highest luminance, then computing reflectance estimates for the other mondrian papers from each paper's reflectance ratio relative to the highest luminance paper or papers, as determined by the chain multiplication of local luminance ratios. The retinex procedure may err in its estimation of absolute reflectance, but it gets the reflectance ratios of the mondrian papers right. Land and McCann (1971) suggested that the computation of local luminance ratios might be carried out by “edge detector” neurons (Hubel and Wiesel cells) in visual cortex. The receptive fields of such neurons are illustrated in cartoon form by the oriented blue icons in the figure.
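The chain-ratio computation described in this caption is simple enough to state in a few lines of Python. The sketch below uses the same 9%, 45%, and 90% papers as the illustration; the particular path and the luminance units are otherwise arbitrary assumptions made here for the example.

def relative_reflectance(luminances_along_path):
    # Multiply the local luminance ratios at successive edges along the path;
    # under a common illuminant this product equals the reflectance ratio of
    # the last paper relative to the first.
    product = 1.0
    for next_lum, current_lum in zip(luminances_along_path[1:], luminances_along_path[:-1]):
        product *= next_lum / current_lum
    return product

# Path through the mondrian: dark gray (9%) -> light gray (45%) -> white (90%),
# all under one illuminant (luminances in arbitrary common units).
path = [9.0, 45.0, 90.0]
ratio = relative_reflectance(path)        # = 10, i.e., the 90%/9% reflectance ratio
print("white paper relative to dark gray paper:", ratio)
# Anchoring: call the highest-luminance paper "white" (90% reflectance), so the
# dark gray paper is assigned 0.90 / ratio = 0.09, i.e., 9% reflectance.
print("estimated reflectance of the dark gray paper:", 0.90 / ratio)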
Figure 3
 
Average lightness matches for incremental and decremental disk-annulus stimuli as a function of surround luminance on a log-log scale (data from Rudd & Zemach, 2004, 2005). Colored lines are least-square linear regression models of the matches, computed separately for incremental and decremental disks. The absolute values of the slopes of these models estimate induction strength (i.e., contrast effect size) for the two classes of stimuli. Solid lines indicate the slopes that would obtain if the observer matches the disks on luminance or disk/annulus luminance ratio. These theoretical match lines correspond to the matches that are predicted by Gilchrist's anchoring theory to hold in the case of incremental and decremental targets, respectively (see text for details). The luminances of the target disk and the annulus surrounding the match disk were (in units of log cd/m2): 0.5 and −0.112 (incremental disks); 0 and 0.6 (decremental disks).
Figure 4
 
The matching display used in the current experiment. Two identical decremental square patches were surrounded by solid frames having identical luminances, but different widths. The width of the frame surrounding the right (target) square was 0.19°; the width of the frame surrounding the left (match) square was 1.78°. The luminance of the background field was varied over a range that included background luminances lower than the luminances of both squares; background luminances in between the luminances of the squares and frames; and background luminances greater than the frames' luminance. At each of 12 background luminance values, the observer adjusted the match square luminance to match the two squares in lightness.
Figure 5
 
Lightness matching predictions of the distance-dependent edge integration model supplemented by the rule that edge weights are larger for edges whose dark sides point towards the target than for edges whose light sides point towards the target. The model predicts (1) that the match settings will decrease linearly as a function of background luminance on a log-log scale, and (2) that the absolute value of the slope of the linear function will be larger when the background luminance exceeds the common luminance of the target and match frames than when it is less than the frames' luminance.
Figure 6
 
Dependence of lightness matches on the luminance of the remote background field. The data from each observer have been fitted with least-squares linear regression models, separately for background luminances above and below the frames' luminance. The linear model accounts for 86% of the variance in the lightness matches, averaged across observers and background luminance ranges. The model prediction that the absolute slope of the matching plot should be larger for backgrounds exceeding the frames' luminance is confirmed, independently for each observer.
Figure 7
 
Lightness matches made to the papers in a five-paper staircase Gelb display using a Munsell chart. The range of matches made when the papers are viewed in a spotlight is compressed relative to the actual luminance range of the papers. Luminance values on the x-axis are reported as luminance ratios relative to the luminance of the highest reflectance paper (124 cd/m2; 90% reflectance). Data from Cataliotti and Gilchrist (1995); figure from Gilchrist et al. (1999), used with permission.
Figure 8
 
Replot on a linear scale of the compression data shown in Figure 7. The lightness matches have been fitted with a power law regression model. The power law exponent of the least-squares model is 1/3, the same exponent that characterizes Stevens' power law model of brightness. Data from Cataliotti and Gilchrist (1995).
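For readers who want to reproduce this kind of fit, a power law with multiplicative error becomes a straight line in log-log coordinates, so its exponent can be estimated by ordinary linear regression. The sketch below (Python) does this with synthetic data generated from a 1/3 power law plus a little noise; the numbers are not the Cataliotti and Gilchrist (1995) matches.

import numpy as np

rng = np.random.default_rng(0)
luminance_ratio = np.array([0.033, 0.10, 0.30, 0.60, 1.00])
# Synthetic matches generated from a 1/3 power law with small multiplicative noise.
matches = 0.90 * luminance_ratio ** (1.0 / 3.0) * np.exp(rng.normal(0.0, 0.02, size=5))
# Regress log(match) on log(luminance ratio); the slope estimates the exponent.
slope, intercept = np.polyfit(np.log(luminance_ratio), np.log(matches), 1)
print("estimated exponent:", round(slope, 2))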
Figure 9
 
Integration paths used to calculate target lightness in two display configurations. (a) Disk-annulus stimulus. (b) Staircase Gelb stimulus. For each configuration, edge integration is carried out only along paths leading from the common background or surround to the target, as indicated by the red arrows. Red lines indicate edges that take part in the edge integration calculation. Edges marked with red X's do not take part in the edge integration calculation.
Figure 10
 
Contrast induction in an incremental disk-annulus stimulus, also known as a “wedding cake” stimulus (Rudd & Zemach, 2005). The two incremental disks in the figure have the same luminance, but the disk with the lower surrounding annulus luminance (left side) appears lighter. The distance-dependent edge integration model explains the contrast effect on the basis of the hypothesis that the perceptual weight given to the oriented luminance step at the annulus/background edge is smaller than the weight given to the luminance step at the disk/annulus edge. The total luminance step from the background to the disk (in log units) is the same for both disk-annulus stimuli, but the total step in lightness is smaller when the edge that gets the smaller perceptual weight comprises a larger portion of the total luminance step. In general, the model tends to squash the first tier of the wedding cake stimulus profile relative to the second tier in the process of mapping luminance to lightness.
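A hedged numerical illustration of this argument, with weights and step sizes invented for exposition and chosen only so that the nearer disk/annulus edge receives the larger weight: write the disk's lightness signal as

\[
\Phi = w_1 \, \Delta_{\mathrm{disk/annulus}} + w_2 \, \Delta_{\mathrm{annulus/background}},
\qquad w_1 = 1, \; w_2 = \tfrac{1}{2}.
\]

If the total log-luminance step from background to disk is 1.0 log unit for both disks, the disk with the dimmer annulus might have a disk/annulus step of 0.8 and an annulus/background step of 0.2, giving Φ = 0.9, whereas the disk with the brighter annulus would then have steps of 0.4 and 0.6, giving Φ = 0.7. The totals are identical, but the disk flanked by the dimmer annulus receives the larger value and is therefore predicted to appear lighter, as in the figure.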
Figure 11
 
Some displays used in previous studies to generate lightness matches at odds with the predictions of anchoring theory. (a) When a decremental disk is surrounded by two concentric annuli and the outer annulus has a lower luminance than the inner annulus, the luminance of the outer annulus affects the disk lightness (Rudd & Zemach, 2004). This result is contrary to the anchoring theory prediction that a lower luminance surround should not affect the lightness of a target. It furthermore shows that the prediction is violated even when the local contrast at the disk/inner annulus border is held constant. (b) When a decremental target disk is surrounded by two concentric annuli and the outer annulus has the highest luminance in the display, the luminance of the inner annulus affects the target lightness even when the luminance of the outer annulus is held constant. A change in the contrast of the disk/inner annulus border modulates the strength of the lightness induction produced in the disk by the outer annulus, an effect known as “blockage” (Rudd, 2001; Rudd & Arrington, 2001). The first result demonstrates that it is not simply the luminance ratio of the disk relative to the highest luminance that determines the disk lightness, contrary to anchoring theory. An explanation of the second effect in terms of edge integration and contrast gain control between edges was proposed by Rudd and Popa (2007). The blockage effect is not predicted by anchoring theory. (c) The effects demonstrated with the stimulus in (b) are also seen when the outer annulus and dark background are replaced by a white background field (Rudd & Zemach, 2007), which is consistent with the explanation in terms of edge integration and contrast gain control proposed by Rudd and Popa and inconsistent with anchoring theory. The strength of the blockage effect is increased when the annulus width is reduced (Rudd, 2010), as predicted by the Rudd-Popa model. (d) Relative to the case of a disk embedded in a homogeneous surround, the disk lightness is not affected by surround articulation if the surround is articulated by carving it up into sectors comprising alternating light and dark wedges, as long as the average luminance of the surround is kept constant. However, if the surround is carved up to create a radial checkerboard pattern having the same average luminance as the original homogeneous surround, then the disk lightness is affected (Zemach & Rudd, 2007). The results are consistent with the idea that lightness varies depending on the proximity of contextual edges that are co-aligned with the target edges. But the results are not consistent with the account of anchoring theory based on a global comparison of the target lightness with the highest luminance. The stimuli in (a), (b), and (c) have been drawn to scale. Panel (d) adapted from Zemach and Rudd (2007); used with permission.
Figure 12
 
Model of lightness computation in the ventral stream of visual cortex. The achromatic color-encoding properties of lightness and darkness neurons in area V4 are explained by a mechanism whereby the large-scale receptive fields of these neurons spatially integrate the outputs of V1 neurons (transmitted via V2) that encode oriented contrast and thus respond to edges having a particular contrast polarity. (a) Lightness neurons spatially integrate the outputs of V1 neurons that fire in response to edges whose light sides point in the direction of the V4 neuron's receptive field center. The V1 neurons, in turn, are driven to fire by a predominance of ON cell, versus OFF cell, stimulation in the retina and LGN. (b) Darkness neurons spatially integrate the outputs of V1 neurons that fire in response to edges whose dark sides point in the direction of the V4 neuron's receptive field center. The V1 neurons, in turn, are driven to fire by a predominance of OFF cell, versus ON cell, stimulation in the retina and LGN. A further elaborated version of this cortical edge integration model that also incorporates effects of contrast gain control between edges and top-down control of edge weights can be found in Rudd (2010).