Research Article | September 2003
The effect of perceived surface orientation on perceived surface albedo in binocularly viewed scenes
H. Boyaci, L. T. Maloney, S. Hersh
Journal of Vision, September 2003, 3(8):2. doi:https://doi.org/10.1167/3.8.2
Abstract

We examined how observers discount perceived surface orientation in estimating perceived albedo (lightness). Observers viewed complex rendered scenes binocularly. The orientation of a test patch was defined by depth cues of binocular disparity and linear perspective. On each trial, observers first estimated the orientation of the test patch in the scene by means of a gradient probe and then matched its perceived albedo to a reference scale. We found that observers’ perception of orientation was nearly veridical and that they substantially discounted perceived orientation in estimating perceived albedo.

Introduction
The amount and spectrum of light reflected from a surface depend on the reflectance characteristics of the surface and on the light sources that illuminate it. They can also depend on the orientation of the surface with respect to the light sources present in the scene. Here we examine the relationship between the perceived orientation and the perceived albedo (lightness) of matte surfaces in rendered three-dimensional (3D) scenes. 
The Lambertian Model
In Figure 1, we illustrate how light is absorbed and re-emitted from a Lambertian (matte) surface patch in a simple scene illuminated by two light sources, a punctate source and a diffuse source. The intensity of the punctate light source is denoted by EP and the intensity of the diffuse light source by ED. The angle between the direction to the punctate source and the surface normal N at a point is denoted by ϑ, and the angle between the surface normal and the direction to the viewer is denoted by ν.
Figure 1
 
A simple scene. A small region of a Lambertian surface is illuminated by a combination of a punctate light source and a diffuse light source. The mean intensity of light re-emitted from the surface is determined by the surface albedo α, the intensity of the diffuse light source, ED, the intensity of the punctate light source, EP, and the angle ϑ between the surface normal N and the direction to the punctate source. It does not depend on ν, the direction to the viewer (so long as the viewer can see the surface region) because light absorbed by a small Lambertian surface is re-emitted uniformly in a hemisphere centered on the patch.
When the punctate source and the viewer are on the same side of the surface patch (0° ≤ ϑ, ν < 90°), the luminance emitted from the surface along the direction to the viewer is
L = α (EP cos ϑ + ED),  (1)
where α is the albedo of the surface, the fraction of absorbed light that is re-emitted. Note that in Equation 1, EP and ED are in units of luminance. In the Lambertian model, the luminance does not depend on the direction to the viewer ν but only on the angle ϑ between the surface normal N and the direction to the light. When the punctate light is behind the plane of the surface patch (0° ≤ ν < 90°, ϑ ≥ 90°), the surface receives only diffuse illumination and Equation 1 becomes
L = α ED.  (2)
For convenience, we define E = EP + ED to be the total light intensity and π = EP / E to be the relative punctate intensity, the proportion of punctate light intensity in the total light intensity E.
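To make the two regimes concrete, the following short numerical sketch (in Python; the original experiment software was written in C, so this is purely illustrative) evaluates Equations 1 and 2 for a hypothetical patch. The intensity values are arbitrary and chosen only so that π = EP/E equals 0.62, the value used later in the experiment.

```python
import math

def lambertian_luminance(albedo, E_P, E_D, theta_deg):
    """Luminance re-emitted by a matte patch (Equations 1 and 2).

    E_P and E_D are the punctate and diffuse source intensities, in units of
    luminance as in the text. When the punctate source is in front of the
    patch (0 <= theta < 90 deg), it contributes E_P * cos(theta); when it is
    behind the patch, only the diffuse term remains.
    """
    theta = math.radians(theta_deg)
    punctate = E_P * math.cos(theta) if 0.0 <= theta_deg < 90.0 else 0.0
    return albedo * (punctate + E_D)

# A patch facing the punctate source (theta = 0) re-emits more light than the
# same patch rotated 60 degrees away from it, for the same albedo.
print(lambertian_luminance(0.55, 62.0, 38.0, 0.0))   # 0.55 * (62 + 38) = 55.0
print(lambertian_luminance(0.55, 62.0, 38.0, 60.0))  # 0.55 * (31 + 38) = 37.95
```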
Geometric Discounting Function
We define the Lambertian geometric discounting function,  
Γ(ϑ, π) = 1 / (π cos⁺ϑ + (1 − π)),  (3)
where cos⁺ϑ = cos ϑ for 0° ≤ ϑ < 90° and cos⁺ϑ = 0 otherwise (the punctate source is behind the surface).
We can then combine Equations 1 and 2 as  
L = α E (π cos⁺ϑ + (1 − π)) = α E / Γ(ϑ, π).  (4)
Note that when π is 0 (the light is perfectly diffuse) or ϑ is 0° (the direction to the punctate light coincides with the surface normal N), the geometric discounting function is 1. When ϑ is outside the range 0° to 90°, the punctate light is behind the surface and the geometric discounting function reflects only the diffuse component of the light. As we vary the angle ϑ, the right-hand side of Equation 4 reaches a maximum when ϑ is 0° and the light from the punctate source falls perpendicularly onto the surface. We solve Equation 4 for α to get
α = Γ(ϑ, π) L / E.  (5)
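A minimal sketch of Equations 3 through 5 follows, assuming the piecewise form of the discounting function given above (the cosine term is dropped when the punctate source is behind the surface). The numerical values are illustrative only.

```python
import math

def gamma(theta_deg, pi_rel):
    """Lambertian geometric discounting function, Equation 3.

    pi_rel is the relative punctate intensity pi = E_P / E. The cosine term
    is dropped when the punctate source is behind the surface.
    """
    cos_term = math.cos(math.radians(theta_deg)) if 0.0 <= theta_deg < 90.0 else 0.0
    return 1.0 / (pi_rel * cos_term + (1.0 - pi_rel))

def albedo_from_luminance(L, E, theta_deg, pi_rel):
    """Equation 5: alpha = Gamma(theta, pi) * L / E."""
    return gamma(theta_deg, pi_rel) * L / E

# Round trip: generate a luminance with Equation 4, then recover the albedo.
alpha, E, pi_rel, theta = 0.55, 100.0, 0.62, 40.0
L = alpha * E / gamma(theta, pi_rel)                # Equation 4
print(albedo_from_luminance(L, E, theta, pi_rel))   # ~0.55
```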
Asymmetric Lightness Matching
The geometric discounting function can be interpreted as follows. Suppose that an observer views two surface patches in a scene illuminated by a combination of a diffuse and a punctate light source. The surface normal of one surface (the reference surface) points directly at the punctate light source (ϑR = 0°). The angle between the surface normal of the other (the test surface) and the direction to the punctate light source is ϑT. The luminance of the test surface is set to a fixed luminance LT. The observer is asked to adjust the luminance LR of the reference surface until the perceived albedo αR of this patch matches the perceived albedo αT of the test surface. This task is an example of an asymmetric lightness matching task. 
If the observer correctly employs Equation 5 in estimating albedo, then when the albedos of the surfaces are judged to match,  
Γ(ϑT, π) LT / E = Γ(ϑR, π) LR / E.  (6)
Because ϑR = 0°, we have Γ(ϑR, π) = 1, and the previous equation can be rearranged as
Λ = LR / LT = Γ(ϑT, π).  (7)
If we repeat this asymmetric lightness setting task for many values of ϑT, then the setting ratio of the reference and test luminances, Λ = LR/LT, plotted against ϑT, traces out the curve Γ(ϑT, π). 
We refer to this plot as the observer’s geometric discounting function. By means of the asymmetric lightness task just described, we can estimate the observer’s geometric discounting function and compare it to the theoretical form for a Lambertian surface in Equation 3. In the experiment below, though, we will not assume that the observer has access to the correct values of the relative punctate intensity π, or the angle between the test surface and the direction to the punctate light source, ϑT. We will also not assume that observers perceive the reference patch as belonging to the scene and illuminated by the same light sources (see the discussion on local versus global frameworks in Gilchrist et al., 1999). We will show that we can still estimate the observer’s geometric discounting function up to an unknown positive scaling factor by measuring the setting ratio, Λ. 
To summarize, our goal is to estimate the form of the observer’s geometric discounting function and compare it to Equation 3. We will allow for the possibility that the observer’s perceptions of the layout of the scene, the location of the punctate light source, and the light source intensities are in error. If the observer’s geometric discounting function matches the Lambertian geometric discounting function, then the observer is discounting changes in surface orientation in estimating surface albedo. We next review previous research concerning this particular constancy in human vision. 
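The sketch below simulates the ideal Lambertian matcher described in Equations 6 and 7: with the reference surface facing the punctate source, the setting ratio Λ = LR/LT reproduces Γ(ϑT, π) at every test orientation. The luminance and π values are illustrative assumptions, not data from the experiment.

```python
import math

def gamma(theta_deg, pi_rel):
    # Equation 3, as in the sketch above.
    c = math.cos(math.radians(theta_deg)) if 0.0 <= theta_deg < 90.0 else 0.0
    return 1.0 / (pi_rel * c + (1.0 - pi_rel))

# The reference surface faces the punctate source (theta_R = 0, so Gamma = 1).
# An ideal Lambertian matcher therefore sets L_R = L_T * Gamma(theta_T, pi),
# and the setting ratio Lambda = L_R / L_T traces out the discounting function.
pi_rel, L_T = 0.62, 50.0
for theta_T in (0.0, 30.0, 50.0, 70.0, 110.0):
    L_R = L_T * gamma(theta_T, pi_rel)     # Equations 6 and 7
    print(theta_T, L_R / L_T)              # equals Gamma(theta_T, pi)
```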
Previous Research
Hochberg and Beck (1954) designed an experiment in which they placed a trapezoid upright on a table. When viewed monocularly, the trapezoid appeared to be lying flat on the table due to the available perspective cues. When viewed binocularly, it appeared to be orthogonal to the table. Several cubes were placed on the table to indicate the direction of the punctate light ϑ and the relative punctate intensity π. They found that observers’ estimates of the lightness (perceived albedo) of the surface differed in the two viewing conditions. The difference was in the direction consistent with Equation 5, but much smaller in magnitude than the equations would predict. Hochberg and Beck state that when there were no cubes, the effect of viewing condition disappeared, consistent with the claim that observers used information about the direction of the punctate light source and its relative intensity in estimating surface albedo. 
Flock and Freedberg (1970) also used conflicting perspective and stereo cues to manipulate perceived surface orientation. The test object was a trapezoid placed upright on a desk whose perceived orientation differed in monocular and binocular viewing. Flock and Freedberg studied the effect of the presence or absence of cubes and the albedo of the achromatic table surface. They found the largest effect of viewing condition on perceived albedo when the cubes were present and the table surface was white. They note that increases in the effect with the white surface could be due to increased visibility of luminance gradients. However, even the largest effect corresponded to a change in perceived orientation of between 1.4° and 4.8° instead of the intended change of 90°. They concluded that the effect was in the correct direction but it was far too small to support the claim that perceived orientation has any meaningful effect on lightness. 
Epstein (1961) also used the same paradigm of conflicting cues to study the effect of perceived orientation on lightness. The stimuli consisted of a plane rotated 45° and a triangle attached to the plane with an angle of 20°. When viewed monocularly, the triangle looked as if it was lying flat on the plane; when viewed binocularly, the author states that its actual orientation was perceived. Epstein found no effect of viewing condition and concluded that the results support a local contrast explanation. However, this was probably due to the design of the experiment: the light source was circular and centered along the line of sight of the observer, and was not visible to the observer. There were no other objects visible in the scene to provide any cues. In a similar experimental design, Redding and Lester (1980) also found no effect of viewing condition on lightness. There is no indication that the authors in any of these studies attempted to measure the perceived orientation of the test surface in either the monocular or binocular viewing condition. 
Coplanar Ratio Hypothesis
Gilchrist (1977,1980) performed an elegant series of experiments to investigate the effect of perceived orientation on lightness. He sought to explain why previous research had failed to find the expected effect. He used the same method of conflicting cues as earlier studies. Test patches were trapezoids whose perceived orientation and shape differed in monocular and binocular vision. In contrast to the earlier experiments, Gilchrist arranged the stimuli so that in different viewing conditions, the trapezoid appeared to lie coplanar with one or the other of two background planes, which were perpendicular to each other. In monocular vision, the test patch looked like a rectangle lying flat on a horizontal plane, but in binocular vision, it was perceived as coplanar with a vertical plane perpendicular to the horizontal one. With this setup, he found the expected effect of perceived orientation on lightness. When the experiment was repeated without the horizontal and vertical background planes, the effect disappeared. 
Gilchrist concluded that his results could be explained by what he referred to as the coplanar ratio hypothesis: local contrast plays a role in lightness judgment only if the regions of interest are coplanar and at the same depth. Gilchrist credits this hypothesis to Kardos (1934). Therefore, he concluded, lightness must be intimately related to the perceived geometric layout of the scene. 
Schirillo and Shevell (1993) tested the coplanar ratio hypothesis using two achromatic Mondrians viewed in a stereoscope. The Mondrians appeared to be at different depths and one appeared to be more brightly lit than the other. Observers judged the lightness of a test patch placed at varying depths between the two Mondrians. They found little or no effect of the depth of the patch on perceived albedo (as measured by lightness judgments) even when the patch was coplanar with one or the other Mondrian. (In their abstract, Schirillo & Shevell state that they found an effect of depth on lightness judgments. However, in their “Experiment 2: Lightness Matching with gray surrounding,” only observer JS, the first author of the paper, showed the expected behavior. Two other naive subjects did not. In their "Experiment 3: Black surrounding,” they did find an effect but it was only 17% on average instead of the expected 500%.) 
Related Effects of Scene Layout
Knill and Kersten (1991) reported a striking effect of perceived surface curvature on lightness. They investigated the Craik-O’Brien-Cornsweet effect (Cornsweet, 1970), which is customarily interpreted as evidence for contrast ratio theories. They rendered the outline of the classical stimulus in such a way that it looked like two 3D cylinders placed side by side, instead of two flat rectangles. Although there was no change in the local contrast ratio, as soon as the stimuli were perceived as 3D cylinders, the classical effect almost completely disappeared. They state that the change in the luminance of the surface was perceived either as an illumination change or as a surface albedo change, depending on the assumed 3D shape. When the change in luminance was attributed to illumination change, the usual effect almost disappeared. Only when the change was attributed to the surface albedo was the classical Craik-O’Brien-Cornsweet effect present. Pessoa, Mingolla, and Arend (1996) showed that perceiving the 3D shape of ellipsoids improved the accuracy of lightness matches. Bloj, Kersten, and Hurlbert (1999) showed that the perceived color of a surface is affected by the perceived 3D shape through mutual illumination. All of these results suggest that color and lightness are influenced by the 3D layout of the scene. 
Experiment
Introduction
In this experiment, we estimate the form of the observer’s geometric discounting function (Equation 3) for six observers by means of an asymmetric lightness matching task. We allow for the possibility that the observer’s estimates of scene layout and lighting are in error. In particular, we asked observers, on every trial, to estimate the orientation of a test surface as well as to match the perceived albedo of a reference surface to that of the test surface. 
Methods
Stimuli
The stimuli were computer-rendered, complex 3D scenes composed of simple objects with different shapes (such as spheres and boxes) and with different reflectance properties (such as shiny, matte, and transparent). All scenes were rendered with the Radiance software package (Larson & Shakespeare, 1996). Each scene was rendered twice, with slightly different viewpoints corresponding to the positions of the observer’s eyes. A stereo pair for a typical scene is shown in Figure 2.
Figure 2
 
A stereo image pair. The stimuli were computer-rendered 3D scenes presented binocularly in a Wheatstone stereoscope. All scenes contained a matte, gray central cube with an attached matte test surface, “hinged” to one side of the cube. The central cube was always displayed in the same orientation and with the same albedo, and roughly in the same location. From trial to trial, either the orientation of the test patch, its albedo, or both could vary. Each scene also contained several additional objects, specular, matte, or both, that were varied from trial to trial. To form a stereo pair, each scene was rendered twice with different viewpoints corresponding to the positions of the eyes of the observer. The stereo pair is displayed twice: once for uncrossed fusion (use the left-hand pair) and once for crossed fusion (use the right-hand pair).
Each scene was rendered with a mixture of diffuse and punctate illumination. Each contained a large grey cube whose surface properties were never varied and whose location remained approximately unchanged. We refer to this cube as the central cube. A test patch was attached to the central cube. We varied its orientation and albedo from trial to trial as described below. The center of the test patch was always in the same position in the scene. Each scene also contained a number of small objects that were randomly varied from trial to trial. These objects could be shiny, matte, or partly shiny and matte. 
Coordinate System and Spatial Arrangement
We used a spherical coordinate system (Ψ,ϕ,r) to specify a simulated scene (Figure 3A). This coordinate system (Figure 3A) has its origin at the center of the test patch and is most easily specified if we first set up a Cartesian coordinate system (x, y, z) with the same origin. The z-axis lies along the observer’s line of sight to the center of the test patch. The y-axis is vertical, in the fronto-parallel plane. The x-axis is horizontal, also in the fronto-parallel plane. The positive half of the x-axis, y-axis, and z-axis are to the observer’s right, upward, and toward the observer, respectively. If we represent a point as a vector (x, y, z), the angle Ψ is the angle between the positive z-axis and the projection of the point into the xz-plane, and it ranges from −180 to 180 degrees. The angle ϕ is the angle between the point and the xz-plane, and it ranges between −90 and +90 degrees. Any direction away from the origin can be specified by coordinates (Ψ,ϕ). The positive x-axis, for example, is (90°,0°), the positive z-axis is (0°,0°), the direction toward the observer. The third coordinate r is the radial distance from the origin to a point. We report radial distances and other measurements in centimeters, at the size that the simulated objects were presented to the observer. 
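The conversion between the two coordinate systems follows directly from the definitions above. The sketch below (illustrative Python, not part of the experimental software) implements this convention and checks it against the positive x-axis, (90°, 0°), and against the punctate-source position given in Figure 3B.

```python
import math

def spherical_to_cartesian(psi_deg, phi_deg, r):
    """(Psi, phi, r) -> (x, y, z) under the convention above: Psi is measured
    from the +z axis within the xz-plane (positive toward +x); phi is the
    elevation out of the xz-plane (positive toward +y)."""
    psi, phi = math.radians(psi_deg), math.radians(phi_deg)
    return (r * math.cos(phi) * math.sin(psi),
            r * math.sin(phi),
            r * math.cos(phi) * math.cos(psi))

def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    psi = math.degrees(math.atan2(x, z))   # angle of the xz-projection from +z
    phi = math.degrees(math.asin(y / r))   # elevation from the xz-plane
    return psi, phi, r

# The positive x-axis comes out as (Psi, phi) = (90, 0), as stated above, and
# the punctate source of Figure 3B at (x, y, z) = (-55, 40, 140) cm comes out
# as approximately (-21.44, 14.89, 155.64).
print(cartesian_to_spherical(1.0, 0.0, 0.0))
print(cartesian_to_spherical(-55.0, 40.0, 140.0))
```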
Figure 3
 
Coordinate systems. A. Cartesian and spherical coordinates. The origin of the coordinate system was the center of the test surface. Cartesian coordinates are specified as (x, y, z) vectors. The z-axis lies along the line of sight from the observer to the origin. The y-axis is vertical, and the x-axis horizontal, both in the fronto-parallel plane containing the origin. The positive side of the x-axis is to the observer’s right, of the y-axis is upward, and of the z-axis is toward the observer. In the spherical coordinate system (Ψ, ϕ, r), Ψ is the angle between the z-axis and the projection of any vector into the xz-plane. The angle ϕ is the angle between a vector and the xz-plane. The radius r is measured from the origin in centimeters. N is the unit normal to the surface; P is a vector in the direction of the punctate source. B. Spherical coordinates and the test surface. The viewpoint is from above, along the y-axis, which is shown with the symbol ⊙, pointing out of the page. The central cube is shown in outline. The possible orientations of the test patch (ΨT = {−50°, −40°, −30°, 0°, 30°, 40°, 50°}) are shown, as well as the direction to the projection of the punctate light source onto the xz-plane. The origin of the coordinate system is always in the middle of the test patch, the observer is always at (x, y, z) = (0, 0, 70 cm), which is (Ψ, ϕ, r) = (0°, 0°, 70 cm) in spherical coordinates, and the punctate light source is always at (x, y, z) = (−55 cm, 40 cm, 140 cm), which is (Ψ, ϕ, r) = (−21.44°, 14.89°, 155.64 cm) in spherical coordinates. To maintain the center of the test patch at the origin, the central cube is displaced slightly from trial to trial depending on the orientation of the test patch being displayed.
Test Patch
The test patch was 4.8 cm by 3.6 cm. The test patch could appear at any one of seven orientations, specified by the normal to the surface patch (ΨT, ϕT): ΨT could take on any of the values {−50°, −40°, −30°, 0, 30°, 40°, 50°} while ϕT was always 0°. When ΨT = −30°, the test patch was orthogonal to the face of the central cube to which it was attached. Figure 3B shows a schematic of the scene, seen from above, with the seven possible orientations of the test patch marked. For reference, the large cube in the center of the scene is shown. The test patch was rendered with one of two slightly different albedos, DARK and LIGHT. The DARK patch had albedo α = 0.55. We included two different albedos to encourage observers to make fine lightness discriminations in the lightness task described below. 
Light Sources
The scene was illuminated by a diffuse light source and a punctate light source. The punctate light source was placed at (ΨP, ϕP, rP) = (−21.44°, 14.89°, 155.64 cm). In Cartesian coordinates, the punctate light source was 70 cm behind, 55 cm to the left of, and 40 cm above the observer’s viewpoint. The punctate source was sufficiently far from the scene that we could treat it as collimated across the extent of the test patch. The direction to the punctate source is specified by the angles (ΨP, ϕP) = (−21.44°, 14.89°). It was never varied. The diffuse-punctate balance was always π = 0.62. 
The angle ϑ between the normal to the test patch (ΨT, ϕT) and the direction to the light source (ΨP, ϕP) is,  
cos ϑ = cos ϕT cos ϕP cos(ΨT − ΨP) + sin ϕT sin ϕP.  (8)
Because ϕT = 0° over the course of the experiment, we can simplify the above to
cos ϑ = cos ϕP cos(ΨT − ΨP),  (9)
or,
ϑ = arccos[cos ϕP cos(ΨT − ΨP)].  (10)
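The sketch below evaluates Equation 8 for the orientations used in the experiment and confirms that, with ϕT = 0°, it reduces to the simplified form of Equations 9 and 10 (illustrative Python only).

```python
import math

def angle_to_light(psi_T, phi_T, psi_P, phi_P):
    """Equation 8: angle theta (degrees) between the surface normal
    (psi_T, phi_T) and the direction to the punctate source (psi_P, phi_P)."""
    pT, fT = math.radians(psi_T), math.radians(phi_T)
    pP, fP = math.radians(psi_P), math.radians(phi_P)
    cos_theta = (math.cos(fT) * math.cos(fP) * math.cos(pT - pP)
                 + math.sin(fT) * math.sin(fP))
    return math.degrees(math.acos(cos_theta))

# With phi_T = 0 the general formula reduces to Equations 9 and 10:
# theta = acos(cos(phi_P) * cos(psi_T - psi_P)).
psi_P, phi_P = -21.44, 14.89
for psi_T in (-50.0, -30.0, 0.0, 30.0, 50.0):
    full = angle_to_light(psi_T, 0.0, psi_P, phi_P)
    reduced = math.degrees(math.acos(math.cos(math.radians(phi_P))
                                     * math.cos(math.radians(psi_T - psi_P))))
    print(psi_T, round(full, 2), round(reduced, 2))   # identical values
```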
Apparatus
The left and right images were presented to the corresponding eye of the observer on two 21″ Sony Trinitron Multiscan GDM-F500 monitors placed to the observer’s left and right (Figure 4). The screens on these monitors are close to physically flat, with less than 1 mm of deviation across the surface of each monitor. Two small mirrors were placed directly in front of the observer’s eyes. These mirrors reflected the images displayed on the left and right monitors into the corresponding eye of the observer. Look-up tables were used to correct the nonlinearities in the gun responses and to equalize the display values on the two monitors. The tables were prepared after direct measurements of the luminance values on each monitor with a Pritchard PR-650 spectrometer. The maximum luminance achievable on either screen was 114 cd/m2. The stereoscope was contained in a box 124 cm on a side. The front face of the box was open; the observer sat there, with head position stabilized by a chin/head rest. The interior of the box was coated with black flocked paper (Edmund Scientific, Tonawanda, NY) to absorb stray light. Only the stimuli on the screens of the monitors were visible to the observer. The casings of the monitors and any other features of the room were hidden behind the nonreflective walls of the enclosing box. 
Figure 4
 
The experimental apparatus. Stimuli were displayed in a computer-controlled Wheatstone stereoscope. The left and right images of a stereo pair were displayed on the left and right monitors of the stereoscope. The observer viewed them by means of small mirrors placed in front of his or her eyes. In the fused image, the test surface appeared approximately 70 cm in front of the observer. This distance was also the optical distance to the screens of the two computer monitors, minimizing any mismatch between accommodation cues and other depth cues.
Additional light baffles were placed near the observer’s face to prevent light from the screens reaching the observer’s eyes directly. The optical distance from each of the observer’s eyes to the corresponding computer screen was 70 cm. To minimize any conflict between binocular disparity and accommodation depth cues, the test patches were rendered to be exactly 70 cm in front of the observer. The monocular fields of view were 55 deg × 55 deg of visual angle each. The observer’s eyes were approximately at the same height as the center of the scene being viewed, which was also the height of the center of the test patch. 
Tasks
The observer had two tasks to perform on each trial. He or she first estimated the orientation of the test patch by adjusting a monocular stick-and-circle gradient probe superimposed on the middle of the test patch (Figure 5A). The orientation of the probe was controlled by moving a computer mouse. Note that the gradient probe was presented monocularly and that the observer had only one degree of freedom in the setting, the azimuth Ψ. The elevation was always set to the correct value, ϕ = 0°. Observers reported no difficulty with setting the probe and were unaware that it was visible to only the right eye. 
When the observer was satisfied with the gradient settings, he or she pressed a mouse button to go on to the second task in the trial. The setting probe disappeared and an array of lightness reference chips appeared on the right-hand side of the scene (Figure 5B). The array of chips was presented monocularly, to the right eye. The observer’s second task was to match the lightness of the test patch by choosing one of the reference chips. The order of the chips was randomly permuted for each successive trial. The observer chose the chip that he or she thought matched the test patch in lightness. The key variable here is the measured luminance of the reference patch that the observer chose as matching the test patch. 
Figure 5
 
Orientation and lightness matching tasks. A. Orientation task. The observer was asked to adjust a monocular circle-and-stick gradient probe until the circle fell within the plane of the test surface and the stick was orthogonal to the test surface. The gradient probe was presented monocularly, to the right eye. B. Lightness matching. The observer was asked to select the reference surface on a lightness scale that matched the test surface in “surface material.” The order of reference surfaces on the lightness scale was re-randomized on every trial. The lightness scale was presented monocularly, to the right eye.
Software
The experimental software was written by us in the C language. We used the X Window System, Version 11R6 (Scheifler & Gettys, 1996) running under Red Hat Linux 6.1 for graphical display. The computer was a Dell 410 Workstation with a Matrox G450 dual head graphics card and a special purpose graphics driver from Xi Graphics that permitted a single computer to control both monitors. The monitors were synchronized by a common signal from the Matrox board. We used the open source physics-based rendering package Radiance (Larson & Shakespeare, 1996) to render the left and right images that comprised the stereo pair for a given virtual scene. The output of the rendering described above was a stereo image pair with floating point RGB triplets for each pixel. These triplets were interpreted as the relative luminance values that would arrive at points in the retinas if the observer’s eyes were at the viewpoints selected in the virtual scene. We translated the output relative luminance values to 24-bit graphics codes, correcting for nonlinearities in the monitors’ responses by means of measured look-up tables for each monitor. 
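As an illustration of the look-up-table step, the sketch below builds an inverse table that maps a desired relative luminance to the 8-bit gun level whose measured luminance is closest. The gamma-2.2 response used here is synthetic; it merely stands in for the spectroradiometric measurements described above and is not the authors’ calibration code.

```python
import bisect

def build_inverse_lut(measured_luminances):
    """measured_luminances[i] is the luminance measured at gun level i (0-255),
    assumed monotonically increasing. Returns a function that maps a desired
    relative luminance (fraction of the maximum) to the nearest 8-bit code."""
    max_lum = measured_luminances[-1]

    def to_code(relative_luminance):
        target = relative_luminance * max_lum
        i = bisect.bisect_left(measured_luminances, target)
        if i == 0:
            return 0
        if i >= len(measured_luminances):
            return len(measured_luminances) - 1
        below, above = measured_luminances[i - 1], measured_luminances[i]
        return i if (above - target) < (target - below) else i - 1

    return to_code

# Synthetic gamma-2.2 response with a 114 cd/m^2 maximum; a real table would
# come from the per-monitor measurements described above.
measured = [114.0 * (level / 255.0) ** 2.2 for level in range(256)]
to_code = build_inverse_lut(measured)
print(to_code(0.5))   # gun level whose luminance is closest to half the maximum
```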
Procedure
The observer repeated each of the 14 conditions of the experiment (7 test patch orientations, 2 test patch albedos) 20 times, for a total of 280 trials. The order of presentation was randomized. The observer was allowed to perform a few practice trials before the actual experiment started, until he or she was completely comfortable with both tasks. The experiment was paced by the observer. The observer was encouraged to take a short break after 140 trials. The entire experiment took each observer less than an hour. 
Observers
Six observers completed the experiment. All were experienced psychophysical observers who were unaware of the purpose of the experiment. 
Instructions to the Observer
For the orientation task, observers were simply instructed to move the probe until the probe circles were in the plane of the test surface. For the lightness task, we asked observers to choose the reference patch on the lightness scale that appeared to be made of the same material as the test surface (Arend & Reeves, 1986). 
Analysis and Results
The key dependent variables are the observer’s estimate Ψ̂T of the orientation of the test patch on each trial and the setting ratio Λ = LR / LT of the luminance of the reference surface chosen as a match to the luminance of the test surface. In particular, if the setting ratio is unaffected by perceived orientation, then the observer is not discounting perceived orientation from estimates of perceived albedo. We will also look for an effect of true orientation (as opposed to perceived orientation) on perceived albedo, but, as we will see in a moment, there is little difference between perceived and true orientations in these scenes. We first report the orientation estimates Ψ̂T and then the effect of this perceived orientation on the setting ratios Λ. 
Orientation Settings
Observers’ mean settings of surface orientation in each condition are plotted against the true values in Figure 6. We have also plotted the best-fitting regression line to the results for each observer. There was no significant difference in regression coefficients for the test patches with LIGHT albedo and with DARK albedo for any of the six observers. Separate two-way ANOVAs for each observer indicated no significant interaction between albedo and orientation: the p values for observers JJG, CBC, EPB, LR, MM, and NB were p = 0.095, 0.260, 0.975, 0.520, 0.596, and 0.779, respectively. There was no significant main effect of albedo (the smallest p value was p = 0.021, for observer MM; for observers CBC, EPB, JJG, LR, and NB, the p values were p = 0.996, 0.783, 0.430, 0.105, and 0.137). The main effect of orientation was significant for all observers (p < 0.001). For all observers, the slopes were significantly less than 1 (p < 0.05). However, the deviations from veridical were modest: all slopes but one were between 0.84 and 0.96, the exception being 0.63 for observer NB. 
Figure 6
 
Results: Orientation settings. Results for six observers are shown. Each observer’s mean orientation settings Ψ̂T are plotted against the corresponding true orientations ΨT, using filled squares for DARK tests and empty squares for LIGHT ones. The blue solid line has unit slope. The settings of a veridical observer would fall on that line. The red dashed lines show the linear regression fit to the observer’s settings. The slope of the fit for each observer is given in the plots. All slopes are significantly different from 1 (with an overall Type I error rate of 0.05 and a Bonferroni correction for six multiple tests).
Lightness Estimates
We showed above (Equation 7) that for a Lambertian observer who based his or her albedo matches on Equation 5, the luminance setting ratio Λ = LR / LT would equal the Lambertian geometric discounting function Γ(ϑT, π). In deriving this identity, we assumed that the observer made use of accurate estimates of the parameters that describe the orientation and albedo of the test surface and the orientation and lighting of the reference surfaces. What happens to Equation 7 if the observer’s estimates of these parameters are in error? Let ϑ̂T, ÊT, etc. denote the estimates of ϑ, E, etc. that an observer substitutes into Equation 5 in order to estimate the albedo of the test surface. Let ϑ̂R, ÊR, etc. denote the corresponding estimates for the reference surfaces. We assume that these estimates, although unknown, do not change over the course of the experiment. Let L̂T denote the observer’s estimate of the luminance of the test patch on a trial, and L̂R the observer’s estimate of the luminance of the reference patch. Then we can represent the observer’s estimate of the albedo of the test patch as α̂T = Γ(ϑ̂T, π̂) L̂T / ÊT and that of the reference patch as α̂R = Γ(ϑ̂R, π̂R) L̂R / ÊR. Once the observer has chosen a reference surface whose apparent albedo matches the apparent albedo of the test surface, we have α̂T = α̂R. Then the equation for Λ becomes  
Λ = LR / LT = m Γ(ϑ̂T, π̂),  (11)
where m is a multiplicative constant that collects the reference-surface terms, which by assumption do not change over the course of the experiment. 
The net effect of these assumptions is that the luminance ratio of the Lambertian observer is proportional to the Lambertian geometric discounting function Γ(ϑ̂T, π̂), but computed with whatever estimates of the orientation and lighting conditions the observer has substituted for the correct values. We will use this fact to obtain estimates of some of these parameters from each observer’s data. In our analysis, we do not assume that we know how the observer interprets the lighting conditions of the reference patch. 
Note first of all that we cannot determine all of the observer’s estimates of parameters in Equation 5 and in the other equations. We cannot, for example, obtain an estimate of E, the total light intensity, from asymmetric lightness matches because scaling the overall light intensity by the same factor for the test scene and for the reference surface would likely lead to the same matching behavior. We insert Equation 10 into Equation 1 to obtain,  
L = α (EP cos ϕP cos(ΨT − ΨP) + ED).  (12)
Because ϕP never changes throughout the experiment, EP is confounded with cos ϕP. Changing the elevation ϕP of the punctate source is equivalent to scaling its intensity. We replace EP cos ϕP by EPeq, the equivalent punctate light intensity, and obtain L = α (EPeq cos(ΨT − ΨP) + ED). Further, we define πeq = EPeq / (EPeq + ED), the equivalent relative punctate source intensity. The geometric discounting function of the Lambertian observer can then be written as Γ(ΨT − ΨP; πeq). Given an estimate of the luminance that arrives at the eye, L̂, a visual system that has available estimates of E, π, and ϑ (denoted Ê, π̂, and ϑ̂, respectively) can compute an estimate of the albedo of the surface by substituting these estimates into Equation 5,  
α̂ = Γ(ϑ̂, π̂) L̂ / Ê.  (13)
An observer’s estimate ϑ̂ depends on his or her estimate of the orientation of the test patch, denoted Ψ̂T, and his or her estimate of the direction to the punctate light source, denoted Ψ̂P, through Equation 8. The observer might also misestimate the overall intensity of the light, E. Misestimation of E would simply lead to an overall scaling of perceived albedo but, as explained above, our estimates of perceived albedo are only determined up to an unknown scaling factor. Misestimation of E would simply alter this unknown value. We measured the component Ψ̂T of ϑ̂ explicitly by asking the observer to perform the orientation task. 
Now suppose that we hold the lighting conditions constant, in particular the parameters ΨP and πeq, and we vary the orientation of the surface by varying ΨT, as we do in the experiment. We plot examples of the geometric discounting function Γ(ΨT − ΨP; πeq) as a function of ΨT for different values of πeq and ΨP in Figure 7. Notice that changes to the parameter ΨP move the curve to the left and right (Figure 7A), whereas changes to the parameter πeq increase or decrease its curvature (Figure 7B). When πeq = 0, there is no punctate component to the illumination, and the geometric discounting function is a horizontal line. Indeed, the second derivative of the function Γ(ΨT − ΨP; πeq) with respect to surface orientation ΨT, evaluated at its minimum, ΨT = ΨP, is  
d²Γ/dΨT² |ΨT = ΨP = πeq.  (14)
It is evident that, given the curve Λ(Ψ̂T), we can estimate the parameters Ψ̂P (where the function takes on its minimum) and π̂eq (the curvature at the minimum). For a Lambertian observer with possibly erroneous estimates Ψ̂P and π̂eq, then, we can recover estimates of both of these parameters from measurements of the luminance setting ratio Λ (so long as π̂eq > 0). 
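The following sketch illustrates one way such a fit could be carried out. It assumes Gaussian error on the setting ratios and recovers m, Ψ̂P, π̂eq, and the noise level by maximum likelihood; the error model, the optimizer, and the synthetic data are our illustrative assumptions, not a description of the authors’ fitting procedure.

```python
import numpy as np
from scipy.optimize import minimize

def gamma(theta_deg, pi_eq):
    """Equation 3, vectorized; theta here is Psi_T - Psi_P in degrees."""
    theta = np.abs(np.asarray(theta_deg, dtype=float))
    cos_term = np.where(theta < 90.0, np.cos(np.radians(theta)), 0.0)
    return 1.0 / (pi_eq * cos_term + (1.0 - pi_eq))

def neg_log_likelihood(params, psi_hat_T, ratios):
    """Gaussian negative log likelihood for Lambda = m * Gamma(psi_hat_T - psi_P; pi_eq)."""
    m, psi_P, pi_eq, sigma = params
    if not (0.0 < pi_eq < 1.0) or sigma <= 0.0 or m <= 0.0:
        return np.inf
    resid = ratios - m * gamma(psi_hat_T - psi_P, pi_eq)
    return 0.5 * np.sum(resid ** 2) / sigma ** 2 + ratios.size * np.log(sigma)

# Synthetic data standing in for one observer: 20 settings at each of the
# seven orientations, generated from known parameter values.
rng = np.random.default_rng(0)
psi_hat_T = np.repeat([-50.0, -40.0, -30.0, 0.0, 30.0, 40.0, 50.0], 20)
ratios = gamma(psi_hat_T - (-21.44), 0.62) + rng.normal(0.0, 0.05, psi_hat_T.size)

fit = minimize(neg_log_likelihood, x0=[1.0, -10.0, 0.5, 0.1],
               args=(psi_hat_T, ratios), method="Nelder-Mead")
print(fit.x)   # recovered (m, psi_hat_P, pi_hat_eq, sigma)
```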
Figure 7
 
Geometric discounting function. We plot Γ(ΨT − ΨP;π) for different values of ΨP (the direction of projection of the punctate light source on the xz-plane) and π (the relative punctate intensity). In both A and B, blue solid lines show the theoretical curves obtained from Equation 3 for the correct values in the experiment reported. In contrast, if π = 0 (the light is diffuse only), then the geometric discounting function is a constant, 1, shown by the black dashed line in B. Intuitively, the lightness constant observer should assign a higher albedo for the same luminance as a surface is rotated away from the punctate light source. The function Γ(ΨT − ΨP;π) specifies how much greater the assigned albedo should be. Misestimation of the punctate light direction, ΨP, moves the curve left or right, as shown by the red lines in A. Misestimation of the relative punctate intensity, π, changes the curvature of the function, as shown by red dashed lines in B.
Figure 8 shows the empirical geometric discounting functions for all six observers, normalized so that the minima fall at 1 on the vertical axis. As mentioned above, if an observer were perfectly lightness constant, then his or her data would (after scaling by a multiplicative constant) fall on the curve of the geometric discounting function Γ(ϑ, π). This curve is plotted in blue in each plot. If, on the other hand, the observer bases his or her lightness estimate solely on the luminance of the test patch without taking the orientation into account, then the ratio would always be constant (the horizontal black line). 
Figure 8
 
Results. The observers’ geometric discounting functions Γ̂ versus perceived test surface orientation, Ψ̂T. Results for six observers are shown. If the observer does not correct for the perceived orientation of the surface, then the resulting points should lie on a horizontal line. A black horizontal line marks the mean of each observer’s geometric discounting function. The red curve is the result of a maximum likelihood fit of the Lambertian geometric discounting function Γ(Ψ̂T − Ψ̂P; π̂eq) to the observer’s performance, allowing for the possibility that the observer’s estimates of the equivalent relative punctate intensity, π̂eq, or of the direction to the illuminant, Ψ̂P, are not correct. The blue curve is a plot of the geometric discounting function with the correct value of πeq for the experimental conditions but with the observer’s estimates of Ψ̂T and Ψ̂P. Small blue upward arrows mark the correct light direction, ΨP, and downward red arrows mark the observers’ estimates, Ψ̂P.
Because we explicitly estimated Ψ̂T by asking the observer to perform the orientation task, we are left with possible errors in estimating πeq and the direction of the light ΨP as explanations for patterns observed in the data. We used a maximum likelihood fitting procedure to estimate values of these parameters that best accounted for each observer’s data separately. These estimates are shown in Table 1. First note that all observers apparently underestimate the equivalent relative punctate intensity of the light (whose actual value is πeq = 0.62). We tested the hypothesis that the observer’s estimate π̂eq is equal to the true value by means of a nested hypothesis test (Mood, Graybill, & Boes, 1974, p. 440). We nested the hypothesis that π̂eq = 0.62 within a model in which π̂eq was free to vary. We fit both models to the data by the method of maximum likelihood with the other parameters allowed to vary freely. The log likelihood of the constrained model (denoted λ0) must be less than or equal to that of the unconstrained model (denoted λ1). Under the null hypothesis, twice the difference in log likelihoods is asymptotically distributed as a χ² variable with one degree of freedom,  
2(λ1 − λ0) ∼ χ²(1),  (15)
and we use this result to test whether the observers’ estimates π̂eq were significantly different from the true value. We separately tested whether π̂eq = 0 (consistent with luminance matching) by a second application of the nested hypothesis test. We tested for each observer separately with the overall Type I error set to 0.05 and a Bonferroni correction for each series of six tests. 
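The test itself is a standard likelihood ratio test. A minimal sketch follows; the log likelihood values shown are placeholders, not observers’ data.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_constrained, loglik_unconstrained, df=1):
    """Nested hypothesis test of Equation 15: under the null hypothesis, twice
    the difference in maximized log likelihoods is asymptotically chi-square(df)."""
    statistic = 2.0 * (loglik_unconstrained - loglik_constrained)
    p_value = chi2.sf(statistic, df)
    return statistic, p_value

# lambda_0 from the fit with pi_eq fixed at 0.62, lambda_1 from the fit with
# pi_eq free to vary; the numbers below are placeholders only.
print(likelihood_ratio_test(loglik_constrained=-120.3, loglik_unconstrained=-112.8))
```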
Table 1
 
Experiment 1. Estimates for each Observer
Observer | Slope | π̂eq | Ψ̂P (deg) | Discounting index
Veridical values | 1 | 0.62 | −21.44 | 1
CBC | 0.85* | 0.20* (p < .0001) | −29.51 (p = .457) | 0.48
EPB | 0.89* | 0.15* (p < .0001) | −57.86* (p = .002) | 0.38
JJG | 0.84* | 0.32* (p < .001) | −30.16 (p = .191) | 0.68
LR | 0.84* | 0.30* (p < .0001) | −31.46 (p = .155) | 0.65
MM | 0.96* | 0.15* (p < .0001) | −41.97 (p = .038) | 0.38
NB | 0.63* | 0.29 (p = .015) | −28.65 (p = .509) | 0.64
We report the regression coefficients (slopes) for the perceived orientation of the test patch Ψ̂T in the second column. In columns 3 and 4, the maximum likelihood estimates of the punctate-to-total light ratio, π̂eq, and of the punctate light direction, Ψ̂P, are reported. For each observer, we tested the hypotheses that π̂eq = πeq and that Ψ̂P = ΨP, and we report exact p values for these tests when the values are larger than 0.001. With a Bonferroni correction for six tests, the significance level corresponding to an overall Type I error rate of 0.05 is 0.0083. Values whose corresponding p values fall below this cutoff are marked with an asterisk. The last column reports the discounting index DI (Equation 16).

We rejected the hypothesis that π̂eq = πeq for all observers but observer NB (the p values of the tests are reported in Table 1). We rejected the hypothesis that π̂eq = 0 for all observers (p < .001 in all cases). The latter result implies that observers are not simply matching the luminance values of the reference patch to that of the test patch. The former indicates that (with the exception of one observer) the observers’ estimates of the equivalent relative punctate intensity are in error. 
We next examined whether observers were accurately estimating the direction to the punctate light source, also by nested hypothesis tests: we nested the hypothesis that Ψ̂P = −21.44° (the true value) within a model in which Ψ̂P was free to vary. With the exception of observer EPB, the observers’ Ψ̂P estimates were not significantly different from the true value. All of these results are tabulated in Table 1. We have plotted the estimates of Ψ̂P for all observers in Figure 9. While all but one of the estimates fall near the true angle, ΨP = −21.44°, they are all on one side of the true value, suggesting that observers share a common bias in estimating the light direction. 
Figure 9
 
Estimates of perceived light directions, Ψ̂P. Each observer’s estimated perceived punctate light direction is shown. The angle between each red solid line and the z-axis represents Ψ̂P for that observer. Only for observer EPB is the perceived punctate light direction Ψ̂P statistically significantly different from the true value.
Geometric Discounting Indexes
In scenes containing a punctate light source, the observer may err in estimating the direction to the punctate light source. He or she may also err in estimating πeq. If, for example, the observer estimates π̂eq to be 0, then Γ(ϑ̂; π̂eq) is always 1 and the observer does not discount orientation at all. We define a geometric discounting index,  
(16)
that measures the match between the true value πeq and the observer’s estimate π̂eq. This index ranges from 0 (no discounting of orientation) to 1 (perfect discounting). The DI values for all six observers are reported in Table 1. This index ignores any errors in the observer’s estimate of the punctate light direction; together with the error in the estimated direction Ψ̂P, these two indices provide a measure of how accurately the observer is discounting orientation in forming lightness judgments. 
Discussion
We have shown how perceived orientation affects perceived albedo in a quantitative way. To do so, we asked observers to estimate both the orientation and lightness of gray test patches placed in a complex, 3D scene. We found that observers partially discount perceived surface orientation in estimating surface albedo. Our results are consistent with the hypothesis that observers are correctly discounting perceived orientation, but that, in this experiment, they made use of erroneous estimates of the direction to the punctate light source and the relative intensity of punctate and diffuse illumination. 
In previous studies, conflicting cues were used to induce changes in perception of the orientation of a surface as an observer shifted from monocular to binocular viewing. The typical conclusion drawn is that perceived orientation has little or no effect on perceived albedo (Epstein, 1961; Flock & Freedberg, 1970; Hochberg & Beck, 1954; Redding & Lester, 1980). In these experiments, changes in orientation are confounded with (a) monocular versus binocular viewing, (b) the kind of depth cues used in estimating orientation, and (c) the presence or absence of cue conflict. Any of these factors may be responsible for the observed lack of effect of orientation. We find, in contrast, that observers substantially discount orientation in forming estimates of surface albedo. Our results are in agreement with those of Gilchrist (1977, 1980). 
Previous experiments typically had only two distinct orientation conditions (usually horizontal and vertical), and it is therefore difficult to draw any conclusions about the agreement of their observers’ performance with the predictions of Equation 5 developed above (the Lambertian observer). We were able to estimate the effect of seven different orientations on the apparent albedo of two test patches with different albedos (DARK and LIGHT). We found good agreement between observers’ data and the family of functions Γ(Ψ̂T − Ψ̂P; π̂eq) that describes the Lambertian observer. 
We also note that these researchers did not separately measure perceived orientation. While it is plausible that observers perceived large changes in orientation in different conditions of their experiments, we note reports by Flock and Freedberg (1970) that some observers did not see the intended changes in orientation with shifts from monocular to binocular viewing. In our experiment, observers’ orientation estimations were close to veridical, and they exhibited moderate lightness constancy in response to changes in the orientation of the test patch. 
We derived estimates from the observers’ data of one component of the light direction, Ψ̂P, and of the equivalent relative punctate intensity, π̂eq, and tested whether these estimates deviated from the true values. We found that, for five out of six observers, there was no significant difference between Ψ̂P and the true value, ΨP. In contrast, we found that π̂eq was significantly smaller than the true value πeq, by a factor of two or more, for five observers, and smaller (but not significantly so) for the remaining observer. This result indicates that observers systematically underestimated the contribution of the punctate source to the light re-emitted by the test surface. 
We have repeated our analysis assuming that the observers’ internal model of reflection of light from matte surfaces could be other than Lambertian. We fitted the empirical data to a different form of the observer’s geometrical discounting function assuming the observers’ internal model of reflection from matte surfaces was the Oren-Nayar model (Nayar & Oren, 1995; Oren & Nayar, 1995). The Oren-Nayar model includes the Lambertian as a special case. We performed nested hypothesis tests for each observer with the null hypothesis corresponding to the Lambertian model and the alternative hypothesis corresponding to any non-Lambertian form of the Oren-Nayar model. We could not reject the Lambertian model for any observer. 
All of our stimuli were viewed binocularly. If viewed monocularly, then the stimuli would consist of a quadrilateral test surface against the constant gray background of the central cube. The quadrilateral would change shape from trial to trial, but it would remain embedded in the surround of one side of the central cube. Theories of lightness constancy that are framed in terms of local operations on single retinal images (lateral inhibition, edge contrast) cannot explain our results. 
We also found that we could derive an estimate of the punctate light source direction ΨP from observers’ data that was within 22° of the correct direction for five out of six observers. This result suggests that the visual system is effectively estimating information about the spatial organization of the illuminant and using it to arrive at estimates of surface albedo (see Maloney, 2002). 
For all but one observer, we could reject the hypothesis that π̂eq = πeq. Recall that EPeq = EP cos ϕP, so an error in estimating π̂eq could be the result of misestimating EP or cos ϕP or both. Because the estimates of the angle component Ψ̂P of the punctate light direction were not very different from the correct value (for five out of six of the observers), we can conjecture that the estimate of the other component, ϕ̂P, is also close to the correct value, and therefore that cos ϕ̂P is not very different from cos ϕP, implying that observers perceive the lighting in the scene to have a larger component of diffuse light than it does. It would be of great interest to determine what cues in the scene influence the estimates of illuminant properties such as punctate source direction and relative punctate intensity. 
Acknowledgments
This research was funded in part by Grant EY08266 from the National Institutes of Health. LTM was also supported by Grant RG0109/1999-B from the Human Frontiers Science Program. We thank Michael Landy for comments on earlier drafts. Commercial relationships: none. 
References
Arend, L., & Reeves, A. (1986). Simultaneous color constancy. Journal of the Optical Society of America A, 3, 1743–1751.
Bloj, M. G., Kersten, D., & Hurlbert, A. C. (1999). Perception of three-dimensional shape influences colour perception through mutual illumination. Nature, 402, 877–879.
Cornsweet, T. (1970). Visual perception. New York: Academic Press.
Epstein, W. (1961). Phenomenal orientation and perceived achromatic color. Journal of Psychology, 52, 51–53.
Flock, H. R., & Freedberg, E. (1970). Perceived angle of incidence and achromatic surface color. Perception & Psychophysics, 8, 251–256.
Gilchrist, A. L. (1977). Perceived lightness depends on perceived spatial arrangement. Science, 195, 185–187.
Gilchrist, A. L. (1980). When does perceived lightness depend on perceived spatial arrangement? Perception & Psychophysics, 28, 527–538.
Gilchrist, A., Kossyfidis, C., Bonato, F., Agostini, T., Cataliotti, J., Li, X. J., Spehar, B., Annan, V., & Economou, E. (1999). An anchoring theory of lightness perception. Psychological Review, 106, 795–834.
Hochberg, J. E., & Beck, J. (1954). Apparent spatial arrangements and perceived brightness. Journal of Experimental Psychology, 47, 263–266.
Kardos, L. (1934). Ding und Schatten: Eine experimentelle Untersuchung über die Grundlagen des Farbsehens [Object and shadow: An experimental investigation of the fundamentals of color vision]. In F. Schumann, E. R. Jaensch, & O. Kroh (Eds.), Zeitschrift für Psychologie und Physiologie der Sinnesorgane, Ergänzungsband 23. Germany: Verlag.
Knill, D. C., & Kersten, D. (1991). Apparent surface curvature affects lightness perception. Nature, 351, 228–230.
Larson, G. W., & Shakespeare, R. (1996). Rendering with Radiance: The art and science of lighting and visualization. San Francisco: Morgan Kaufmann.
Maloney, L. T. (2002). Illuminant estimation as cue combination. Journal of Vision, 2(6), 493–504, http://journalofvision.org/2/6/6/, doi:10.1167/2.6.6.
Mood, A. M., Graybill, F. A., & Boes, D. C. (1974). Introduction to the theory of statistics (3rd ed.). New York: McGraw-Hill.
Nayar, S. K., & Oren, M. (1995). Visual appearance of matte surfaces. Science, 267, 1153–1156.
Oren, M., & Nayar, S. K. (1995). Generalization of the Lambertian model and implications for machine vision. International Journal of Computer Vision, 14, 227–251.
Pessoa, L., Mingolla, E., & Arend, L. E. (1996). The perception of lightness in 3-D curved objects. Perception & Psychophysics, 58, 1293–1305.
Redding, G. M., & Lester, C. F. (1980). Achromatic color matching as a function of apparent test orientation, test and background luminance, and lightness or brightness instructions. Perception & Psychophysics, 27, 557–563.
Scheifler, R. W., & Gettys, J. (1996). X Window System: Core library and standards. Boston: Digital Press.
Schirillo, J. A., & Shevell, S. K. (1993). Lightness and brightness judgments of coplanar retinal noncontiguous surfaces. Journal of the Optical Society of America A, 10, 2442–2452.
Figure 2
A stereo image pair. The stimuli were computer-rendered 3D scenes presented binocularly in a Wheatstone stereoscope. All scenes contained a matte, gray central cube with an attached matte test surface, “hinged” to one side of the cube. The central cube was always displayed in the same orientation, with the same albedo, and in roughly the same location. From trial to trial, the orientation of the test patch, its albedo, or both could vary. Each scene also contained several additional objects, specular, matte, or both, that varied from trial to trial. To form a stereo pair, each scene was rendered twice from viewpoints corresponding to the positions of the observer’s eyes. The stereo pair is displayed once for uncrossed fusion (use the left-hand pair) and once for crossed fusion (use the right-hand pair).
Figure 3
Coordinate systems. A. Cartesian and spherical coordinates. The origin of the coordinate system was the center of the test surface. Cartesian coordinates are specified as (x, y, z) vectors. The z-axis lies along the line of sight from the observer to the origin. The y-axis is vertical and the x-axis horizontal, both in the fronto-parallel plane containing the origin. The positive x-axis points to the observer’s right, the positive y-axis upward, and the positive z-axis toward the observer. In the spherical coordinate system (Ψ, ϕ, r), the angle Ψ is the angle between the z-axis and the projection of a vector onto the xz-plane, the angle ϕ is the angle between the vector and the xz-plane, and the radius r is measured from the origin in centimeters. N is the unit normal to the surface; P is a vector in the direction of the punctate source. B. Spherical coordinates and the test surface. The viewpoint is from above, along the y-axis, shown by the symbol ⊙ pointing out of the page. The central cube is shown in outline. The possible orientations of the test patch (ΨT = −50°, −40°, −30°, 0°, 30°, 40°, 50°) are shown, as well as the direction to the projection of the punctate light source onto the xz-plane. The origin of the coordinate system is always at the center of the test patch; the observer is always at (x, y, z) = (0, 0, 70 cm), which is (Ψ, ϕ, r) = (0°, 0°, 70 cm) in spherical coordinates; and the punctate light source is always at (x, y, z) = (−55 cm, 40 cm, 140 cm), which is (Ψ, ϕ, r) = (−21.44°, 14.89°, 155.64 cm) in spherical coordinates. To keep the center of the test patch at the origin, the central cube is displaced slightly from trial to trial, depending on the orientation of the test patch being displayed.
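As a check on these conventions, a short conversion routine (a sketch written directly from the definitions in the caption above) reproduces the spherical coordinates given for the observer and for the punctate light source:

```python
import numpy as np

def cartesian_to_spherical(x, y, z):
    """Convert (x, y, z), in cm, to (psi, phi, r) as defined in Figure 3:
    psi is the angle between the z-axis and the projection of the vector onto
    the xz-plane, phi is the angle between the vector and the xz-plane, and
    r is the distance from the origin."""
    r = np.sqrt(x**2 + y**2 + z**2)
    psi = np.degrees(np.arctan2(x, z))   # signed angle within the xz-plane
    phi = np.degrees(np.arcsin(y / r))   # elevation out of the xz-plane
    return psi, phi, r

print(cartesian_to_spherical(0.0, 0.0, 70.0))      # observer: (0.0, 0.0, 70.0)
print(cartesian_to_spherical(-55.0, 40.0, 140.0))  # light: approx (-21.4, 14.9, 155.6)
```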
Figure 4
The experimental apparatus. Stimuli were displayed in a computer-controlled Wheatstone stereoscope. The left and right images of a stereo pair were displayed on the left and right monitors of the stereoscope. The observer viewed them by means of small mirrors placed in front of his or her eyes. In the fused image, the test surface appeared approximately 70 cm in front of the observer. This distance was also the optical distance to the screens of the two computer monitors, minimizing any mismatch between accommodation cues and other depth cues.
Figure 5
Orientation and lightness matching tasks. A. Orientation task. The observer was asked to adjust a monocular circle-and-stick gradient probe until the circle fell within the plane of the test surface and the stick was orthogonal to the test surface. The gradient probe was presented monocularly, to the right eye. B. Lightness matching. The observer was asked to select the reference surface on a lightness scale that matched the test surface in “surface material.” The order of reference surfaces on the lightness scale was re-randomized on every trial. The lightness scale was presented monocularly, to the right eye.
Figure 6
Results: Orientation settings. Results for six observers are shown. Each observer’s mean orientation settings Ψ̂T are plotted against the corresponding true orientations ΨT, using filled squares for DARK tests and empty squares for LIGHT ones. The blue solid line has unit slope; the settings of a veridical observer would fall on that line. The red dashed lines show the linear regression fit to the observer’s settings. The slope of the fit for each observer is given in the plots. All slopes are significantly different from 1 (with an overall Type I error rate of 0.05 and a Bonferroni correction for six tests).
Figure 7
Geometric discounting function. We plot Γ(ΨT − ΨP; π) for different values of ΨP (the direction of the projection of the punctate light source onto the xz-plane) and π (the relative punctate intensity). In both A and B, blue solid lines show the theoretical curves obtained from Equation 3 for the correct values in the experiment reported here. In contrast, if π = 0 (the light is purely diffuse), the geometric discounting function is the constant 1, shown by the black dashed line in B. Intuitively, a lightness-constant observer should assign a higher albedo to the same luminance as a surface is rotated away from the punctate light source; the function Γ(ΨT − ΨP; π) specifies how much higher the assigned albedo should be. Misestimation of the punctate light direction, ΨP, moves the curve left or right, as shown by the red lines in A. Misestimation of the relative punctate intensity, π, changes the curvature of the function, as shown by the red dashed lines in B.
Figure 8
Results. The observer’s geometric discounting function Γ is plotted versus perceived test surface orientation, Ψ̂T. Results for six observers are shown. If the observer did not correct for the perceived orientation of the surface, the resulting points would lie on a horizontal line. A black horizontal line marks the mean of each observer’s geometric discounting function. The red curve is a maximum likelihood fit of the Lambertian geometric discounting function to the observer’s performance, allowing for the possibility that the observer’s estimates of the equivalent relative punctate intensity, π̂eq, and of the direction to the illuminant, Ψ̂P, are not correct. The blue curve is a plot of the geometric discounting function with the correct value of π for the experimental conditions but with the observer’s estimates substituted for the remaining parameters. Small blue upward arrows mark the correct light direction, ΨP, and downward red arrows mark the observers’ estimates, Ψ̂P.
Figure 9
Estimates of perceived light direction, Ψ̂P. Each observer’s estimated perceived punctate light direction is shown. The angle between each red solid line and the z-axis represents Ψ̂P for that observer. Only for observer MM is the perceived punctate light direction Ψ̂P statistically significantly different from the true value.
Table 1
Experiment 1. Estimates for Each Observer

Observer           Slope    π̂eq                  Ψ̂P                      Discounting index
Veridical values   1        0.62                 −21.44°                 1
CBC                0.85*    0.20* (p < .0001)    −29.51° (p = .457)      0.48
EPB                0.89*    0.15* (p < .0001)    −57.86°* (p = .002)     0.38
JJG                0.84*    0.32* (p < .001)     −30.16° (p = .191)      0.68
LR                 0.84*    0.30* (p < .0001)    −31.46° (p = .155)      0.65
MM                 0.96*    0.15* (p < .0001)    −41.97° (p = .038)      0.38
NB                 0.63*    0.29 (p = .015)      −28.65° (p = .509)      0.64

The second column reports the regression coefficient (slope) for the perceived orientation of the test patch, Ψ̂T. Columns 3 and 4 report maximum likelihood estimates of the punctate-to-total light ratio, π̂eq, and the punctate light direction, Ψ̂P. For each observer, we tested the hypothesis that each estimate equals the true (veridical) value and report exact p-values for these tests when they are larger than 0.001. With a Bonferroni correction for six tests, the significance level corresponding to an overall Type I error rate of 0.05 is 0.0083. Values whose p-values fall below this cutoff are marked with an asterisk. The last column reports the discounting index DI (Equation 16).