Both non-Lambertian shading, specularities in particular, and occluding contours have ill-matched binocular disparities. For example, the disparities of specularities depend not only on a surface's position but also on its curvature. Shading and contour disparities do not, in general, specify a point on the surface. I investigated how shading and contours contribute to perceived shape in stereoscopic viewing. Observers adjusted surface attitude probes on a globular object. In Experiment 1, the object was either Lambertian or Lambertian with added specularities. In Experiment 2, I removed the Lambertian part of the shading. In Experiment 3, I reduced the disparity of the contour to zero, and in Experiment 4, I removed both cues. There was little effect of shading condition in Experiment 1. Removing the Lambertian shading in Experiment 2 rendered the sign of the surface ambiguous (convex/concave), although all surfaces were perceived as curved. Results in Experiment 3 were similar to those in Experiment 1. Removing both cues in Experiment 4 made all surfaces appear flat for three observers and convex for one observer. I conclude that in the absence of Lambertian shading, observers have categorically different perceptions of the surface depending on whether disparate specular highlights and disparate contours are present or not.

^{1}of specular highlights in particular, and the disparities of the occluding contour of an object as potentially useful cues for three-dimensional shape recovery. Occluding contours, for example, are usually clearly identifiable with sharp boundaries in the two half-images. However, both disparate shading and disparate contours have their limitations and are in general ambiguous cues to the three-dimensional shape of the object. That is, these disparities do not correspond to the position the object occupies in space but also depend on other factors, such as the surface's curvature. Conventional shape-from-disparity algorithms therefore do not apply and may in fact give erroneous results. Despite the vast amount of knowledge on stereoscopic shape perception of objects defined by texture, our understanding of how the visual system extracts shape information from smoothly shaded and glossy objects in stereoscopic viewing is less advanced. In this paper, I investigate the separate and combined effects of Lambertian shading, specular highlights, and the occluding contour in stereoscopic shape perception.

*R*^{2}'s were low, the correlation of the slant components of the gauge figure settings was also improved when the objects were viewed in stereo. Todd, Norman, Koenderink, and Kappers (1997) also reported better performance on shape tasks for stereoscopic viewing of specular objects, as well as a scaling effect on the perceived shape when specular highlights were present. Bülthoff and Mallot (1988) found that observers perceived more depth when disparate shading was used than when identical shading was presented to both eyes.

*L*_{Total}) is the linear summation of an ambient (*L*_{Ambient}), Lambertian (*L*_{Lambertian}), and specular component (*L*_{Specular}). In these experiments, the ambient component is set to 15% of the monitor's maximum radiance. The Lambertian component is the dot product of the surface normal **n** and the illumination direction **s**, multiplied by an attenuation factor *g*. Here, *g* is set to 0.8; that is, 80% of the monitor's maximum radiance. The specular component is the dot product of the normalized sum of the viewing direction **v** and the illumination direction **s**, and the surface normal **n**. The specular exponent *e* is set to 125. For the matte objects, the specular attenuation factor *h* is set to zero; for the glossy objects, *h* is set to 1, that is, 100% of the monitor's maximum radiance. I simulated a collimated light source. The light came from the upper-right region behind the observer, and its color was white. The illumination direction was the same relative to the observer for all conditions. I used three different orientations of the object: the object was rotated 0, 35, or 70 degrees anticlockwise around the cyclopean line of sight. The object measured about six degrees of visual angle in each half-image on the computer screen. The background was gray.
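With these parameter values, the lighting model can be sketched as a small Blinn-Phong-style function. This is a minimal illustration under my own naming and vector conventions, not the author's rendering code:

```python
import numpy as np

def shade(n, s, v, ambient=0.15, g=0.8, h=1.0, e=125):
    """Radiance at one surface point under the paper's lighting model.

    n, s, v: unit surface normal, illumination direction, viewing direction.
    ambient: ambient component (fraction of maximum monitor radiance).
    g: Lambertian attenuation; h: specular attenuation (0 = matte, 1 = glossy).
    e: specular exponent (125 gives a narrow highlight).
    """
    n, s, v = (np.asarray(a, float) for a in (n, s, v))
    lambertian = g * max(np.dot(n, s), 0.0)         # Lambert's cosine law
    half = (v + s) / np.linalg.norm(v + s)          # normalized sum of v and s
    specular = h * max(np.dot(n, half), 0.0) ** e   # highlight term
    return ambient + lambertian + specular
```

With *h* = 0 the highlight vanishes, so a patch facing the light returns 0.15 + 0.8 = 0.95 of the maximum radiance; any values above 1 would have to be clipped to the monitor's range.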

*δz*/*δx*, *δz*/*δy*}. From a set of depth gradients, it is possible to calculate a best-fitting surface. I describe the algorithm used, along with a numerical example, in the Appendix.

*screen unit* and the right edge +1 *screen unit*. Depending on the experimental condition, the maximum horizontal distance between gauge figure locations was 1.12, 1.10, and 1.0 screen units for the 0, 35, and 70 degree rotation conditions, respectively. The maximum vertical distance between gauge figure locations was 0.96, 0.98, and 1.10 screen units. These horizontal and vertical measures of the depicted objects are needed to form an idea of the width (or height) to depth ratio of the reconstructed surface.

*x, y*) of the gauge figure in the image plane as additional predictor variables.

*z* and *z*′ are the depth values of the first and the second surface, respectively. The horizontal and vertical coordinates in the image plane are denoted with *x* and *y*. A translation in depth is mediated by the constant *d*; *b* and *c* are the shearing parameters; *a* scales the surface in depth.

*R*^{2} values are invariant for translation and scaling in depth in the straight regression model; in the affine regression model, *R*^{2} values are, in addition to translation and scaling, also invariant for shearing transformations.
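The difference between the two models can be illustrated on synthetic data (the surface, data, and function names below are mine, not the author's): a second surface generated by scaling, shearing, and translating a first surface is fit perfectly by the affine model, while the straight model cannot absorb the shear.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
z = np.sin(2 * x) * np.cos(2 * y)           # depth values of the first surface
z2 = 1.5 * z + 0.3 * x - 0.2 * y + 0.1      # second surface: scaled, sheared, translated

def r_squared(target, design):
    """R^2 of a least-squares fit of the design-matrix columns to `target`."""
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ coef
    return 1 - np.sum(resid**2) / np.sum((target - target.mean()) ** 2)

ones = np.ones_like(z)
straight = np.column_stack([z, ones])        # z' = az + d
affine = np.column_stack([z, x, y, ones])    # z' = az + bx + cy + d

print(r_squared(z2, straight))  # < 1: the shear shows up as residual error
print(r_squared(z2, affine))    # 1 up to rounding: the shear is absorbed
```

This is why the affine *R*^{2} can only be equal to or higher than the straight *R*^{2} on the same data.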

*R*^{2}'s between sessions, when averaged over all combinations of two sessions and over all six conditions, ranged for straight regression from 0.71 to 0.92 for different observers; the mean *R*^{2} across observers was 0.82. The mean standard deviation of the scaling factors between sessions across observers, when settings were averaged over all conditions, was about 18%. This standard deviation is an indication of the amount of scaling between sessions; the mean scaling factor itself is always close to 1 because, for example, if session 1 is scaled 0.9 with respect to session 2, then session 2 is scaled 1/0.9 with respect to session 1, so the average is close to 1. With the affine regression model, average *R*^{2} values improved to a range between 0.89 and 0.96, with the mean *R*^{2} across observers being 0.92. The standard deviation of the scaling factors between sessions was again about 18%, with a standard deviation in the shearing factor of {0.05, 0.04}. Taking the arctangent of the length of a shearing vector converts the vector to an angle, in this case about 3.6 degrees. Since most of the differences seem to be explained by linear transformations, I averaged the depth values over all sessions for each observer in the remainder of these analyses. There was considerable depth in all reconstructed surfaces. The perceived depth, as indicated by the difference between the highest and lowest depth values, ranged from 0.17 screen units for observer A3 in the “Matte/0 degree” condition to 0.39 screen units for observer A4 in the “Matte/35 degree” condition.

*R*^{2} values for comparisons between observers are lower than between sessions of the same observer. They ranged from 0.45 to 0.85 between different observers for the straight model when averaged over all conditions. The mean *R*^{2} was 0.60. The standard deviation of the scaling factors was 31%. For the affine regression model, *R*^{2} values ranged between 0.57 and 0.89, with the mean being 0.75. The standard deviation of the scaling factors was about 24% and the shearing vector about {0.07, 0.02}. I did not average probe settings over different observers in subsequent analyses in this section.

*R*^{2}'s for the correlation between the depth values for different BRDFs ranged from 0.90 to 0.99. With the affine model, *R*^{2} values ranged between 0.95 and 0.99. Results, averaged over observers, along with the best estimates of the model parameters, are summarized in Table 1. I tested the significance of the best parameter estimates with one-sample *t*-tests. Scaling parameters that are significantly different from 1, and shearing parameters significantly different from 0, are indicated with asterisks in Table 1. I also calculated the significance of the improvement in *R*^{2}'s for the affine regression model over the straight model. Significant improvements in *R*^{2}'s are also indicated in Table 1.

Table 1. Straight model: *z*′ = *az* + *d*; affine model: *z*′ = *az* + *bx* + *cy* + *d*.

| Comparison | Condition | R² (straight) | a (straight) | R² (affine) | a (affine) | b | c |
|---|---|---|---|---|---|---|---|
| Matte–glossy | 0 | 0.93 (0.01) | 0.97 (0.13) | 0.97 (0.02) | 1.02 (0.08) | 0.02 (0.03) | 0.02 (0.03) |
| | 35 | 0.93 (0.02) | 0.89 (0.13) | 0.97 (0.01) | 0.99 (0.13) | 0.03 (0.01) | 0.01 (0.02) |
| | 70 | 0.97 (0.02) | 1.01 (0.06) | 0.98 (0.01) | 0.97 (0.09) | −0.01 (0.03) | −0.00 (0.01) |
| 0–35 | Matte | 0.79 (0.15) | 0.93 (0.09) | 0.95 (0.04) | 0.88** (0.03) | −0.06* (0.03) | −0.05 (0.03) |
| | Glossy | 0.79 (0.06) | 0.85 (0.05) | 0.94 (0.03) | 0.85* (0.06) | −0.04 (0.03) | −0.06* (0.01) |
| 35–70 | Matte | 0.86 (0.06) | 0.97 (0.20) | 0.90 (0.07) | 0.95 (0.16) | −0.02 (0.02) | 0.02 (0.04) |
| | Glossy | 0.83 (0.07) | 1.06 (0.20) | 0.92 (0.07) | 0.95 (0.17) | −0.05 (0.04) | 0.01 (0.04) |
| 70–0 | Matte | 0.67 (0.23) | 0.76 (0.19) | 0.78 (0.18) | 0.94 (0.20) | 0.06** (0.01) | 0.03 (0.03) |
| | Glossy | 0.57 (0.16) | 0.68* (0.13) | 0.79 (0.15) | 1.01 (0.19) | 0.10* (0.06) | 0.05 (0.04) |

*R*^{2}'s for the correlations between the depth values for different rotation angles ranged from 0.38 to 0.93. With the affine model, *R*^{2}'s ranged between 0.58 and 0.98. Results are again summarized in Table 1. In these comparisons, *R*^{2}'s are lower than in the previous analyses. This is related to a shift in the exact position of the hyperbolic area (see Figure 5B).

*R*^{2}'s for straight regression between sessions, when averaged over all conditions, ranged between 0.80 and 0.88, with the mean *R*^{2} across observers being 0.81. The standard deviation of the scaling factors, when averaged over all conditions and observers, was about 25%. With the affine regression model, *R*^{2}'s improved to a range between 0.86 and 0.93, with the mean *R*^{2} across observers being 0.88. The standard deviation in the scaling factors was about 20%, and the standard deviation in the shearing vector was {0.03, 0.03}. Since most of the differences between sessions are explained by linear transformations, I averaged the observer's settings over all sessions for each observer in the remainder of the analyses below. The minimum depth range (0.19 screen units) in the surface reconstructions was for observer C4 in the “Matte/0 degree” condition. Observer C2 had the highest depth range (0.29 screen units) in the “Glossy/0 degree” condition.

*R*^{2}'s between observers, when averaged over all conditions, ranged between 0.45 and 0.85, with the mean being 0.60. The standard deviation of the scaling factor between observers was about 31%. With the affine regression model, *R*^{2}'s improved to a range between 0.57 and 0.89, with the mean being 0.75. The standard deviation in the scaling factor was about 25%, and the standard deviation in the shearing vector was {0.07, 0.02}. I did not average the settings over observers in the remainder of the analyses in this section.

*R*^{2}'s for a straight regression model ranged between 0.84 and 0.99 for different observers, with the mean being 0.95. With the affine regression model, *R*^{2}'s improved to a range between 0.95 and 0.99, with the mean being 0.97.

*R*^{2}'s averaged over observers, as well as the best-estimate model parameters for a straight and an affine regression model, are summarized in Table 2. I tested the best-fit model parameters for their significance with one-sample *t*-tests. Significance levels are indicated in Table 2 with asterisks. I also calculated the significance of the improvement in *R*^{2} for the affine model over the straight model. Significant improvements of *R*^{2}'s are indicated with asterisks in Table 2.

Table 2. Straight model: *z*′ = *az* + *d*; affine model: *z*′ = *az* + *bx* + *cy* + *d*.

| Comparison | Condition | R² (straight) | a (straight) | R² (affine) | a (affine) | b | c |
|---|---|---|---|---|---|---|---|
| Matte–glossy | 0 | 0.97 (0.01) | 1.08 (0.10) | 0.97 (0.01) | 1.06 (0.08) | 0.00 (0.01) | 0.01 (0.01) |
| | 35 | 0.93 (0.06) | 0.93 (0.15) | 0.97 (0.02) | 0.99 (0.07) | 0.01 (0.04) | 0.00 (0.03) |
| | 70 | 0.94 (0.04) | 0.97 (0.11) | 0.97 (0.01) | 0.98 (0.07) | −0.01 (0.02) | 0.01 (0.01) |
| 0–35 | Matte | 0.78 (0.20) | 0.88 (0.09) | 0.94 (0.02) | 0.86** (0.04) | −0.03 (0.07) | −0.03 (0.06) |
| | Glossy | 0.85 (0.09) | 0.81* (0.08) | 0.93 (0.02) | 0.81** (0.06) | −0.02 (0.03) | −0.03 (0.03) |
| 35–70 | Matte | 0.83 (0.03) | 0.88 (0.14) | 0.87 (0.02) | 0.83* (0.11) | −0.01 (0.04) | 0.00 (0.02) |
| | Glossy | 0.81 (0.11) | 0.90 (0.11) | 0.89 (0.01) | 0.85* (0.06) | −0.03 (0.05) | 0.01 (0.02) |
| 70–0 | Matte | 0.61 (0.24) | 0.84 (0.36) | 0.74 (0.10) | 1.00 (0.15) | 0.02 (0.09) | 0.03 (0.05) |
| | Glossy | 0.62 (0.26) | 0.93 (0.39) | 0.73 (0.11) | 1.06 (0.18) | 0.04 (0.06) | 0.03 (0.04) |

*R*^{2}'s for different comparisons ranged between 0.24 and 0.92, with the mean being 0.75. With the affine regression model, *R*^{2}'s improved to a range between 0.58 and 0.95, with the mean being 0.85. *R*^{2}'s, as well as the best-estimate model parameters for a straight and an affine regression model, are summarized in Table 2 along with their significance levels.

*R*^{2}'s would be very low and non-informative because the surfaces are mostly flat.

*R*^{2}'s. The non-linear differences between observers, as reflected in the lower *R*^{2}'s, concern a shift in the perceived position of the trough and peaks on the object. There is some scaling between matte and glossy conditions, but its size is not surprising given the differences between surface solutions from different sessions. Apparently, in these data it does not matter much for the perceived depth of the surface whether a specular highlight is present or not. There is somewhat more scaling between different rotation conditions. However, the latter scaling factors also show a larger variance and lower *R*^{2}'s between observers, so the scaling factors do not reach significance in most cases. The same can be said for the shearing parameters: there is virtually no shearing between matte and glossy conditions and somewhat more between different rotations, and the latter shearing factors are again associated with higher variances between observers. Although I observed some scaling and shearing effects, these effects are small compared to the variance between sessions. There is also no obvious interaction between rotation angle and the BRDF. The position in depth of the specular highlight apparently has no major effect in this data set.

*R*^{2}; *R*^{2}'s did not increase significantly for regressions between matte and glossy conditions. Note that in those cases *R*^{2}'s are already in the .90's for the straight model. The lower *R*^{2}'s between different rotations are not surprising. I have observed similar non-linear behavior in earlier studies (Nefs et al., 2005).

^{2}to create a surface from a set of depth gradients at specified positions in the image. There are several possible variations on this method, but the principle is the same in all of them. Let us have a set of {*x, y*} coordinates for the probe positions in a regular triangular tiling pattern as in Figure A1. The set of coordinates is called the set of *vertices*. Each triangle of three neighboring vertices is called a *face*, and a straight connection between two neighboring vertices in a face is called an *edge*. I use a triangular tiling pattern here, but quads, hexagons, or even irregular Voronoi tiling patterns would work just as well. I assigned a sequential number to each vertex and created a list of edges as pairs of vertex numbers. I also know the surface depth gradients {*δz*/*δx*, *δz*/*δy*} at all vertices (because I have measured them) and put them in the same order as the vertex list. The depth gradients used here, and the depth map that is calculated from these gradients, are shown in Figure A1.

**A** to subtract the (as yet unknown) depth values for the two vertices of an edge. **A** has to address all edges once:

*i* and *j*) of an edge. I denote the depth difference as *δz*_{i,j}. Start by calculating the mean of the gradients (*γ*_{1} and *γ*_{2}) at the two vertices. Then, take the dot product of the mean gradient and the distance in the *x*–*y* plane between the two vertices, as in Equation A2. I do this for all edges and indicate this matrix as **G**.

**A**, namely one that is filled entirely with 1's; at the other side of the equation I add a zero. This last addition ensures that the sum of all depth values of the surface is zero.

**A**. Since **A** is not a square matrix, we cannot take the inverse but must take the pseudo-inverse of **A**. The pseudo-inverse solves the set of equations according to a least-squares principle. It is worth mentioning that this method always comes up with a solution, even if you put in nonsense gradients.

^{2}I would like to stress that I did not invent this method. Similar methods have been used much earlier (e.g., Koenderink et al., 1992), but the algorithm has never been published in a clear manner in the psychophysical literature before. I present it here merely as a tutorial service for the interested reader.

*Psychological Review*, 101, 414–445.

*Biological Cybernetics*, 72, 279–293.

*Nature*, 343, 165–168.

*Philosophical Transactions of the Royal Society of London B: Biological Sciences*, 331, 237–252.

*Computational models of visual processing* (pp. 305–330). Cambridge, MA: The MIT Press.

*Journal of the Optical Society of America A, Optics and Image Science*, 5, 1749–1758.

*Perception & Psychophysics*, 63, 1038–1047.

*Perception & Psychophysics*, 54, 145–156.

*Journal of Vision*, 4(9):10, 798–820, http://journalofvision.org/4/9/10/, doi:10.1167/4.9.10.

*Perception*, 22, 887–897.

*Perception*, 23, 1335–1337.

*Ishihara's tests for colour blindness*, 38.

*Perception & Psychophysics*, 52, 487–496.

*Journal of the Optical Society of America*, 50, 838–844.

*Biological Cybernetics*, 53, 137–151.

*Perception*, 34, 275–287.

*Acta Psychologica*, 121, 297–316.

*Perception & Psychophysics*, 64, 1145–1159.

*Psychological Science*, 15, 565–570.

*Perception & Psychophysics*, 57, 629–636.

*International Journal of Computer Vision*, 24, 105–124.

Utrecht, The Netherlands: Laméris Instrumenten BV.

*Journal of Experimental Psychology: Human Perception and Performance*, 9, 583–595.

*Perception*, 26, 807–822.

*Psychological Review*, 109, 91–115.

*Perception*, 35, 145–155.

*Philosophical Transactions of the Royal Society of London*, 142, 1–17.

*Image and Vision Computing*, 7, 38–42.