Research Article | January 2007

The effect of viewpoint on perceived visual roughness

Yun-Xian Ho, Laurence T. Maloney, Michael S. Landy

Journal of Vision January 2007, Vol. 7, 1. doi:10.1167/7.1.1

Citation: Yun-Xian Ho, Laurence T. Maloney, Michael S. Landy; The effect of viewpoint on perceived visual roughness. Journal of Vision 2007;7(1):1. https://doi.org/10.1167/7.1.1.
Abstract

In previous work, we examined how the apparent roughness of a textured surface changed with direction of illumination. We found that observers exhibited systematic failures of roughness constancy across illumination conditions for triangular-faceted surfaces where physical roughness was defined as the variance of facet heights. These failures could be due, in part, to cues in the scene that confound changes in surface roughness with changes in illumination. These cues include the following: (1) the proportion of the surface in shadow, (2) mean luminance of the nonshadowed portion, (3) the standard deviation of the luminance of the nonshadowed portion, and (4) texture contrast. If the visual system relied on such “pseudocues” to roughness, then it would systematically misestimate surface roughness with changes in illumination much as our observers did despite the availability of depth cues such as binocular disparity. Here, we investigate observers' judgments of roughness when illumination direction and surface orientation are fixed and the observers' viewpoint with respect to the surface changes. We find a similar pattern of results. Observers exhibited patterned failures of roughness constancy with change in viewpoint, and an appreciable part of their failures could be accounted for by the same pseudocues. While the human visual system exhibits some degree of roughness constancy, our results lead to the conclusion that it does not always select the correct cues for a given visual task.

Introduction
Consider the lemon in Figure 1, photographed under different illuminants (Figure 1a) and from different viewpoints (Figure 1b). The image of the lemon changes markedly as we move the light in the scene and also as we change the camera position. Part of the latter change is due to the three-dimensional (3D) structure of the lemon's skin, its 3D texture. The image of a 3D texture is affected by the interaction of the illuminant and local height variations in the texture. The result is a complex image texture generated by orientation differences between local surface regions, occlusions of one region by a second, and interreflections between regions. The projected image of a small patch of the lemon's surface can vary dramatically with changes in illumination and viewpoint. Yet, our perception of surface material typically remains fairly constant: Despite vast changes in the image of the lemon skin, we do not attribute these changes to physical changes of the lemon itself. 
Figure 1
 
A lemon photographed under three different illumination conditions (a) and from three different viewing angles (b). Images are from the Amsterdam Library of Images (Geusebroek, Burghouts, & Smeulders, 2005).
In a previous study (Ho, Landy, & Maloney, 2006), we tested roughness constancy across illumination conditions for surfaces composed of many similar elements. We chose to use 3D surfaces whose elements were large enough to resolve. As a consequence, the observer could estimate aggregate or statistical measures of the 3D shape, size, and distribution of elements on such “mesoscale” surfaces directly from cues such as binocular disparity and could also make use of texture cues present in the 2D images available to the two retinas. We varied the variance of the heights of the facets that comprised the simulated surface and asked observers to match roughness across illumination conditions. 
We found large systematic failures of constancy with changes in the direction of illumination (Ho et al., 2006). In these experiments, observers compared the perceived roughness of computer-rendered 3D textured surfaces illuminated by a distant punctate light source. The light source could be at one of three different elevations above the surface (50°, 60°, and 70°; azimuth was fixed). A surface was displayed frontoparallel either in isolation or with other objects present in the scene (e.g., glossy spheres) that provided more cues to the illuminant position. Observers consistently judged a surface to be rougher as it was illuminated from a more grazing angle even when more cues to the illuminant position were provided. This was particularly striking because observers should have been able to make the judgment correctly using binocular disparity and other shape cues. 
Others (e.g., Knill, 1990) have proposed computations one might use to determine surface relief of a 3D texture independently of the elevation of the illuminant. However, our psychophysical data suggested that the human visual system sometimes confounds these two variables. 
We also found that when illuminant elevation was varied, observers' roughness judgments correlated well with changes in four illuminant-variant measures based on 2D texture: the proportion of shadows, the mean luminance of nonshadowed facets, the standard deviation of the luminance of nonshadowed facets, and a “texture contrast” measure described by Pont and Koenderink (2005) that we will define below. These measures varied systematically with changes in both physical roughness and illuminant position. Thus, these measures confounded differences in physical roughness with changes in illuminant position. Under conditions of fixed illumination, these measures are valid visual cues to surface roughness, but under the conditions of our experiment, they were not. The failure of roughness constancy we found suggests that observers, in judging mesoscale textures, made use of these “pseudocues” rather than relying solely on whatever illuminant-invariant cues (e.g., binocular disparity) were present in the stimuli, and as a result, they systematically misestimated roughness. 
Belhumeur, Kriegman, and Yuille (1999) showed that the image of a 3D object with Lambertian reflectance is ambiguous: The same image would result from a general class of linear distortions of object shape (including stretches along the line of sight, i.e., increases in depth) when paired with appropriate changes in surface albedo and light source positions. The failures of roughness constancy found by Ho et al. (2006) may be due, in part, to this bas-relief ambiguity. However, we emphasize that these subjects viewed stimuli binocularly, and thus, the availability of binocular disparity cues should have decreased or eliminated bas-relief ambiguity. 
In this study, we used the same mesoscale textures but varied viewing angle while keeping the illuminant position fixed relative to the surface, as if the observer moved to different viewpoints within a static scene. As the observer moved away from a position directly in front of a surface, the image changed due to the introduction of self-occlusions and foreshortening. We might expect failures of roughness constancy if observers continued to rely on the pseudocues identified in the previous study as these cues also changed with viewpoint. 
A number of computational methods have been developed to classify real-world materials photographed under different conditions. These methods, unlike methods from earlier studies, consider the 3D effects of surfaces that are not flat. Cula and Dana (2001) and Leung and Malik (2001) were among the first to use linear filters and a machine learning algorithm to train a system to classify a set of textures under a variety of different illumination and viewing conditions. Schmid (2001) described a similar model using rotation-invariant filters that can account for anisotropic textures. Varma and Zisserman (2002) compared the performance of these three models and a rotation-invariant model based on the maximum filter response across all orientations for each filter type. Again, the intent was to develop a texture-classification algorithm invariant to changes in illumination and viewing conditions. All four models achieved classification rates above 90% over a set of images of 61 grayscale textures from the Columbia-Utrecht (CURet) database, which includes images with variation in 3D texture as well as albedo (Dana, van Ginneken, Nayar, & Koenderink, 1999). Other linear classifiers have been applied to a smaller set of real 3D textures with uniform albedo under known illuminant position varying in azimuth (e.g., Chantler, 1995; Chantler & McGunnigle, 1995) and under unknown illuminant position, resulting in 98% classification accuracy (Penirschke, Chantler, & Petrou, 2002). Unlike the rotation-invariant texture-classification models discussed earlier, these models consider the effect of the rotation of the illuminant direction itself. However, classification rates were reported only for changes in illuminant azimuth. More recently, a 60–80% success rate for classification of textures varying in illuminant elevation was reported for a computational model applied to 20 PhoTex database textures (http://www.macs.hw.ac.uk/texturelab/database/Photex/) of uniform albedo and shallow relief (Drbohlav & Chantler, 2005). 
It is clear that these machine learning algorithms do a good job of classifying textures. However, it is unknown how well human observers perform at such a classification task under similar conditions. Furthermore, these algorithms were intended for material classification (Is this a wool sweater or a plaster wall?), potentially a very different task from roughness estimation (Is this rough or matted-down wool?). 
Recently, there has been a growing number of studies concerning human perception of surface material properties, in particular the perception of surface gloss. Nishida and Shinya (1998) suggested that the perception of gloss depends on the ability to recover surface reflectance properties from image-based information like the luminance distribution (cf. Fleming, Dror, & Adelson, 2003). The perceptual scaling of gloss has been examined under fixed (Pellacini, Ferwerda, & Greenberg, 2000) and varying illumination conditions (Obein, Knoblauch, & Viénot, 2004). Perception of surface glossiness, unlike roughness, appears to be independent of the direction of illumination (Obein et al., 2004). To our knowledge, there have been no studies that have investigated the effect of viewpoint change on perceived glossiness or perceived roughness. Here, we investigate the effect of viewpoint on perceived roughness for mesoscale, 3D textures. 
Methods
Stimuli
Coordinate systems
We used a Cartesian coordinate system (x, y, z) to define our 3D surfaces (Figure 2). The origin was in the center of the plane on which the stimulus was presented (the stimulus plane). The z-axis was normal to the stimulus plane. It was identical to the viewing direction when the surface was viewed frontoparallel. The x-axis was horizontal and the y-axis was vertical in the stimulus plane. We described the position of the observer and the illuminant as vectors from the origin using a spherical coordinate system (ψ, ϕ, d). The punctate illuminant was located at position (ψ p, ϕ p, d p), and the observer's viewpoint position was (ψ v, ϕ v, d v). Both the viewpoint and illuminant were always in the xz-plane. We used a slightly nonstandard coordinate system by fixing the azimuth ψ at 180°, thus defining elevation ϕ as the angle between the vector and the negative x-axis. Thus, elevation ranged from 0° to 180°.  
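To make this geometry concrete, the following short Python sketch (ours, for illustration; the experimental software itself was written in C, as described under Software) converts the paper's spherical coordinates to Cartesian coordinates.

```python
import numpy as np

def sphere_to_cartesian(phi_deg, d):
    """Convert the paper's spherical coordinates to Cartesian (x, y, z).

    Azimuth is fixed at 180 deg, so positions lie in the xz-plane. Elevation
    phi is measured from the negative x-axis, so phi = 90 deg places the point
    on the positive z-axis (the frontoparallel viewing direction).
    """
    phi = np.radians(phi_deg)
    return np.array([-d * np.cos(phi), 0.0, d * np.sin(phi)])

# Example: the fixed punctate light source at (180 deg, 60 deg, 80 cm).
print(sphere_to_cartesian(60, 80))  # approximately [-40, 0, 69.3]
```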
Figure 2
 
Coordinate systems. A Cartesian coordinate system was used to define surface patches. The origin of the coordinate system was the center of the surface patch. The z-axis was normal to the stimulus plane. The x-axis was horizontal, and the y-axis was vertical. For the positions of the observer and light source, spherical coordinates (ψ, ϕ, d) were used (where ψ was azimuth, ϕ was elevation, and d was distance in centimeters). Because observers and light sources always lie on the horizontal midline, ψ was fixed at 180°, and ϕ ranged from 0° (the negative x-axis) to 180° (the positive x-axis). The observer was at location (ψ v, ϕ v, 70), and the light source was at location (ψ p, ϕ p, 80).
Surface patch
Figure 3 shows how the 3D textured surface patches were constructed. First, an N × N grid of base points was generated in the stimulus plane with width w. The base point coordinates were denoted $(X_{ij}, Y_{ij}, Z_{ij})$, $0 \le i, j \le N - 1$. Let $(U_{ij}^x, U_{ij}^y, U_{ij}^z)$ be independent, uniformly distributed random variables on the interval [−1, 1]. The base point coordinates were defined as

$$X_{ij} = -\frac{w}{2} + \left(j + n_{xy} U_{ij}^x\right)\delta, \qquad Y_{ij} = -\frac{w}{2} + \left(i + n_{xy} U_{ij}^y\right)\delta, \qquad Z_{ij} = r\,U_{ij}^z, \tag{1}$$

where w = 20 cm, $\delta = w/(N-1)$, and the grid had N = 20 points on a side. Setting $n_{xy} = 0.49$ ensured that no facets would overlap or intersect one another in the jittered base grid. The amount of jitter in depth could take on one of eight distinct values, $r = k^2/16$ cm, depending on the roughness level k = 1, 2, …, 8. Thus, the $Z_{ij}$ coordinates were always 4 cm or less in absolute value. The standard deviation of the $Z_{ij}$ coordinates in a surface with roughness level r was $\sqrt{3}\,r/3$ cm. For a flat surface, r = 0, and all $Z_{ij} = 0$. 
Figure 3
 
Rough-surface model. (a) A 20 × 20 cm regular rectangular grid of 20 × 20 base points was constructed. (b) The base points of the grid were jittered randomly in the xy-plane. (c) Uniform random deviations were added to the base points in the z direction. (d) The surface that was composed of 3D jittered base points was then divided into triangular facets by splitting each 2 × 2 set of base points along a randomly chosen diagonal.
Note that successive roughness levels are not spaced linearly in standard deviation: r = k²/16 increases quadratically with level k. In initial testing, we found that linear spacing led to stimuli that were difficult to discriminate at high roughness levels, and we consequently adopted the spacing used here. The 3D surface was constructed from triangular facets, as sketched below. Each set of four neighboring grid points (i, j), (i + 1, j), (i, j + 1), and (i + 1, j + 1) was split into two triangular facets by randomly selecting one of the two diagonals to be connected by an edge. For each value of roughness and viewing angle, four random surfaces were generated to minimize the possibility of observers using intrinsic patterns in the distribution of facets as cues to roughness. The surface patch was then centrally embedded in a smooth 30 × 30 cm surface. To preclude cues that may have resulted from an abrupt change in depth at the edges of the surface patch and the wall, we multiplied a 5-cm border around the surface patch by a raised cosine function to smooth the edges of the surface in depth. 
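As a concrete illustration of Equation 1 and the facet construction, here is a minimal numpy sketch. It is our own paraphrase, not the original stimulus-generation code (which was written in C); the function name and random-number handling are assumptions.

```python
import numpy as np

def make_rough_surface(k, N=20, w=20.0, n_xy=0.49, rng=None):
    """Generate one jittered, faceted surface patch (a sketch of Equation 1).

    k is the roughness level (1..8); the depth jitter amplitude is
    r = k**2 / 16 cm. Returns an (N*N, 3) vertex array and a list of
    triangle index triples.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = k ** 2 / 16.0
    delta = w / (N - 1)
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    U = rng.uniform(-1.0, 1.0, size=(3, N, N))
    X = -w / 2 + (j + n_xy * U[0]) * delta
    Y = -w / 2 + (i + n_xy * U[1]) * delta
    Z = r * U[2]
    verts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

    def idx(a, b):
        return a * N + b

    tris = []
    for a in range(N - 1):
        for b in range(N - 1):
            # Split each quad of base points along a randomly chosen diagonal.
            if rng.random() < 0.5:
                tris += [(idx(a, b), idx(a + 1, b), idx(a, b + 1)),
                         (idx(a + 1, b), idx(a + 1, b + 1), idx(a, b + 1))]
            else:
                tris += [(idx(a, b), idx(a + 1, b), idx(a + 1, b + 1)),
                         (idx(a, b), idx(a + 1, b + 1), idx(a, b + 1))]
    return verts, tris

verts, tris = make_rough_surface(k=5)
# Empirical SD of the Z coordinates vs. the nominal value sqrt(3) * r / 3:
print(verts[:, 2].std(), np.sqrt(3) * (5 ** 2 / 16) / 3)
```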
Light source and viewing angles
Each surface patch was rendered under a diffuse and a punctate illuminant using the RADIANCE software package (Larson & Shakespeare, 1996; Ward, 1994; http://radsite.lbl.gov/radiance/HOME.html). Surfaces were uniform gray and rendered with Lambertian reflectance. Seven viewpoints, represented in spherical coordinates as (ψ_v, ϕ_v, 70 cm), were tested in this study (Figure 4), where the possible values of ϕ_v included the surface normal (90°), three viewing angles to the left (30°, 50°, and 70°), and their reflections about the surface normal (110°, 130°, and 150°). The punctate illuminant position was fixed with respect to the test surface at position (180°, 60°, 80 cm). Each scene was rendered twice from slightly different viewpoints (±3 cm, corresponding to an interpupillary distance of 6 cm) for each tested viewing angle, corresponding to the positions of the observer's two eyes; the scenes were viewed binocularly in the experiment. The punctate–total ratio was 0.62. This is the ratio of the intensity of light absorbed by an infinitesimal Lambertian test patch facing the punctate light source to the intensity of all light absorbed by the patch (Boyaci, Maloney, & Hersh, 2003); it is a measure of the relative intensities of the punctate and diffuse sources. A stereo pair of a typical scene is shown in Figure 5, and a representative set of stimuli is shown in Figure 6. 
Figure 4
 
Viewpoint positions. Seven viewpoints were tested in this study (ϕ v = 30°, 50°, 70°, 90°, 110°, 130°, and 150°). The punctate illuminant was always fixed at (180°, 60°, 80 cm), whereas viewpoint was varied.
Figure 5
 
Stimulus stereo pairs. Three examples of stimuli with roughness level r = 1.56 cm viewed (a) frontoparallel, (b) from ϕ v = 50°, and (c) from ϕ v = 130°. For all viewpoints shown here, the punctate illuminant elevation was fixed at ϕ p = 60°.
Figure 6
 
Sample stimuli. For each roughness level, four surfaces were randomly generated. Here, one of the four is shown for each roughness level r viewed from each of seven viewpoints.
Apparatus
The left and right images were presented to the corresponding eyes of the observer on two 21-in. Sony Trinitron Multiscan GDM-F500 monitors placed to the observer's left and right (Figure 7). The screens of these monitors are nearly flat physically, with less than 1 mm of deviation across the surface of each monitor. Two small mirrors were placed directly in front of the observer's eyes; these mirrors reflected the images displayed on the left and right monitors into the corresponding eyes. Lookup tables were used to correct nonlinearities in the gun responses and to equalize the display values on the two monitors. The lookup tables were prepared from direct measurements of the luminance values on each monitor with a Photo Research PR-650 spectrometer. The maximum luminance achievable on either screen was 114 cd/m². The stereoscope was contained in a box measuring 124 cm on a side. The front face of the box was open; the observer sat there in a chin/head rest. The interior of the box was lined with black flocked paper (Edmund Scientific, Tonawanda, NY) to absorb stray light. Only the stimuli on the monitor screens were visible to the observer; the monitor casings and all other features of the room were hidden behind the nonreflective walls of the enclosing box. 
Figure 7
 
Experimental apparatus. Stimuli were displayed using a computer-controlled stereoscope. The left and right images of a stereo pair were displayed on the left and right monitors. Observers viewed them via small mirrors placed in front of their eyes. In the fused image, the surface patch appeared approximately 70 cm in front of the observer. The distance was also the optical distance to the screens of the two computer monitors, minimizing any mismatch between accommodation and other distance cues.
Additional light baffles were placed near the observer's face to prevent light from the screens reaching the observer's eyes directly. The optical distance from each of the observer's eyes to the corresponding computer screen was 70 cm. The stimuli were rendered to be 70 cm in front of the observer to minimize any conflict between binocular disparity and accommodation. The monocular fields of view were 55° × 55° each. The observer's eyes were approximately in line with the center of the scene being viewed. 
Software
The experimental software was written in the C programming language. We used the X Window System, Version 11R6 (Scheifler & Gettys, 1996), running under Red Hat Linux 6.1 for graphical display. The computer was a Dell 410 Workstation with a Matrox G450 dual-head graphics card and a special-purpose graphics driver from Xi Graphics that permitted a single computer to control both monitors. The rendered stereo image pair was represented by floating point RGB triplets for each pixel of the image. These triplets were the relative luminance values of each pixel. We translated the output relative luminance values to 24-bit graphics codes, correcting for nonlinearities in the monitors' responses by means of measured lookup tables for each monitor. 
Procedure
We used a two-interval forced-choice procedure in which a test surface presented at viewing angle ϕ_vtest and a match surface viewed from ϕ_vmatch were presented sequentially. The observer's task was to indicate which patch appeared rougher. The match surface was always presented at a viewing angle different from that of the test. The observer did not know which surface in any given trial was the match and which was the test; we use the distinction only in describing how observers' judgments determined the sequence of trials (described next) and in analyzing the data (described below). 
Figure 8 illustrates the sequence of displays in a trial. A surface was displayed for 1,000 ms in the first interval of each trial. This was followed by an interstimulus interval (500 ms) containing a central black fixation point and four flanking dots on a gray surface at the depth at which the surface patch would appear. The flanking dots were included to help observers maintain vergence. Another surface was presented in the second interval for 1,000 ms, and the observers then pressed either the right or the left mouse button to indicate the interval containing the patch that they perceived as rougher. The next trial began immediately after a response was made. The test stimulus was randomly assigned to the first or second interval, and the match was displayed in the other. 
Figure 8
 
Trial sequence. The observer performed a two-interval forced-choice task in which a test surface appeared in one interval and a match surface appeared in the other interval. When no stimulus was displayed, observers viewed a central fixation point flanked by four dots positioned on a surface displayed at the same disparity as the stimulus surface so that observers could maintain vergence at the appropriate distance. Observers indicated which patch appeared rougher. The next trial was presented after a response was made. Observers were permitted as much time as necessary to make a response.
For each test surface, that is, each pair of values ϕ_vtest and r_test, where the roughness level of the test surface (r_test) was chosen from the intermediate range of roughness levels (0.25 ≤ r_test ≤ 3.06; see Figure 6), two interleaved staircases (2-up/1-down and 1-up/2-down; sketched below) varied the roughness r_match of the match stimulus (with viewpoint position ϕ_vmatch and r_match chosen from the entire range of roughness levels in the stimulus set) to determine the point of subjective equality (PSE), the point at which the match surface was perceived as rougher than the test 50% of the time. We tested viewing angles that were spaced sufficiently far apart to be easily discriminated from each other. Left and right viewing angles were supplementary to each other (angles reflected about the surface normal) to account for any left/right viewing biases. Comparisons of viewing angles ϕ_v ≤ 90° and ϕ_v ≥ 90° were separated into two sets. In each set, there were four values of ϕ_v, resulting in six possible pairs of test and match viewpoints. For each such pair, the viewpoint closer to (or equal to) 90° was the test, and the other member of the pair was the match viewpoint. This resulted in a total of 72 test staircases (6 test roughness levels × 6 viewing-angle comparisons × 2 staircase types) for each of the two sets of viewing-angle comparisons. Staircases for one set of viewing-angle comparisons were continued across a total of four sessions. For one of the sets, a control condition was interleaved throughout the four sessions in which three punctate illuminant elevations (50°, 60°, and 70°) were compared with each other for a fixed frontoparallel viewpoint, adding 36 staircases (6 test roughness levels × 3 illuminant comparisons × 2 staircase types; for details, see Ho et al., 2006). Each observer completed a total of eight sessions (four sessions per viewing-angle comparison set) with 20 trials per staircase. 
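The staircase bookkeeping can be sketched as follows. This Python fragment is illustrative only; the exact step rules, starting levels, and interleaving scheme beyond what is stated above are assumptions, and the simulated "observer" is a toy stand-in.

```python
import random

class Staircase:
    """n-up/m-down staircase over discrete roughness levels (a sketch)."""

    def __init__(self, up, down, start_level, levels):
        self.up, self.down = up, down
        self.level, self.levels = start_level, levels
        self.n_up = self.n_down = 0

    def update(self, match_judged_rougher):
        if match_judged_rougher:
            self.n_down += 1
            self.n_up = 0
            if self.n_down == self.down:       # make the match less rough
                self.level = max(0, self.level - 1)
                self.n_down = 0
        else:
            self.n_up += 1
            self.n_down = 0
            if self.n_up == self.up:           # make the match rougher
                self.level = min(len(self.levels) - 1, self.level + 1)
                self.n_up = 0

levels = [k ** 2 / 16 for k in range(1, 9)]    # r = 0.0625 .. 4.0 cm
stairs = [Staircase(2, 1, 0, levels), Staircase(1, 2, 7, levels)]
for trial in range(40):
    s = random.choice(stairs)                  # interleave the two staircases
    r_match = levels[s.level]
    # Toy observer whose PSE is at r = 1.5625 cm, with response noise:
    s.update(r_match > 1.5625 + random.gauss(0, 0.4))
print([levels[s.level] for s in stairs])
```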
Observers
Four observers participated in this study. All observers, except for YXH (one of the authors), were unaware of the purpose of the experiment. All observers had normal or corrected-to-normal vision. 
Results
We estimated points of subjective equality (PSEs) by fitting the data with a Weibull psychometric function and estimating the roughness at which there was a 50% probability of choosing the match surface as rougher. Results for all observers are shown in Figure 9. Each column shows one observer's data for each viewpoint comparison. Each row contains a group of three comparisons, labeled as follows: Group I, ϕ_v = 90° compared with oblique viewing angles ϕ_v < 90°; Group II, oblique viewing angles ϕ_v < 90° compared with other oblique viewing angles ϕ_v < 90°; Group III, ϕ_v = 90° compared with oblique viewing angles ϕ_v > 90°; and Group IV, oblique viewing angles ϕ_v > 90° compared with other oblique viewing angles ϕ_v > 90°. The 95% confidence interval for each PSE was obtained by a bootstrap method (Efron & Tibshirani, 1993) whereby each observer's performance in the corresponding condition was simulated 1,000 times and the 5th and 95th percentiles were calculated. 
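The PSE estimation can be illustrated with a brief sketch. The data below are hypothetical placeholders, and the exact Weibull parameterization and bootstrap details used in the analysis are assumptions on our part.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pooled data for one condition: match roughness levels,
# trials per level, and counts of "match rougher" responses.
r = np.array([0.56, 1.00, 1.56, 2.25, 3.06])
n = np.array([12, 16, 20, 16, 12])
k = np.array([1, 4, 10, 14, 12])

def nll(params, k_obs):
    """Negative log-likelihood of a Weibull psychometric function."""
    alpha, beta = params
    p = np.clip(1 - np.exp(-(r / alpha) ** beta), 1e-6, 1 - 1e-6)
    return -(k_obs * np.log(p) + (n - k_obs) * np.log(1 - p)).sum()

fit = minimize(nll, x0=[1.5, 2.0], args=(k,), method="Nelder-Mead")
alpha, beta = fit.x
pse = alpha * np.log(2) ** (1 / beta)          # level where P("rougher") = 0.5
print(f"PSE = {pse:.2f} cm")

# Parametric bootstrap, taking the 5th and 95th percentiles as in the text.
rng = np.random.default_rng(0)
p_hat = 1 - np.exp(-(r / alpha) ** beta)
pses = []
for _ in range(1000):
    k_sim = rng.binomial(n, p_hat)
    a_b, b_b = minimize(nll, x0=[alpha, beta], args=(k_sim,),
                        method="Nelder-Mead").x
    pses.append(a_b * np.log(2) ** (1 / b_b))
print("CI:", np.percentile(pses, [5, 95]))
```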
Figure 9
 
Results. PSEs are plotted for each viewpoint test-match comparison for each of the four observers (columns) and groups of comparisons (rows). Groups I and III consist of all comparisons between the test surface viewed frontoparallel and all match surfaces viewed from ϕ v < 90° and ϕ v > 90°, respectively. Groups II and IV consist of all comparisons of test and match surfaces viewed from viewpoints ϕ v < 90° and ϕ v > 90°, respectively. The dashed line represents the results expected of a roughness-constant observer. The solid lines are linear fits using the roughness discrimination model. Of the 48 slopes derived from the model, 33 were significantly different from 1 (at the Bonferroni-corrected α level of .0125 per test). Error bars on the PSEs represent 95% confidence intervals estimated by a bootstrap method (Efron & Tibshirani, 1993).
If observers were roughness constant across changes in viewing angle, then their true PSEs would lie along the line of roughness constancy (i.e., the identity line), and the measured PSEs should show no patterned deviation from that line. This was not the case, however, for comparisons between different illuminant positions in the control experiment or for certain comparisons between viewpoints. 
We found that observers were generally roughness constant in comparisons between oblique viewing angles greater than 90°. However, roughness constancy failed in most other comparisons. In general, surfaces viewed obliquely with ϕ v > 90° were perceived to be rougher than the same surface viewed frontoparallel (ϕ v = 90°), and the opposite pattern was observed for surfaces with ϕ v ≤ 90°. In other words, surfaces generally appeared rougher with an increase in the amount of visible cast shadow. Results from our control experiment in which viewpoint was fixed and illuminant angle was varied were consistent with the latter observation, suggesting that observers in this study were susceptible to the same roughness judgment biases as observers in our previous study (see Figure S1 and Supplementary Tables S1 and S2 in the online supplement). 
A model of roughness discrimination
For many conditions, PSEs fell significantly above or below the line of roughness constancy. To determine whether there was a clear pattern in these failures of roughness constancy, we fit a model, based on what we call the roughness transfer function, to the data. In this model, for each viewpoint, perceived roughness was assumed to be proportional to physical roughness perturbed by Gaussian noise. Suppose that the observer compares two surfaces; one surface has roughness level $r_a$ and is viewed from viewpoint A (first interval), and the other surface has roughness level $r_b$ and is viewed from viewpoint B (second interval). We assume that the observer's roughness estimate is a transformation of actual roughness that depends on viewpoint,

$$\rho_{aA} = V_A(r_a), \qquad \rho_{bB} = V_B(r_b). \tag{2}$$

On each trial, these estimates are perturbed by normally distributed error with zero mean,

$$R_{aA} = \rho_{aA} + \varepsilon_{aA}, \qquad R_{bB} = \rho_{bB} + \varepsilon_{bB}. \tag{3}$$

We allow for the possibility that the variance of the error depends on the magnitude of perceived roughness, in a manner analogous to Weber's law. Because our choice of a roughness scale was arbitrary, we formulate a generalization of Weber's law. We assume that the standard deviation of the error is proportional to a power function of the perceived roughness level:

$$\varepsilon_{aA} \sim N\!\left(0,\ \sigma^2 \rho_{aA}^{2\gamma}\right). \tag{4}$$

Here, σ² is the variance when $\rho_{aA} = 1$, and γ yields the power transformation. If γ is 1, then Weber's law holds for the arbitrary roughness scale that we use. If γ is 0, then the error is invariant with roughness level. 
We next assume that the observer forms a decision variable Δ on each trial to decide whether the rougher patch appeared in the first or second interval,

$$\Delta = R_{bB} - R_{aA} = \rho_{bB} - \rho_{aA} + \varepsilon, \tag{5}$$

where ε is normal with a mean of 0 and a variance of $\sigma^2\left(V_B(r_b)^{2\gamma} + V_A(r_a)^{2\gamma}\right)$. The observer responds “second interval” if Δ > 0; otherwise, the observer responds “first interval.” 
We next assume that the roughness transformation functions are linear,

$$V_A(r) = c_A\,r, \qquad V_B(r) = c_B\,r. \tag{6}$$

We define the contour of indifference to be the set of pairs $(r_a, r_b)$ such that $V_B(r_b) = V_A(r_a)$. These pairs are predicted to appear equally rough to the observer under the corresponding viewing conditions. We refer to this contour as the transfer function connecting the two viewing conditions A and B,

$$r_b = \tau_{A,B}(r_a) = V_B^{-1}\!\left(V_A(r_a)\right) = \frac{c_A}{c_B}\,r_a = c_{A,B}\,r_a, \tag{7}$$

where $c_{A,B} = c_A/c_B$. Note that if $c_{A,B} = 1$, the observer's judgments of roughness are unaffected by a change of viewpoint. That is, the observer is roughness constant, at least for this pair of viewpoints. We cannot directly observe $V_A(r)$ for any viewpoint condition A or estimate the constant $c_A$ in the form of $V_A(r)$ we have assumed. We can, however, estimate the transfer function parameter $c_{A,B}$ from our data (see the appendix in Ho et al., 2006, for details). 
If roughness constancy holds, then $c_A = c_B$ for any two viewing angles A and B, and the value of $c_{A,B}$ should equal 1. For each of the four groups of viewpoint comparisons (Groups I–IV), we fit one roughness transfer parameter for each of the three tested viewpoint pairs in that group; together with σ and γ, this yields a total of 14 model parameters (4 groups × 3 transfer parameters + 2 noise-scaling parameters). We estimated these parameters by maximum likelihood methods. 
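For concreteness, the trial likelihood implied by Equations 2–7 can be written out directly. The Python sketch below is ours; it adopts the normalization $c_B = 1$ (only the ratio $c_{A,B} = c_A/c_B$ is identifiable from binary choices), which is an assumption about how the fitting could be parameterized.

```python
import numpy as np
from scipy.stats import norm

def p_second_rougher(r_a, r_b, c_ab, sigma, gamma):
    """P(Delta > 0) for one trial, per Equations 2-7, with c_B set to 1
    so that c_A = c_ab."""
    mu = r_b - c_ab * r_a                      # E[Delta] = rho_bB - rho_aA
    var = sigma ** 2 * (r_b ** (2 * gamma) + (c_ab * r_a) ** (2 * gamma))
    return norm.cdf(mu / np.sqrt(var))

def nll(params, trials):
    """Negative log-likelihood over (r_a, r_b, chose_second) trial records."""
    c_ab, sigma, gamma = params
    total = 0.0
    for r_a, r_b, chose_second in trials:
        p = np.clip(p_second_rougher(r_a, r_b, c_ab, sigma, gamma),
                    1e-9, 1 - 1e-9)
        total -= np.log(p if chose_second else 1 - p)
    return total

# A roughness-constant observer (c_ab = 1) is indifferent when r_a = r_b:
print(p_second_rougher(1.56, 1.56, c_ab=1.0, sigma=0.3, gamma=1.0))  # 0.5
```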
Figure 10 shows a plot of the fitted estimate ĉ of the roughness transfer parameter for each viewpoint comparison and observer. The average difference between the estimated values of ĉ and 1 was as follows: Group I, −0.267 ± 0.150; Group II, −0.238 ± 0.188; Group III, 0.211 ± 0.131; and Group IV, 0.022 ± 0.126. The significance of the deviation of each estimate from 1 was assessed by using a bootstrap method to obtain a confidence interval around the value of ĉ and by using a z test of whether the value 1 could have been drawn from the bootstrapped distribution. Across conditions and observers, slightly more than two thirds of the ĉ values were significantly different from 1 (Table 1). Almost all observers failed to achieve roughness constancy in conditions in which a frontoparallel surface was compared with an oblique surface (Groups I and III). Examples of these failures of roughness constancy are shown in Figure 11. Observers also failed to achieve roughness constancy in comparisons between surfaces viewed from angles smaller than 90° (Group II). In Group IV, we found no patterned failures of roughness constancy (Table 1). Values of γ̂ were very close to 1 for nearly all observers, indicating that noise increases proportionately with stimulus magnitude. 
Figure 10
 
Estimated slope parameter, ĉ. The value of ĉ for each viewpoint comparison is plotted for each observer. Comparisons are arranged in Groups I–IV (see Figure 9). The dashed line represents the results expected of a roughness-constant observer (ĉ = 1). Error bars indicate bootstrapped 95% confidence intervals on the slopes. Notice that for comparisons of viewpoints ϕ_v ≤ 90°, slopes tend to be greater than 1, whereas for comparisons of viewpoints ϕ_v ≥ 90°, slopes tend to be less than and/or closer to 1.
Figure 11
 
Examples of failures of roughness constancy. (a) A surface of roughness r = 2.25 cm viewed frontoparallel was perceived to be equally rough as a surface of roughness r = 3.06 cm viewed from ϕ v = 50°. (b) A surface of roughness r = 3.06 cm viewed frontoparallel was perceived to be equally rough as a surface of roughness r = 2.25 cm viewed from the direction reflected about the surface normal (ϕ v = 130°). The roughness values for these surfaces were based on the performance of a typical observer (L.F.) in this study.
Table 1
 
Estimated roughness discrimination model parameters. All 14 parameters of the roughness discrimination model were estimated for each observer. Transfer parameters are organized into four groups (I–IV; see text for details). Slope estimates (ĉ) that were significantly different from 1 in a z test at the Bonferroni-corrected α level of .0125 are shown in boldface. Notice that values of γ̂ are very close to 1 for nearly all observers.
            Group I                                 Group II                                Group III                                  Group IV
Observer    ĉ(90°,70°)  ĉ(90°,50°)  ĉ(90°,30°)      ĉ(70°,50°)  ĉ(50°,30°)  ĉ(70°,30°)      ĉ(90°,110°)  ĉ(90°,130°)  ĉ(90°,150°)     ĉ(110°,130°)  ĉ(130°,150°)  ĉ(110°,150°)     σ̂       γ̂
JF          1.243       1.221       1.173           1.133       1.078       1.028           0.762        0.716        0.729           0.886         1.067         0.947            0.279   0.873
LF          1.491       1.401       1.373           1.675       1.160       1.438           0.706        0.656        0.662           0.786         1.105         0.803            0.689   0.938
SS          1.078       1.049       1.077           1.055       1.275       1.139           0.889        0.926        1.107           0.974         1.189         1.110            0.269   0.955
YXH         1.309       1.373       1.422           1.369       1.189       1.318           0.783        0.706        0.825           0.892         1.035         0.946            0.368   0.698
Discussion
A cue combination model
Our results suggest that roughness constancy is not always maintained across varying viewpoints. It appears that the orientation of the observer relative to the illuminant in the scene also matters. Although the illuminant position was fixed with respect to the scene, this was not enough to allow observers to maintain roughness constancy when viewpoint was changed. In our previous study (Ho et al., 2006), we found that failures of roughness constancy with changes in illumination direction could be predicted by a model in which observers used illuminant-variant pseudocues in their estimation of perceived roughness. Was the improper use of these same pseudocues also responsible for the failures of roughness constancy with changes in viewpoint? 
We used the same cue combination model developed in our previous study (Ho et al., 2006) to model the current data. Let R d denote the visual estimate of roughness based on all unbiased (i.e., viewpoint-invariant) cues to physical roughness. We define r d = E[R d] and assume that r d = r: The viewpoint-invariant cues (e.g., binocular disparity) signal, on average, the physical roughness of the surface. If the observers used only these viewpoint-invariant cues, then they would exhibit no systematic deviation from roughness constancy with changes in viewpoint, unlike our observers. 
In Ho et al. (2006), we described four pseudocues whose values vary not only with roughness, r, but also with illumination direction: (1) r p, the proportion of the image that is not directly lit by the punctate illuminant (the proportion of the image in shadow); (2) r m, the mean luminance of nonshadowed regions (nonzero pixels); (3) r s, the standard deviation of the luminance of nonshadowed regions (nonzero pixels) of the image due to differential illumination by the punctate illuminant; and (4) r c, the texture contrast. Texture contrast (Pont & Koenderink, 2005) is a modified version of Michelson contrast. It is computed as the difference between the 95th and 5th percentiles of luminance divided by the median luminance and is intended to be a robust statistic for characterizing materials across lighting conditions. We denote these measures as r p(r, ϕ v), r m(r, ϕ v), r s(r, ϕ v), and r c(r, ϕ v) to emphasize this dependence on both roughness r and viewpoint ϕ v
To estimate r_p, r_m, and r_s, we needed to determine which pixels in each image were not directly illuminated by the punctate source. To do this, we employed a computational trick. We rerendered our scenes with the diffuse lighting term set to 0, surface albedo set to 1, and no interreflections among facets. We refer to these rerendered images as punctate-only images. Pixels with a value of 0 in a punctate-only image corresponded to surfaces that were not directly illuminated by the punctate source (i.e., in shadow). The proportion of the image in shadow (r_p) and the other terms based on nonshadowed regions (r_m and r_s) were easily computed once we knew which regions in the image were not directly illuminated by the punctate source. We determined this set of zero pixels using the left-eye images only. To distinguish our numerical estimates from the true underlying values, we write, for example, r̄_p for the former and r_p for the latter, and so forth. 
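Given the punctate-only renderings, the pseudocue computations are straightforward. The following numpy sketch is our paraphrase; in particular, whether texture contrast is computed over the full image or only a central sub-region is an assumption here.

```python
import numpy as np

def pseudocues(image, punctate_only):
    """Compute the four pseudocues from a rendered luminance image (a sketch).

    `image` is the full rendering; `punctate_only` is the same scene
    rerendered with no diffuse light, unit albedo, and no interreflections,
    so its zero pixels mark regions not directly lit by the punctate source.
    """
    shadow = punctate_only == 0
    lit = image[~shadow]
    r_p = shadow.mean()                   # proportion of the image in shadow
    r_m = lit.mean()                      # mean luminance, nonshadowed pixels
    r_s = lit.std()                       # SD of luminance, nonshadowed pixels
    p95, p5 = np.percentile(image, [95, 5])
    r_c = (p95 - p5) / np.median(image)   # texture contrast (Pont & Koenderink)
    return r_p, r_m, r_s, r_c

# Toy usage with synthetic images standing in for RADIANCE renderings:
rng = np.random.default_rng(0)
img = rng.random((128, 128))
punctate_img = img * (rng.random((128, 128)) > 0.2)
print(pseudocues(img, punctate_img))
```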
As it turns out, these measures change systematically with viewpoint as well as with illuminant position. Figure 12 shows the values of the four pseudocues as a function of viewpoint for three different values of roughness. The data plotted are the means of the measures across 10 stimuli generated for each roughness level as seen from viewing angles in the range of 10° to 170°, with error bars indicating ±2 SD. Measures were obtained from trapezoidal regions located centrally on the 2D image projection of the surface from each viewpoint. For any fixed viewpoint, the pseudocue r p increases monotonically whereas r m decreases monotonically with increases in roughness. The other two pseudocues are nonmonotonic functions of roughness. All of these pseudocues are markedly affected by changes in viewpoint (monotonically for r p and r m but nonmonotonically for the other two pseudocues). The pseudocues evidently confound viewpoint and surface roughness. 
Figure 12
 
Image statistics. Ten random sets of surfaces were generated for each of three levels of roughness (r = 0.25, 1.56, and 4.00 cm) and were rendered from each of 13 viewpoints ϕ v = {10°, 20°, 30°, …, 160°, 170°}. Average measurements are plotted for the four pseudocues in the cue combination model. (a) Proportion of shadows, r p(r, ϕ v), (b) mean luminance of nonzero image pixels, r m(r, ϕ v), (c) standard deviation of nonzero image pixels, r s(r, ϕ v), and (d) texture contrast, r c(r, ϕ v).
We assume that the visual system has error-perturbed estimates of each of these pseudocues to roughness, R p, R m, R s, and R c, corresponding to the four physical measures just defined. Each is an unbiased estimate of the corresponding physical measure: E[R p] = r p(r, ϕ v), and so forth. 
We wish to test whether the visual system errs in using these pseudocues across changes in viewpoint, resulting in the failures of roughness constancy observed in the data. We assume that cues and pseudocues are scaled and combined by a weighted average (Landy, Maloney, Johnston, & Young, 1995). In viewing a surface of roughness r from a given viewpoint ϕ_v, the observer forms the roughness estimate

$$R = w_d R_d + w_p R_p + w_m R_m + w_s R_s + w_c R_c, \tag{8}$$

where the values $w_i$ combine the scale factors and weights and thus need not sum to 1 as weights usually do. In this study, observers compared this roughness estimate with that for a second surface patch with roughness r′ viewed from a different viewpoint, ϕ_v′,

$$R' = w_d R_d' + w_p R_p' + w_m R_m' + w_s R_s' + w_c R_c', \tag{9}$$

to decide which one was rougher. Consider the situation in which two surfaces were perceived to be equally rough; that is, R = R′. Subtracting Equation 8 from Equation 9 yields

$$0 = w_d \Delta R_d + w_p \Delta R_p + w_m \Delta R_m + w_s \Delta R_s + w_c \Delta R_c, \tag{10}$$

where $\Delta R_i = R_i' - R_i$. We assume that $w_d$ was nonzero; therefore, we can rearrange Equation 10 as

$$\Delta R_d = a_p \Delta R_p + a_m \Delta R_m + a_s \Delta R_s + a_c \Delta R_c, \tag{11}$$

where $a_p = -w_p/w_d$, and so forth. We define $\Delta r_p = E[\Delta R_p] = r_p - r_p'$, $\Delta r_m = E[\Delta R_m] = r_m - r_m'$, and so forth. Equation 11 effectively expresses the effect of the pseudocues in terms of the viewpoint-invariant cues. If we take expected values of both sides of Equation 11, we have

$$\Delta r_d = a_p \Delta r_p + a_m \Delta r_m + a_s \Delta r_s + a_c \Delta r_c. \tag{12}$$

This equation expresses the relationships between the systematic errors in viewpoint-invariant and viewpoint-dependent cues. If an observer were roughness constant across viewpoint, $\Delta r_d$ should be 0, as $r_d = r_d' = r$, the physical roughness of the surface. Otherwise, $\Delta r_d$ is the observer's systematic error in matching surfaces in roughness across viewpoints: The systematic deviations from the identity line for each condition and observer in Figure 9 are estimates of $\Delta r_d$ for that observer and condition. Consequently, we can treat Equation 12 as a regression equation,

$$\Delta \bar{r}_d = a_p \Delta \bar{r}_p + a_m \Delta \bar{r}_m + a_s \Delta \bar{r}_s + a_c \Delta \bar{r}_c + \varepsilon, \tag{13}$$

where $\Delta \bar{r}_d$ is the systematic error that the observer makes in a condition, taken from Figure 9, and the remaining values are the estimates of the pseudocue values that we computed for each stimulus condition, plotted in Figure 12. For each observer, we regressed the failures across all conditions on the pseudocue values for those conditions to determine which pseudocues could account for the observer's systematic failures of roughness constancy across all comparisons. 

In the linear regression that we computed, we also included a constant term $a_0$, yielding the regression equation

$$\Delta \bar{r}_d = a_0 + a_p \Delta \bar{r}_p + a_m \Delta \bar{r}_m + a_s \Delta \bar{r}_s + a_c \Delta \bar{r}_c + \varepsilon. \tag{14}$$

We expect this term to be 0 because we assume that the observer will judge two perfectly smooth surfaces (r = r′ = 0) to be equally rough even if they are viewed from different orientations. By including the term in the regression, we can test whether it is 0 as expected. In using the variation in the cues from trial to trial to estimate the weight assigned to each cue, we were, in effect, applying the technique of Ahumada and Lovell (1971) that is the basis of classification image methods. 
Although the values of â_0 were significantly different from 0 for some observers, they were small and not patterned across observers. We recomputed the regression, forcing â_0 to be 0, and we report these values. 
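In practice, fitting Equation 14 (and the version forced through the origin) is an ordinary least-squares problem. The sketch below uses synthetic placeholder data, not the measured pseudocue differences, and the variance-accounted-for convention shown is one common choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cond = 24                                # placeholder number of conditions
X = rng.normal(size=(n_cond, 4))           # placeholder pseudocue differences
y = X @ np.array([3.0, 0.07, 0.10, 1.5]) + rng.normal(0, 0.3, n_cond)

# With an intercept, to test whether a_0 differs from 0:
X1 = np.column_stack([np.ones(n_cond), X])
a_with_intercept, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Forced through the origin (a_0 = 0), as in the reported fits:
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
vaf = 1 - ((y - X @ a_hat) ** 2).sum() / (y ** 2).sum()
print(a_with_intercept[0], a_hat, round(100 * vaf))
```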
We found that the combination of the four predictors of roughness judgments explained an average of 53% of the variance in the viewpoint variation data, ranging from 41% to 74%, depending on the observer (Table 2). Only coefficients a p and a m were found to be significantly different from 0. However, this does not necessarily indicate that the other two cues were not used in each pairwise roughness discrimination; it only shows that they had so little effect that we could not detect it. 
Table 2
 
Percentage of variance accounted for (VAF) by the full linear cue combination model for each observer, with the corresponding estimated cue coefficients. The four pseudocues are the proportion of shadows (â_p), the mean luminance of nonshadowed regions (â_m), the standard deviation of nonshadowed regions (â_s), and texture contrast (â_c). Values in boldface were significantly different from 0 at the Bonferroni-corrected α level of .0125.
Observer   VAF (%)   â_p      â_m     â_s     â_c
JF         50        2.903    0.073   0.004   1.759
LF         49        6.716    0.108   0.179   0.143
SS         41        −0.923   0.015   0.060   1.008
YXH        74        2.757    0.067   0.082   2.125
Figure 13 shows the estimated PSE differences Δr̄_d (our measures of the failure of roughness constancy) plotted against the values predicted by the regression, Δr̂_d = â_pΔr̄_p + â_mΔr̄_m + â_sΔr̄_s + â_cΔr̄_c, using the regression estimates of the coefficients for the four pseudocues for each observer. Most values fall close to the identity line, although some comparisons fall closer than others. In particular, for observer SS, all PSE differences fall close to zero, and not surprisingly, a relatively lower proportion of the variance in the data of SS is explained by the pseudocues, suggesting that SS may have used more reliable cues to roughness than the other observers did. 
Figure 13
 
Predicted values of the cue combination model. Estimated PSE differences Δr̄_d are plotted as a function of the predicted differences Δr̂_d = â_pΔr̄_p + â_mΔr̄_m + â_sΔr̄_s + â_cΔr̄_c as determined by the linear cue combination model for each observer. Positive values indicate PSEs that fell below the line of roughness constancy. Results for comparisons between viewing angles closer to the illuminant (Groups I and II) are shown in red, and results for those farther from the illuminant (Groups III and IV) are shown in blue. PSEs are generally clustered near zero, suggesting that, for many comparisons, observers were fairly roughness constant across varying viewpoint. The clustering of Groups I and II below zero and Groups III and IV above zero is consistent with the measured failures of roughness constancy. The dashed line represents the ideal case in which all predicted PSE differences from the cue combination model equal the estimated values.
The effects of shadow hiding
For ease of discussion, we have separated our data into groups of comparisons of viewing angles closer to the illuminant (Groups I and II) and further from the illuminant (Groups III and IV). Each individual comparison consisted of a pair of viewing angles, one closer in angle to frontoparallel and the other further from frontoparallel. If the observer's position with respect to the illuminant and surface did not matter, we should have seen no effect at all of changing viewpoint. If, instead, the determining factor was the observer's position with respect to frontoparallel (independent of illuminant position), we would have expected all roughness transfer parameters to be greater than 1 (indicating that surfaces viewed from positions near frontoparallel are perceived as rougher) or all less than 1 (indicating the opposite). For example, a surface viewed from 50° compared with that viewed from 70° should produce the same roughness judgments as a surface viewed from their supplementary angles 130° compared with 110°, respectively; one angle is just the reflection of the other across surface normal. This is not what we found. Rather, for Groups I and II, we found that almost all roughness transfer parameters were greater than 1, whereas in Groups III and IV, most were less than or at least closer to 1, suggesting that the viewing angle and illuminant direction interact in determining perceived roughness. Specifically, in some cases, viewing a surface more obliquely results in a surface appearing rougher, whereas in other cases, viewing a surface more obliquely results in the surface appearing less rough. 
The shadow-hiding opposition effect provides a possible explanation for the asymmetry observed in our data. Studies in atmospheric optics show that rough surfaces, like those of the moon and of fir tree bark, exhibit a phenomenon known as the shadow-hiding opposition effect: when a surface is viewed from the same or nearly the same direction as an obliquely positioned sun, surface textural elements occlude their own shadows (Iaquinta & Lukianowicz, 2006). This results in the surface appearing brightest when most shadows are occluded. As the angle between the sun and the viewer increases (here, as viewpoint elevation increases), shadows become more and more visible. Viewpoint-induced changes consistent with shadow hiding are evident in our measurements of r_p and r_m with increasing viewing angle (Figures 12a and 12b). For example, two surfaces of the same roughness viewed from 30° and 150° contain very different proportions of visible shadows, although these two angles are identical with respect to the surface normal; specifically, the surface viewed from the larger angle of elevation contains more visible shadows. Correspondingly, when surfaces viewed from these two oblique angles were each compared with a surface of equal physical roughness viewed frontoparallel, observers perceived the surface containing the greater proportion of visible shadows to be rougher: the surface viewed from 150° appeared rougher than the frontoparallel surface, whereas the surface viewed from 30° appeared less rough. 
All of our data could be explained by shadow hiding. In Groups I, II, and III, the proportion of shadows r p increases with viewpoint angle and would lead to the failures of roughness constancy we observed. For the viewpoint angles used in Group IV, the same measure becomes noisier and less dependent on viewpoint (Figure 12a) and, hence, acts as a nearly viewpoint-invariant cue to roughness, consistent with the nearly roughness-constant results for Group IV. Although it was not the intent of this study to examine in detail the effects of shadow hiding on surface roughness judgments, our results are generally consistent with the idea that roughness judgments for obliquely illuminated surfaces depend on the visibility of shadows in the texture. 
Other possible pseudocues to roughness?
As we noted in Ho et al. (2006), we do not claim that the four cues we advance are precisely the cues that the visual system uses. Any invertible linear transformation of the four cues used here results in four alternative cues that would explain our results equally well, and there may be invertible nonlinear transformations that would lead to cues that better account for our data. Additionally, we do not claim that these cues, or transformations thereof, are the only cues the visual system may use to make roughness judgments.
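The first claim can be verified directly: a least-squares cue-combination fit is unchanged by any invertible linear remixing of its cues, because the remixed cues span the same column space. Below is a minimal numpy check using random stand-in data rather than our measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: 48 comparisons x 4 pseudocue differences, plus fake PSE
# differences. Illustrative only -- not the experimental values.
C = rng.normal(size=(48, 4))
y = rng.normal(size=48)

def vaf(X, y):
    """Percentage of variance accounted for by a least-squares fit of y on X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 100.0 * (1.0 - resid.var() / y.var())

A = rng.normal(size=(4, 4))    # a random 4 x 4 matrix is almost surely invertible
print(vaf(C, y))               # fit on the original cues
print(vaf(C @ A, y))           # identical fit on the linearly remixed cues
```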
The acquisition of pseudocues to roughness
We have suggested (Ho et al., 2006) that the visual system may learn cues to roughness by means of associative learning. That is, a new perceptual cue may be trained to elicit a reliable perceptual response if it is repeatedly paired with another cue that already elicits that response. The effect of associative learning on perceptual appearance has been examined in a variety of studies (e.g., Adams, Graf, & Ernst, 2004; Haijiang, Saunders, Stone, & Backus, 2006; Jacobs & Fine, 1999; Sinha & Poggio, 1996; Wallach & Austin, 1954). One particular type of associative learning, referred to as cue recruitment, was recently described by Haijiang et al. (2006), who trained observers to disambiguate a perceptually bistable display (a rotating Necker cube) by pairing depth cues (e.g., occlusion) with an otherwise novel cue. When the depth cues were removed from the scene after training, the novel cue, presented alone with the Necker cube stimulus, caused trainees to perceive rotation in the previously signaled direction.
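As a toy formalization (ours, for illustration; not a model proposed by Haijiang et al., 2006), cue recruitment can be written as a delta-rule association: on each training trial, the weight on the new cue is nudged toward whatever percept the trusted cue produces, after which the new cue alone reproduces that percept.

```python
import numpy as np

rng = np.random.default_rng(2)

# Delta-rule sketch of cue recruitment (hypothetical formalization).
# During training, a new cue always co-occurs with a trusted cue (e.g.,
# occlusion) that fully determines the perceived rotation direction (+1/-1).
w = 0.0                               # association strength of the new cue
eta = 0.05                            # assumed learning rate
for _ in range(200):
    s = rng.choice([-1.0, 1.0])       # direction signaled on this trial
    percept = s                       # the trusted cue disambiguates the cube
    w += eta * (percept - w * s) * s  # move the new cue's prediction toward it
# After training, the new cue presented alone biases the bistable percept:
print("new cue alone ->", w)          # near 1: the previously paired direction
```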
Thus, failures of roughness constancy may result from having associated pseudocues, such as the proportion of shadows, with the perception of roughness. Indeed, for a surface illuminated obliquely, the rougher the surface, the more shadows it contains (Figure 12a). The pseudocues are valid cues to roughness so long as viewpoint and direction of illumination are held constant. The stimulus design we chose provides a particularly compelling test case: roughness judgments could have been made solely on the basis of the depth information clearly available for these mesoscale textures. However, observers failed to use this information and instead relied on image cues that confound physical roughness with changes in viewpoint, that is, pseudocues. The systematic errors in roughness judgments observed here, for surfaces that vary only in depth information, show that pseudocues play a critical role in roughness judgments.
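For concreteness, the four image statistics are simple to compute from a rendered grayscale image. The sketch below follows the measures plotted in Figure 12, where shadow pixels are zero-luminance pixels; the texture-contrast measure is an assumed stand-in (a coefficient of variation), since the r_c we used follows Pont and Koenderink (2005) and its definition is not reproduced here.

```python
import numpy as np

def pseudocues(img, eps=0.0):
    """Image-based roughness pseudocues from a grayscale rendering.

    Shadow pixels are taken to be pixels with (near-)zero luminance, as in
    Figure 12. The texture-contrast measure is an assumed proxy, not the
    Pont & Koenderink (2005) measure used in the paper.
    """
    img = np.asarray(img, dtype=float)
    lit = img > eps                      # non-shadowed pixels
    r_p = 1.0 - lit.mean()               # proportion of shadow pixels
    r_m = img[lit].mean()                # mean luminance of the lit region
    r_s = img[lit].std()                 # s.d. of luminance of the lit region
    r_c = img.std() / img.mean()         # assumed texture-contrast proxy
    return r_p, r_m, r_s, r_c

# Toy "image": a luminance gradient with a dark (shadow) band on the left.
toy = np.linspace(0.0, 1.0, 100).reshape(10, 10)
toy[:, :3] = 0.0
print(pseudocues(toy))
```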
Judgments of roughness may be based on haptic and visual input, and it is easy to imagine that the ideal conditions for calibrating visual and haptic cues to roughness would be when a number of surfaces, varying in roughness, are presented simultaneously within arm's reach. If visual cues to roughness are “recruited” through active exploration of the surfaces under the same lighting and from the same viewpoint, then it is not surprising that we recruit cues that will not be all that useful in purely visual comparisons across changes in viewpoint or lighting. Indeed, the calibration of visual cues with cues from other modalities when interacting with such surfaces under a variety of illuminant and viewing conditions may be critical to maintaining visual constancy across changes in our visual environment. 
Supplementary Materials
We ran a control experiment with the same experimental design as Ho et al. (2006): we varied the angle between the punctate light source and the surface, but the viewpoint was always frontoparallel. The results shown in the following tables and figure demonstrate that observers in this study exhibited the same biases in roughness judgments as observers in our previous study when only illuminant position was varied. The effect of varying illuminant position on perceived roughness is strong, and the effects found in the current study are therefore most likely not unique to our sample of observers.
Table S1.

Observer  ĉ (illuminant comparisons, e.g., ĉ70,60)  σ̂      γ̂
JF        …                                         0.180   0.981
LF        0.871                                     0.660   0.932
SS        …                                         0.464   0.777
YXH       …                                         0.299   0.704

Table S1. All five parameters estimated from the roughness discrimination model for each observer (control experiment). Subscripts on the slope estimates (ĉ) indicate the pair of illuminant elevations compared. Values of ĉ that were found to be significantly different from 1 in a z test at the Bonferroni-corrected α level of .0125 are boldfaced. Notice that values of γ̂ are close to 1 for all observers.
Table S2.

Observer  VAF (%)  â_p     â_m    â_s     â_c
JF        71       6.904   0.011  −0.341  14.231
LF        62       −18.75  0.053  −0.982  …
SS        56       −2.272  0.028  −0.088  7.082
YXH       89       8.204   0.024  …       0.221

Table S2. Percentage of variance accounted for (VAF) by the full linear cue combination model and the corresponding estimated cue coefficients (control experiment). Boldface values were found to be significantly different from zero at the Bonferroni-corrected α level of .0125.
Supplementary Figure 1

Figure S1. Results from the control experiment. PSEs are plotted for each illuminant test-match comparison for each of the four observers. The dashed line represents the results expected of a roughness-constant observer. The solid lines are linear fits using the roughness discrimination model for each comparison between a test and a match illuminant position. Notice that almost all PSEs fall below the line of roughness constancy. All slope estimates derived from the roughness discrimination model, except one (observer LF, in a single comparison), were significantly different from 1 (at the Bonferroni-corrected α level of .0125 per test). Error bars on the PSEs represent 95% confidence intervals estimated by a bootstrap method (Efron & Tibshirani, 1993).
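The bootstrapped confidence intervals reported above can be produced along the following lines. This is a minimal sketch: the straight-line PSE fit, the parametric resampling of binomial response counts, and the toy data are illustrative assumptions, not our discrimination model or data.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_pse(levels, n_rougher, n_trials):
    """Crude PSE estimate: fit a line to p('match rougher') vs. match level
    and solve for p = 0.5. A stand-in for the paper's discrimination model."""
    p = n_rougher / n_trials
    b, a = np.polyfit(levels, p, 1)      # p ~ a + b * level
    return (0.5 - a) / b

def bootstrap_pse_ci(levels, n_rougher, n_trials, n_boot=2000):
    """Percentile bootstrap: resample binomial responses per level, refit."""
    p_hat = n_rougher / n_trials
    pses = [fit_pse(levels, rng.binomial(n_trials, p_hat), n_trials)
            for _ in range(n_boot)]
    return np.percentile(pses, [2.5, 97.5])

# Toy data: match roughness levels (cm) and 'match looked rougher' counts.
levels = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
n_trials = 40
n_rougher = np.array([4, 12, 21, 30, 37])
print("PSE:", fit_pse(levels, n_rougher, n_trials))
print("95% CI:", bootstrap_pse_ci(levels, n_rougher, n_trials))
```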
Acknowledgments
This research was supported by National Institutes of Health Grants EY16165 and EY08266. We thank Hüseyin Boyaci and Katja Doerschner for their help in developing the software used in the experiments described here, which is based on code they wrote, and for many helpful comments and suggestions. 
Commercial relationships: none. 
Corresponding author: Yun-Xian Ho. 
Address: Department of Psychology, New York University, 6 Washington Place, 8th Floor, New York, NY 10003, USA. 
References
Adams, W. J., Graf, E. W., & Ernst, M. O. (2004). Experience can change the 'light-from-above' prior. Nature Neuroscience, 7, 1057–1058.
Ahumada, A. J., Jr., & Lovell, J. (1971). Stimulus features in signal detection. Journal of the Acoustical Society of America, 49, 1751–1756.
Belhumeur, P. N., Kriegman, D. J., & Yuille, A. L. (1999). The bas-relief ambiguity. International Journal of Computer Vision, 35, 33–44.
Boyaci, H., Maloney, L. T., & Hersh, S. (2003). The effect of perceived surface orientation on perceived surface albedo in binocularly viewed scenes. Journal of Vision, 3(8), 541–553, http://journalofvision.org/3/8/2/, doi:10.1167/3.8.2.
Chantler, M. J. (1995). Why illuminant direction is fundamental to texture analysis. IEE Proceedings: Vision, Image and Signal Processing, 142, 199–206.
Chantler, M. J., & McGunnigle, G. (1995). Compensation of illumination tilt variation for texture classification. The 5th International Conference on Image Processing and its Applications, 767–771.
Cula, O. G., & Dana, K. J. (2001). Recognition methods for 3D textured surfaces. Proceedings of SPIE, Conference on Human Vision and Electronic Imaging VI, 4299, 209–220.
Dana, K. J., van Ginneken, B., Nayar, S. K., & Koenderink, J. J. (1999). Reflectance and texture of real world surfaces. ACM Transactions on Graphics, 18, 1–34.
Drbohlav, O., & Chantler, M. J. (2005). Illumination-invariant texture classification using single training images. Texture 2005: Proceedings of the 4th International Workshop on Texture Analysis and Synthesis, 31–36.
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. London, UK: Chapman & Hall.
Fleming, R. W., Dror, R. O., & Adelson, E. H. (2003). Real-world illumination and the perception of surface reflectance properties. Journal of Vision, 3(5), 347–368, http://journalofvision.org/3/5/3/, doi:10.1167/3.5.3.
Geusebroek, J., Burghouts, G. J., & Smeulders, A. W. M. (2005). The Amsterdam Library of Object Images. International Journal of Computer Vision, 61, 103–112.
Haijiang, Q., Saunders, J. A., Stone, R. W., & Backus, B. T. (2006). Demonstration of cue recruitment: Change in visual appearance by means of Pavlovian conditioning. Proceedings of the National Academy of Sciences of the United States of America, 103, 483–488.
Ho, Y., Landy, M. S., & Maloney, L. T. (2006). How direction of illumination affects visually perceived surface roughness. Journal of Vision, 6(5), 634–648, http://journalofvision.org/6/5/8/, doi:10.1167/6.5.8.
Iaquinta, J., & Lukianowicz, C. (2006). Inferring texture from shadows. Measurement Science Review, 6, 57–61.
Jacobs, R. A., & Fine, I. (1999). Experience-dependent integration of texture and motion cues to depth. Vision Research, 39, 4062–4075.
Knill, D. C. (1990). Estimating illuminant direction and degree of surface relief. Journal of the Optical Society of America A, Optics and Image Science, 7, 759–775.
Landy, M. S., Maloney, L. T., Johnston, E. B., & Young, M. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389–412.
Larson, G. W., & Shakespeare, R. (1996). Rendering with Radiance: The art and science of lighting and visualization. San Francisco: Morgan Kaufmann.
Leung, T., & Malik, J. (2001). Representing and recognizing the visual appearance of materials using three-dimensional textons. International Journal of Computer Vision, 43, 29–44.
Nishida, S., & Shinya, M. (1998). Use of image-based information in judgments of surface-reflectance properties. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 15, 2951–2965.
Obein, G., Knoblauch, K., & Viénot, F. (2004). Difference scaling of gloss: Nonlinearity, binocularity and constancy. Journal of Vision, 4(9), 711–720, http://journalofvision.org/4/9/4/, doi:10.1167/4.9.4.
Pellacini, F., Ferwerda, J. A., & Greenberg, D. P. (2000). Toward a psychophysically-based light reflection model for image synthesis. Proceedings of SIGGRAPH '00, 55–64.
Penirschke, A., Chantler, M. J., & Petrou, M. (2002). Texture 2002: The 2nd International Workshop on Texture Analysis and Synthesis.
Pont, S. C., & Koenderink, J. J. (2005). Bidirectional texture contrast function. International Journal of Computer Vision, 62, 17–34.
Scheifler, R. W., & Gettys, J. (1996). X Window System: Core library and standards. Boston: Digital Press.
Schmid, C. (2001). Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR'01), 2, 39.
Sinha, P., & Poggio, T. (1996). Role of learning in three-dimensional form perception. Nature, 384, 460–463.
Varma, M., & Zisserman, A. (2002). Classifying images of materials: Achieving viewpoint and illumination independence. In Lecture Notes in Computer Science 2352: ECCV 2002: Proceedings, Part III (pp. 255–271). Berlin/Heidelberg: Springer-Verlag.
Wallach, H., & Austin, P. (1954). Recognition and the localization of visual traces. American Journal of Psychology, 67, 338–340.
Ward, G. J. (1994). The RADIANCE lighting simulation and rendering system. Computer Graphics, 28, 459–472.
Figure 2

Coordinate systems. A Cartesian coordinate system was used to define surface patches. The origin of the coordinate system was the center of the surface patch. The z-axis was normal to the stimulus plane. The x-axis was horizontal, and the y-axis was vertical. For the positions of the observer and light source, spherical coordinates (ψ, ϕ, d) were used (where ψ was azimuth, ϕ was elevation, and d was distance in centimeters). Because observers and light sources always lie on the horizontal midline, ψ was fixed at 180°, and ϕ ranged from 0° (the negative x-axis) to 180° (the positive x-axis). The observer was at location (ψ_v, ϕ_v, 70), and the light source was at location (ψ_p, ϕ_p, 80).
Figure 3

Rough-surface model. (a) A 20 × 20 cm regular rectangular grid of 20 × 20 base points was constructed. (b) The base points of the grid were jittered randomly in the xy-plane. (c) Uniform random deviations were added to the base points in the z direction. (d) The surface that was composed of 3D jittered base points was then divided into triangular facets by splitting each 2 × 2 set of base points along a randomly chosen diagonal.
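The construction in Figure 3 is straightforward to reproduce. Below is a minimal Python sketch; the jitter amplitude and the mapping from roughness level r to the range of the uniform height deviations are assumptions for illustration, since the caption does not give those values.

```python
import numpy as np

rng = np.random.default_rng(4)

def rough_surface(n=20, size=20.0, r=1.56, xy_jitter=0.3):
    """Faceted rough-surface model after Figure 3. `xy_jitter` (cm) and the
    use of uniform(-r, r) height deviations are illustrative assumptions."""
    # (a) Regular n x n grid of base points over a size x size cm patch.
    g = np.linspace(0.0, size, n)
    X, Y = np.meshgrid(g, g)
    # (b) Jitter the base points randomly in the xy-plane.
    X = X + rng.uniform(-xy_jitter, xy_jitter, X.shape)
    Y = Y + rng.uniform(-xy_jitter, xy_jitter, Y.shape)
    # (c) Add uniform random deviations in z; the spread of the heights is
    #     what the roughness level controls.
    Z = rng.uniform(-r, r, X.shape)
    # (d) Split each 2 x 2 set of base points along a random diagonal.
    tris = []
    for i in range(n - 1):
        for j in range(n - 1):
            a, b = i * n + j, i * n + j + 1
            c, d = (i + 1) * n + j, (i + 1) * n + j + 1
            tris += [(a, b, c), (b, d, c)] if rng.random() < 0.5 else \
                    [(a, b, d), (a, d, c)]
    verts = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
    return verts, np.array(tris)

verts, tris = rough_surface()
print(verts.shape, tris.shape)   # (400, 3) vertices, (722, 3) facets
```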
Figure 4

Viewpoint positions. Seven viewpoints were tested in this study (ϕ_v = 30°, 50°, 70°, 90°, 110°, 130°, and 150°). The punctate illuminant was always fixed at (180°, 60°, 80 cm), whereas viewpoint was varied.
Figure 5

Stimulus stereo pairs. Three examples of stimuli with roughness level r = 1.56 cm viewed (a) frontoparallel, (b) from ϕ_v = 50°, and (c) from ϕ_v = 130°. For all viewpoints shown here, the punctate illuminant elevation was fixed at ϕ_p = 60°.
Figure 6

Sample stimuli. For each roughness level, four surfaces were randomly generated. Here, one of the four is shown for each roughness level r viewed from each of seven viewpoints.
Figure 7

Experimental apparatus. Stimuli were displayed using a computer-controlled stereoscope. The left and right images of a stereo pair were displayed on the left and right monitors. Observers viewed them via small mirrors placed in front of their eyes. In the fused image, the surface patch appeared approximately 70 cm in front of the observer. The distance was also the optical distance to the screens of the two computer monitors, minimizing any mismatch between accommodation and other distance cues.
Figure 8

Trial sequence. The observer performed a two-interval forced-choice task in which a test surface appeared in one interval and a match surface appeared in the other interval. When no stimulus was displayed, observers viewed a central fixation point flanked by four dots positioned on a surface displayed at the same disparity as the stimulus surface so that observers could maintain vergence at the appropriate distance. Observers indicated which patch appeared rougher. The next trial was presented after a response was made. Observers were permitted as much time as necessary to make a response.
Figure 9

Results. PSEs are plotted for each viewpoint test-match comparison for each of the four observers (columns) and groups of comparisons (rows). Groups I and III consist of all comparisons between the test surface viewed frontoparallel and all match surfaces viewed from ϕ_v < 90° and ϕ_v > 90°, respectively. Groups II and IV consist of all comparisons of test and match surfaces viewed from viewpoints ϕ_v < 90° and ϕ_v > 90°, respectively. The dashed line represents the results expected of a roughness-constant observer. The solid lines are linear fits using the roughness discrimination model. Of the 48 slopes derived from the model, 33 were significantly different from 1 (at the Bonferroni-corrected α level of .0125 per test). Error bars on the PSEs represent 95% confidence intervals estimated by a bootstrap method (Efron & Tibshirani, 1993).
Figure 10

Estimated slope parameter, ĉ. The value of ĉ for each viewpoint comparison is plotted for each observer. Comparisons are arranged in Groups I–IV (see Figure 9). The dashed line represents the results expected of a roughness-constant observer, that is, ĉ = 1. Error bars indicate bootstrapped 95% confidence intervals on the slopes. Notice that for comparisons of viewpoints ϕ_v ≤ 90°, slopes tend to be greater than 1, whereas for comparisons of viewpoints ϕ_v ≥ 90°, slopes tend to be less than and/or closer to 1.
Figure 11

Examples of failures of roughness constancy. (a) A surface of roughness r = 2.25 cm viewed frontoparallel was perceived to be equally rough as a surface of roughness r = 3.06 cm viewed from ϕ_v = 50°. (b) A surface of roughness r = 3.06 cm viewed frontoparallel was perceived to be equally rough as a surface of roughness r = 2.25 cm viewed from the direction reflected about the surface normal (ϕ_v = 130°). The roughness values for these surfaces were based on the performance of a typical observer (LF) in this study.
Figure 12

Image statistics. Ten random sets of surfaces were generated for each of three levels of roughness (r = 0.25, 1.56, and 4.00 cm) and were rendered from each of 13 viewpoints ϕ_v = {10°, 20°, 30°, …, 160°, 170°}. Average measurements are plotted for the four pseudocues in the cue combination model: (a) proportion of shadows, r_p(r, ϕ_v); (b) mean luminance of nonzero image pixels, r_m(r, ϕ_v); (c) standard deviation of nonzero image pixels, r_s(r, ϕ_v); and (d) texture contrast, r_c(r, ϕ_v).
Figure 13

Predicted values of the cue combination model. Estimated PSE differences Δr̄_d are plotted as a function of the predicted differences Δr̂_d = â_p Δr̄_p + â_m Δr̄_m + â_s Δr̄_s + â_c Δr̄_c as determined by the linear cue combination model for each observer. Positive values indicate PSEs that fell below the line of roughness constancy. Results for comparisons between viewing angles closer to the illuminant (Groups I and II) are indicated in red, and results for those further from the illuminant (Groups III and IV) are indicated in blue. PSEs are generally clustered near zero, suggesting that, for many comparisons, observers were fairly roughness constant across varying viewpoint. The clustering of Groups I and II below zero and Groups III and IV above zero is consistent with the measured failures of roughness constancy. The dashed line represents the ideal case in which all predicted PSE differences from the cue combination model equal the estimated values.
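The fit summarized in this figure can be sketched in a few lines: regress the measured PSE differences on the four pseudocue differences and report the percentage of variance accounted for. The values below are synthetic stand-ins; the real Δr̄ values come from the experiment.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in data: per-comparison pseudocue differences and measured PSE
# differences (illustrative values, not the experimental ones).
dC = rng.normal(size=(48, 4))              # columns: dr_p, dr_m, dr_s, dr_c
a_true = np.array([3.0, 0.05, 0.1, 1.5])   # arbitrary illustrative coefficients
dr_d = dC @ a_true + rng.normal(0.0, 1.0, 48)

a_hat, *_ = np.linalg.lstsq(dC, dr_d, rcond=None)  # least-squares coefficients
pred = dC @ a_hat                                  # model-predicted differences
vaf = 100.0 * (1.0 - np.var(dr_d - pred) / np.var(dr_d))
print("a_hat:", np.round(a_hat, 3), f" VAF: {vaf:.0f}%")
```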
Table 1

Estimated roughness discrimination model parameters. All 14 parameters were estimated from the roughness discrimination model for each observer. Transfer parameters are organized into four groups (I–IV; see text for details). Slope estimates (ĉ) that were found to be significantly different from 1 in a z test at the Bonferroni-corrected α level of .0125 are in boldface. Notice that values of γ̂ are very close to 1 for nearly all observers.

          Group I                     Group II                    Group III                      Group IV
Observer  ĉ90,70  ĉ90,50  ĉ90,30     ĉ70,50  ĉ50,30  ĉ70,30     ĉ90,110  ĉ90,130  ĉ90,150     ĉ110,130  ĉ130,150  ĉ110,150   σ̂      γ̂
JF        1.243   1.221   1.173      1.133   1.078   1.028      0.762    0.716    0.729       0.886     1.067     0.947      0.279   0.873
LF        1.491   1.401   1.373      1.675   1.160   1.438      0.706    0.656    0.662       0.786     1.105     0.803      0.689   0.938
SS        1.078   1.049   1.077      1.055   1.275   1.139      0.889    0.926    1.107       0.974     1.189     1.110      0.269   0.955
YXH       1.309   1.373   1.422      1.369   1.189   1.318      0.783    0.706    0.825       0.892     1.035     0.946      0.368   0.698
Table 2

Percentage of variance accounted for (VAF) by the full linear cue combination model for each observer and the corresponding estimated cue coefficients. The four pseudocues are: proportion of shadows (â_p), mean luminance of nonshadowed regions (â_m), standard deviation of nonshadowed regions (â_s), and texture contrast (â_c). Values in boldface were found to be significantly different from 0 at the Bonferroni-corrected α level of .0125.

Observer  VAF (%)  â_p     â_m    â_s    â_c
JF        50       2.903   0.073  0.004  1.759
LF        49       6.716   0.108  0.179  0.143
SS        41       −0.923  0.015  0.060  1.008
YXH       74       2.757   0.067  0.082  2.125