Article  |  June 2014
Texture-shading flow interactions and perceived reflectance
Journal of Vision June 2014, Vol.14, 1. doi:10.1167/14.7.1

Citation: Juno Kim, Phillip J. Marlow, Barton L. Anderson; Texture-shading flow interactions and perceived reflectance. Journal of Vision 2014;14(7):1. doi: 10.1167/14.7.1.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

The appearance of surface texture depends on the identification of edge contours in an image generated by local variations in reflectance. These edges in the image need to be distinguished from diffuse shading gradients caused by the interaction of light with surface relief. To understand how the brain performs this separation, we generated textures with orientation flows that were initially congruent with the diffuse shading flow of planar surfaces. We found that rotating textures relative to shading increased the appearance of surface pigmentation, which was well explained by an increase in the variation of local orientation fields with increasing offset of texture gradients (Experiment 1). We obtained similar findings when rotating texture flow relative to the diffuse shading of spherical surfaces with global curvature (Experiment 2). In a second set of experiments, we found that perceived pigmentation of spherical surfaces depended on the perceived orientation of the light field; rotating images of spherical surfaces reduced both perceived pigmentation (Experiment 3) and perceived global texture contrast in an objective task (Experiment 4). The dependence of perceived texture on image orientation suggests that the separation of texture flow from shading depends on an assumed light-source-from-above bias. These findings support the view that separation of texture flow from shading, and thus perceived pigmentation, depends not only on the local structure of orientation fields in an image, but also on midlevel representations of shading and illuminance flow.

Introduction
We visually perceive objects with surface properties of shape, color, texture, and gloss. Although a surface's appearance is determined by the light that it reflects, the different contributions of physical properties to image structure are conflated in images. A major challenge in vision science is to understand how the visual system separates image luminance caused by variations in reflectance (i.e., changes in albedo) from surface shading generated by interactions between 3-D shape and illumination. Previous research has examined the role of surface shading, texture, and specularity in the perception of surface shape (Fleming, Dror, & Adelson, 2003; Fleming, Torralba, & Adelson, 2004; Todd, Thaler, Dijkstra, Koenderink, & Kappers, 2007; van Doorn, Koenderink, & Wagemans, 2011), but little research has attempted to understand how the brain separates luminance variations generated by texture and shading into independent components. Here, we sought to gain insight into ways in which the visual system performs this perceptual separation. 
The interaction of light with opaque surfaces is characterized by the bidirectional reflectance distribution function (BRDF). Figure 1 shows graphical renderings of an elongated convex “bump” created with an isotropic diffuse and specular BRDF (Ward, 1992). Diffuse shading of the object is generated by variations in the orientation of surface normals relative to the light source situated to the upper right (A). Specular reflectance (or sheen) is another property of surfaces and directly reflects the structure of lighting in the surrounding scene, including the primary light source (B). Pigmentation forms variations in surface reflectance (albedo) with lighter and darker regions divided by oriented edge contours (C). Despite the computational simplicity of forward optics, much remains to be understood about how the visual system separates the different contributions of physical properties to image structure into their physical causes (Barrow & Tenenbaum, 1978). 
Figure 1
 
Texture preserves the underlying shading gradients of surfaces. Upper row: A collimated light source from the upper right was used to render a simple convex surface with pure diffuse shading (A), diffuse and specular shading (B), and diffuse shading and texture (C). The diffuse reflectance was Lambertian and the specular reflectance was rendered using the Cook-Torrance model in Blender 3D. Lower row: the directions of the orientation gradients for the separate images generated using the method of Experiment 1 with an even-symmetric Gabor filter of size 3 pixels. Insets show the polar orientation of brighter shading intensity for each local anisotropy in shading flow (black indicates local isotropy in shading). Note that the addition of texture generates a web of oriented edge contours that do not influence the direction of shading gradients on either side of their edges.
Different sources of image structure can be distinguished on the basis of how they do or do not interact with viewpoint and the structure of the light field. Diffuse (Lambertian) shading flow is viewpoint independent and depends only on interactions between surface shape and the structure of the surrounding light field. Surface regions of high curvature generate strong shading gradients because of the rapid variation in the orientation of normals relative to the light source. Surface regions of low curvature generate similar shading intensities, because the orientations of surface normals relative to the light field vary slowly (Koenderink & van Doorn, 1980; Norman, Todd, & Orban, 2004). The formation of shading gradients is also influenced by the distribution of light across the surface—or illuminance flow—as variations in curvature can occlude regions of the light field, reducing the amount of light that a surface region receives (Pont & Koenderink, 2003; Koenderink, Van Doorn, & Pont, 2007). For a convex surface bump, the surface normals oriented towards the light source receive direct illumination and generate brighter shading, whereas other surface normals receive less light and generate darker shading. Consistent changes in local relief (or relief texture) will interact with illuminance flow to generate high-contrast bright and dark dipoles oriented in the direction of the primary light source (Pont & Koenderink, 2005). 
In comparison to diffuse shading, specular reflections depend on the orientation of surface normals relative to both the observer and illumination field. Although specular reflections are viewpoint dependent, the orientations of specularly reflected images are constrained by surface curvature (Koenderink & van Doorn, 1980; Norman, Fleming, Torralba, & Adelson, 2004; Norman, Todd, & Orban, 2004). For an anisotropic convex surface region, specular reflections of the environment will compress in the direction of higher curvature, generating elongated contours in the direction of minimal curvature. As shown in Figure 1B, this generative process will tend to cause the edges of specular reflections to run parallel to shading isophotes—contours of identical diffuse shading intensity (Beck & Prazdny, 1981; Fleming et al., 2004; Norman, Todd, & Orban, 2004; Todd, Norman, & Mingolla, 2004; Kim, Marlow, & Anderson, 2011; Marlow, Kim & Anderson, 2011; Kim, Marlow, & Anderson, 2012). 
Surface pigmentation generates contours that do not exhibit the same dependencies on shape and illumination as do diffuse shading and specular reflections. Unlike relief texture, which depends on shape and illuminance flow, pigment texture (hereafter referred to as just “texture”) is formed by local changes in surface reflectance or albedo. Although texture gradients depend on surface slant relative to the observer (Todd & Oomes, 2002; Thaler, Todd, & Dijkstra, 2007; Todd & Thaler, 2010), the orientations of texture edges are largely independent of the gradients generated by shading flow. Due to this independence, the orientations of texture edges—their orientation fields—can be very different from the orientation fields generated by shading gradients (Ben-Shahar & Zucker, 2001). This is demonstrated by the colored images in Figure 1, which show large angular variations in the orientation fields at sharp texture edges relative to adjacent shading gradients. Because specular highlights tend to follow isophotes in diffuse shading, they will also tend to have orientation fields aligned with those of diffuse shading. This suggests that the structure of the local orientation fields may provide information the visual system uses to separate luminance variations caused by texture from diffuse and specular shading flow. 
In keeping with this hypothesis, several studies have shown that the appearance of luminance variations as pigmentation or specular reflections depends on the orientation of sharp luminance changes relative to superimposed shading gradients (Todd et al., 2004; Anderson & Kim, 2009; Kim et al., 2011; Marlow et al., 2011; Kim et al., 2012). These studies found that destroying the natural alignment of specular reflections with the low frequency gradients generated by diffuse shading transformed their appearance from glossy reflections to variations in pigmentation. This transformation was well modeled by the difference in local orientation fields between the displaced specular reflections relative to diffuse surface shading (Anderson & Kim, 2009; Kim et al., 2011; Marlow et al., 2011). These results support the view that the visual system considers the local orientation of sharp edge contours relative to adjacent shading gradients to infer the presence of multiple contributions to image structure, as well as the physical nature of those sources. This suggests the possibility that the incongruence between the orientation fields generated by diffuse shading and those generated by variations in surface pigmentation play a significant role in the visual system's ability to segment these two sources of image structure more generally, i.e., when both sources are physically present in the scene. 
In addition to considering low-level orientation constraints, the visual system may also rely on assumptions about the structure of the illumination field to extract shading flow (Luckiesh, 1916, 1922; Ramachandran, 1988; van Doorn et al., 2011). Shading exhibits ambiguity in the depth of relief height and the elevation of illumination (a bas-relief ambiguity); for example, similar image structure can be generated by an obliquely lit shallow surface bump as by a taller surface bump illuminated more frontally. This results in ambiguity concerning the illumination direction and local 3-D surface curvature. In highly ambiguous contexts, the visual system appears to impose a bias that the primary light source comes from above (Metzger, 1975; Ramachandran, 1988; Mamassian & Goutcher, 2001; van Doorn et al., 2011), which is a statistical regularity of natural scenes. Spherical illumination mapping of typical outdoor scenes has shown that illumination intensity generally increases with increasing elevation (Dror, Willsky, & Adelson, 2004; Mury, Pont, & Koenderink, 2009). These observations together are consistent with the view that the visual system considers information concerning the primary illumination direction when parsing luminance variations into their physical causes. 
The purpose of the experiments described herein is to examine the effects of “low-level” constraints of orientation field congruence in limiting the ability to extract the image components generated by diffuse shading and surface pigmentation. The experiments also assess the perceptual effects of “mid-level” constraints on the structure of the light field, where we expected that it would be more difficult to extract these distinct image components for nongeneric light fields (i.e., where the predominant illumination source is not from above). 
Experiment 1
The goal of Experiment 1 was to test whether the visual system uses the structure of local orientation fields to separate texture from diffuse shading. According to this view, texture should be most confusable with diffuse shading when the edges of the texture share the same orientation as the diffuse shading gradients. Texture should be easiest to detect when the edges of the texture are orthogonal to diffuse shading gradients. Experiment 1 tested this idea by manipulating the orientation of a texture on a planar surface with anisotropic relief. A similar pattern of anisotropic noise was used to generate the texture and the surface relief, so that the edges of the texture would have a similar orientation to the diffuse shading gradients. The texture was then parametrically rotated relative to the surface to generate orientation incongruencies between texture edges and diffuse shading gradients. We predicted that the surfaces would appear more compellingly textured for the rotated textures than the aligned textures if the visual system uses orientation incongruencies to discriminate texture from shading. 
We also used oriented image filters to measure the orientation incongruence between sharp edges and adjacent luminance gradients that our stimuli were designed to modulate. These filters computed the orientation field of the image, and then the difference in orientation fields between adjacent pixels. Whereas aligned textures should generate small differences in orientation between texture edges and adjacent shading gradients, rotated textures should generate comparatively larger differences. 
Method
Observers
A total of nine first-year psychology students participated in the study. All had normal or corrected-to-normal vision. The study was approved by the Human Research Ethics Committee (HREC) at the University of Sydney. All procedures adhered to the Declaration of Helsinki. 
Stimuli
The stimuli were images of a 3-D planar surface with spatially varying albedo rendered in a natural light field. The 3-D surface geometry was constructed from a square mesh of 66,053 vertices set fronto-parallel to the observer at a distance of 50 cm (9° visual angle). Relief was introduced into the plane by displacing each vertex along the global z axis (i.e., along the observer's viewing direction). The displacement distance for each vertex ranged in amplitude up to 20% of the surface width and was determined by the values of an anisotropic pattern (i.e., a displacement map). The displacement map was generated in three steps. First, a pattern of isotropic noise with a 1/f power spectrum was binarized by replacing every value greater than the mean with 1.0 and every value equal to or less than the mean with 0.0. Second, the sharp edges in the binarized noise were blurred by convolving the binarized noise with a 2-D Gaussian blur kernel with a radius of 16.0 pixels. Third, the blurred noise was anisotropically filtered by computing its vertical directional derivative using the emboss tool in Adobe Photoshop. Figure 2A shows the resulting displacement map used to generate relief, which was cropped by 50% to create the inset outlined in red. The procedure was repeated to generate noise patterns with different scales (50%, 100%, and 200%), where the 100% scale was used to generate surface relief. 
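The three-step construction of the displacement map can be sketched with numpy alone. This is a hedged approximation: FFT-based 1/f noise stands in for the cloud-noise pattern, a circular FFT blur stands in for the Photoshop Gaussian blur, and the mapping from the 16-pixel blur radius to a Gaussian sigma is an assumption.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Circular Gaussian blur via FFT (a stand-in for the Photoshop blur)."""
    n = img.shape[0]
    d = np.minimum(np.arange(n), n - np.arange(n))          # circular distance
    k = np.exp(-(d[:, None]**2 + d[None, :]**2) / (2 * sigma**2))
    k /= k.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k)))

def displacement_map(size=256, seed=0, blur_radius=16.0):
    """Sketch of the three-step map: 1/f noise -> binarize at the mean ->
    Gaussian blur -> vertical directional derivative (emboss-like)."""
    rng = np.random.default_rng(seed)
    # 1/f-style noise: random phases with a 1/f amplitude falloff (approximation).
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0                                           # avoid divide-by-zero at DC
    phase = np.exp(1j * 2 * np.pi * rng.random((size, size)))
    noise = np.real(np.fft.ifft2(phase / f))
    # Step 1: binarize at the mean.
    binary = (noise > noise.mean()).astype(float)
    # Step 2: blur the sharp edges (sigma mapping is an assumption).
    blurred = gaussian_blur(binary, sigma=blur_radius / 2.0)
    # Step 3: vertical directional derivative gives the anisotropic relief map.
    return np.gradient(blurred, axis=0)
```

Rescaling the same noise basis (50%, 100%, 200%) then yields the three texture scales described above.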
Figure 2
 
Method used to create displacement maps and surface texture. (A) An initial image was generated by applying a directional derivative to a cloud noise pattern. The derived image was subdivided into quadrants to obtain four displacement maps, each with globally similar shading flow to the others (e.g., as outlined in red). A new noise basis with the same statistical properties was generated and subdivided twice and binarized to produce three scales of surface texture (50%, 100%, and 200%). (B) Displacement maps were used to create 3-D surface relief by increasing the height of the surface according to the intensity of the derived images. Binary texture maps were used to multiplicatively increase surface luminance by 20% in Blender 3D. Black texture regions did not alter surface luminance. To increase texture incongruence with shading (as shown in C), the texture maps were rotated by known angles (θ) before the local surface luminance was increased.
The same anisotropic noise pattern generating the surface relief was also used to produce the spatially varying texture (albedo). The texture pattern was created by binarizing the blurry anisotropic noise pattern generating the relief. The binarization replaced values greater than the mean with an albedo value of 0.6 and replaced values less than or equal to the mean with an albedo value of 0.5. The texture pattern was mapped orthographically onto the surface to assign albedo values to each surface point. The orientation of the texture pattern, θ, was either 0°, 15°, 45°, or 90° relative to the surface. The spatial scale of the texture pattern was either the same as the pattern generating the relief, half the scale of the relief, or twice the scale of the relief. 
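The two-tone albedo map and its in-plane rotation can be sketched as follows. The nearest-neighbour rotation is an assumption standing in for whatever resampling the authors used; it preserves the two albedo values exactly.

```python
import numpy as np

def rotate_nearest(img, theta_deg):
    """Nearest-neighbour in-plane rotation about the image centre."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(theta_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map each output pixel back to its source coordinate.
    y0 = cy + (ys - cy) * np.cos(t) - (xs - cx) * np.sin(t)
    x0 = cx + (ys - cy) * np.sin(t) + (xs - cx) * np.cos(t)
    yi = np.clip(np.rint(y0).astype(int), 0, h - 1)
    xi = np.clip(np.rint(x0).astype(int), 0, w - 1)
    return img[yi, xi]

def texture_albedo(noise, theta_deg=0.0, hi=0.6, lo=0.5):
    """Binarize the blurred noise into the two albedo values, then rotate
    the pattern by theta (0, 15, 45, or 90 degrees) before surface mapping."""
    albedo = np.where(noise > noise.mean(), hi, lo)
    return rotate_nearest(albedo, theta_deg) if theta_deg else albedo
```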
The surfaces were rendered using the Lambertian reflectance model with a light probe obtained photographically from a natural scene (Debevec, 2002). We used the “Eucalyptus Grove” light probe because it has a similar vertical anisotropy on average to the 3-D surface relief. This alignment in the generative space ensured the resulting direction of diffuse shading gradients shared the same predominant vertical orientation in the image as the unrotated texture flow pattern. 
Figure 3 shows the resulting stimuli with different texture scales combined with relief in different orientations. The surface geometry is identical across the images. The orientation of the texture varies between the vertical columns (aligned with the surface relief in the left vertical column of images and orthogonal to the surface relief in the right vertical column). The spatial scale of the texture increases from top to bottom between horizontal rows and matches the spatial scale of the surface's relief in the middle row. Stimulus images were presented on a 19 in. Mitsubishi Diamond Plus Super Bright monitor (luminance range approximately 0.04 to 96.0 cd/m²). 
Figure 3
 
Planar surfaces with relief and texture added in different orientations. Surface relief height was created using a directionally derived cloud texture. The surface geometry was textured with the binarized cloud texture. The orientation of the resulting texture was rotated over successive angles within the image plane (along rows: 0°, 15°, 45°, and 90°). Texture scales increased between image sets (down each column) for each texture orientation.
Psychophysical task
We measured the visibility of the spatially varying albedo of the surfaces using a two-alternative forced choice (2AFC) paired comparison task. Observers viewed two stimuli presented side-by-side on the screen and were instructed to select the surface in which the two colors of the pigment were most visible over the entire surface. Each of the 12 stimuli in Figure 3 was compared with each of the other three stimuli in the same horizontal row of images, so that observers always compared surfaces with the same scale of texture. There were three blocks of trials; each block contained 12 counterbalanced comparisons for the four levels of texture orientation (4 × 4 − 4 = 12) repeated three times using images with the same parameters, but different randomized relief and texture (36 trials per block). The three blocks of trials were performed separately for each of the three texture scales, producing a complete set of 108 trials (36 × 3). A 5-min rest was provided between each test block. Responses were recorded using the left and right arrow keys on a standard PC keyboard. Prior to the experiment, observers were shown examples of real textured surfaces, such as marble and granite, that contain more than one color or shade of gray to clarify the task. 
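The counterbalanced comparison structure (4 × 4 − 4 = 12 ordered pairs per repeat, three repeats per block) can be sketched as:

```python
from itertools import permutations

def paired_comparison_block(levels, repeats=3):
    """Counterbalanced 2AFC trial list: every ordered pair of distinct
    texture orientations (left/right counterbalanced), repeated for each
    fresh stimulus randomization."""
    pairs = list(permutations(levels, 2))   # 4 x 4 - 4 = 12 ordered pairs
    return pairs * repeats                  # 36 trials per block

trials = paired_comparison_block([0, 15, 45, 90])
```

Running one such block per texture scale gives the complete set of 108 trials (36 × 3).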
Data analysis
We estimated the probability that a surface in a given image was perceived as more pigmented by dividing the number of times an image was selected by the total number of times it was presented. A repeated-measures analysis of variance (ANOVA) was used to determine whether there were main effects of texture scale, texture orientation, and interactions between scale and orientation on perceived surface pigmentation. We arc-sine transformed the probability data prior to running the ANOVA to control for homogeneity of variance. 
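A minimal sketch of the probability estimate and the variance-stabilizing transform applied before the ANOVA; the exact arcsine variant (2·arcsin√p) is an assumption, as the paper does not specify it.

```python
import numpy as np

def selection_probability(times_selected, times_presented):
    """Probability that a stimulus was judged more pigmented."""
    return times_selected / times_presented

def arcsine_transform(p):
    """Arcsine-square-root transform of proportions, commonly used to
    stabilize variance before ANOVA (exact variant is an assumption)."""
    return 2.0 * np.arcsin(np.sqrt(np.asarray(p, dtype=float)))
```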
Results and discussion
Figure 4 plots the mean probability estimates of perceived surface pigmentation as a function of the relative orientation of the texture and diffuse shading. These data show a general increase in perceived pigmentation with increasing angular offset of texture relative to diffuse shading. For surfaces with full scale texture, a repeated-measures ANOVA on the arc-sine transformed data found a significant effect of texture offset on perceived surface pigmentation, F(3, 24) = 6.23, p < 0.005, evident in the general increase in perceived pigmentation for all of the three colored curves with increasing orientation. There was no significant effect of texture scale on perceived surface pigmentation, F(2, 16) = 0.174, p = 0.207. There was a significant interaction effect between texture offset and scale on perceived surface pigmentation, F(6, 48) = 2.41, p < 0.05, evident in the decline in slope of curves with decreasing texture frequency. For matte surfaces with half scale relief, a repeated-measures ANOVA on the arc-sine transformed data found a significant effect of texture offset on perceived surface pigmentation, F(3, 24) = 9.67, p < 0.0005, again evident in the general increase of the three colored curves with increasing orientation. There was no significant effect of texture scale on perceived surface pigmentation, F(2, 16) = 0.074, p = 0.929. There was a significant interaction effect between texture offset and scale on perceived surface pigmentation, F(6, 48) = 2.38, p < 0.05, again evident in the decline in slope of curves with decreasing texture frequency. 
Figure 4
 
Mean probability estimates of perceived pigmentation of surfaces with half-scale (A) and full-scale relief (B). Images in upper sections are sample renderings of matte surfaces with textures at 0° and 90° of offset. The mean probability that a surface was selected as more pigmented is plotted below as a function of physical texture offset (in degrees) relative to diffuse shading. Black points and lines show data for surfaces with high-frequency texture. Red points and lines show data for surfaces with same-frequency texture, whereas blue points and lines show data for surfaces with low-frequency texture. Error bars indicate standard errors of the mean.
In order to understand this pattern of data, we first computed the contrast of edges in the image as shown in Figure 5. We computed the local energy (E) of each stimulus image by taking the square root of the sum of squared differences between adjacent pixel intensities in the horizontal and vertical directions (Equation 1). Mean edge contrast was then computed as the average local energy above a specific threshold in the image (Equation 2).

$$E(x,y) = \sqrt{\left(I(x+1,y) - I(x,y)\right)^{2} + \left(I(x,y+1) - I(x,y)\right)^{2}} \tag{1}$$

$$C = \frac{1}{N}\sum_{x,y} E(x,y), \quad \text{for pixels where } E(x,y) > \text{threshold} \tag{2}$$

where I(x, y) is the image intensity at pixel (x, y) and N is the number of pixels whose local energy exceeds the threshold. The higher thresholds in Figure 5 restrict the contrast calculation to pixels corresponding to potential texture edges. The average contrast value of the pixels identified as edges is plotted in Figure 5 as a function of texture offset angle. The different colored curves depict the average energy of edge pixels identified with a low, moderate, or high contrast threshold. Note that there is no observable variation in mean edge contrast across changes in texture orientation. This invariance in the mean contrast of texture edges was confirmed by an ANOVA performed using the randomized repeat stimulus sets in the current experiment. There was no significant main effect of texture orientation on edge contrast, F(3, 6) = 0.04, p = 0.987. There was also no significant interaction effect on edge contrast between texture offset and the chosen gradient threshold to isolate edges from diffuse shading gradients, F(6, 12) = 0.46, p = 0.827. These results suggest that variation in texture edge contrast does not account for the perception of surface texture in the current experiment. 
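Assuming I is a grayscale image array, the local energy and thresholded edge-contrast measures described above can be sketched as:

```python
import numpy as np

def local_energy(img):
    """E(x, y): RMS of horizontal and vertical adjacent-pixel differences."""
    dx = np.diff(img, axis=1)[:-1, :]   # horizontal neighbour differences
    dy = np.diff(img, axis=0)[:, :-1]   # vertical neighbour differences
    return np.sqrt(dx**2 + dy**2)

def mean_edge_contrast(img, threshold):
    """Average local energy over pixels exceeding the threshold."""
    e = local_energy(img)
    edges = e[e > threshold]
    return edges.mean() if edges.size else 0.0
```

Varying the threshold reproduces the low, moderate, and high contrast criteria used to isolate candidate texture edges.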
Figure 5
 
Model output from edge contrast measurements. (A) Transformed images showing the root-mean-squared (RMS) magnitude of shading gradients across the front region of a matte surface with rotated texture. Intensity corresponds to the omnidirectional strength of the shading gradient at each pixel location. Different threshold values were set to shallower shading gradients. (B) Mean edge contrast is plotted as a function of texture offset (in degrees) for the three threshold levels of identified edge. Standard deviations outlined in broken lines were determined from the pool of images rendered for repeat stimulus conditions.
Rather than altering the strength of edge contrast per se, texture rotations generate differences in the local orientation between pigment edges and adjacent diffuse shading flow. As shown in Figure 6, we measured the orientation of each pixel using balanced single-cycle even-symmetric Gabors. We tested four Gabor scales in which the Gaussian envelope had a width of 3.0, 7.0, 11.0, and 25.0 pixels (approximately 1% to 5% of image width). The relative responses of horizontal (G_h) and vertical (G_v) Gabors at each pixel location were used in Equation 3 to compute the orientation (θ) of the maximum luminance gradient in the image (colored image in Figure 6A).

$$\theta(x,y) = \arctan\!\left(\frac{G_v(x,y)}{G_h(x,y)}\right) \tag{3}$$

We discarded the sign of the local contrast polarity (0° to 360°) by collapsing the orientation of the luminance gradient to a 0° to 180° range. The difference in orientation was then computed between horizontally and vertically adjacent pixels (Equations 4 and 5).

$$\Delta\theta_{h}(x,y) = \theta(x+1,y) - \theta(x,y) \tag{4}$$

$$\Delta\theta_{v}(x,y) = \theta(x,y+1) - \theta(x,y) \tag{5}$$

These differences in orientation were then squared and summed, the square root of which provided an index of local orientation incongruence (Equation 6), i.e., the local variation in orientation between adjacent pixels (0° to 90°).

$$\Phi(x,y) = \sqrt{\Delta\theta_{h}(x,y)^{2} + \Delta\theta_{v}(x,y)^{2}} \tag{6}$$

The mean orientation incongruence was then computed within a central portion of the surface. 
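A simplified numpy sketch of this orientation-field analysis follows. Plain derivative filters stand in for the single-cycle even-symmetric Gabors (an assumption made for brevity), so the absolute numbers will differ from the paper's model output, but the pipeline (orientation extraction, polarity collapse, adjacent-pixel incongruence) is the same.

```python
import numpy as np

def orientation_field(img):
    """Orientation of the maximum luminance gradient at each pixel, with
    contrast polarity discarded (collapsed to the 0-180 degree range).
    Derivative filters stand in for the Gabor filter bank."""
    gy, gx = np.gradient(img)
    theta = np.degrees(np.arctan2(gy, gx))   # -180..180 degrees
    return np.mod(theta, 180.0)

def angular_diff(a, b):
    """Smallest difference between two orientations (0..90 degrees)."""
    d = np.abs(a - b) % 180.0
    return np.minimum(d, 180.0 - d)

def mean_incongruence(img):
    """RMS of orientation differences between horizontally and vertically
    adjacent pixels, averaged over the image (local incongruence index)."""
    theta = orientation_field(img)
    dh = angular_diff(theta[:, 1:], theta[:, :-1])[:-1, :]
    dv = angular_diff(theta[1:, :], theta[:-1, :])[:, :-1]
    return np.sqrt(dh**2 + dv**2).mean()
```

A pure shading gradient yields a smoothly varying field and near-zero incongruence; rotated texture edges cut across that field and drive the index up.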
Figure 6
 
Orientation field responses to images of matte surfaces with texture added in different orientations. (A) Raw images of relief with added texture were filtered using 2-D horizontal and vertical even symmetric Gabor filters that were single cycle and one of four different sizes (3, 7, 13, or 25 pixels). The vector specifying the orientation of the maximum shading gradient was determined in polar coordinates (colored image). (B) Raw images of surfaces with texture applied in different orientations (angular value shown) and their orientation-field transforms (colored). Greater abundance of local variation in orientation is coded by increasing intensity within the central square regions of interest.
Figure 7 plots the mean orientation incongruence as a function of texture offset for high and moderate relief surfaces. The different color series in Figure 7 correspond to the four Gabor sizes tested. Note that for all Gabor sizes, orientation incongruence increases with texture offset, which rotates the texture edges out of alignment with the flow fields of diffuse shading. These model data from orientation field variations are consistent with the general increase in perceived pigmentation of objects depicted in these images. We pooled the model data by linearly averaging outputs across the four filter sizes and two levels of relief at each texture orientation level. We computed the correlation between the pooled model data and the psychophysical data pooled by linear averaging probabilities across relief and texture scales. We found that the model accounted for 98% of the variance in the means of the psychophysical data plotted in Figures 4 and 7 (R² = 0.98). These psychophysical and model data together are consistent with the view that local variations in orientation fields are used to separate shading gradients from texture flow generated by planar surfaces. 
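The variance-accounted-for statistic reported above is the squared Pearson correlation between the pooled model outputs and the pooled psychophysical means, which can be computed as:

```python
import numpy as np

def variance_explained(model, data):
    """R^2: squared Pearson correlation between pooled model output
    and pooled psychophysical probabilities."""
    r = np.corrcoef(model, data)[0, 1]
    return r * r
```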
Figure 7
 
Model output of orientation field responses to images of surfaces with added texture. Plots show mean variation in polar orientation as a function of global texture offset (degrees). Note how texture rotations increase the variation between the orientation gradients for surfaces with different relief. Separate curves show data obtained with different sizes (in pixels) of balanced single-cycle oriented Gabors. Surface images were 512 × 512 pixels. Error bands are standard errors computed across a family of images in each category within the central surface patch.
Experiment 2
The results of the previous experiment suggested that perceived surface pigmentation depends on the orientation of texture edges relative to the shading flow of diffuse reflectance. Our modeling suggests that the visual system may segment texture edges from shading flow by analyzing the angular incongruence of sharp edge contours relative to orientation fields generated by shading gradients. However, the shading generated by the planar bumpy surfaces depends primarily on surface pose relative to the dominant illumination direction, which in these images is from above. Most surfaces tend not to be planar and generally exhibit significant variations in both local and global curvature. The shading of these curvatures will depend on surface pose and variations in the distribution of light intensity across the surface (illuminance flow). This added complexity may complicate the task of separating luminance variations caused by texture from shading flow. In Experiment 2, we tested the extent to which the dependence of perceived pigmentation on the orientation of edge contours generalized to non-planar surfaces. 
Method
Observers
A total of 10 first-year psychology students participated in the study. All had normal or corrected-to-normal vision. Their participation in the study was subject to approval obtained from the Human Research Ethics Committee (HREC) at the University of Sydney. 
Stimuli
Stimuli were generated by rendering images of a 3-D bumpy spherical surface with the Eucalyptus Grove light field. The average primary illumination direction was from above, which generated vertical shading gradients. The base 3-D mesh that defined the surface was a geodesic sphere consisting of 163,846 vertices. Bumps were generated by displacing the vertices in a direction perpendicular to the tangent plane of the sphere by up to 20% of the surface's diameter. The displacement distance for each vertex was determined by the same displacement map used in Experiment 1. The spatially varying albedo was also determined by the same texture maps used in Experiment 1. The displacement and texture maps were projected onto the surface using flat mapping in Blender 3D. Flat mapping projects equal areas of the maps onto equal areas of the spherical mesh. Stimulus images were rendered as in Experiment 1, and displayed at the same scale on the same display. Figure 8 shows samples of the stimuli used in the experiment. 
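The bump construction can be sketched as a displacement of each vertex along its surface normal, which for a sphere is simply the normalized position vector. The meshes were actually built in Blender 3D, so the function name and unit-sphere convention below are illustrative assumptions; only the displacement direction (perpendicular to the tangent plane) and the 20%-of-diameter maximum come from the text.

```python
import numpy as np

def displace_sphere_vertices(vertices, displacement, max_frac=0.20):
    """Displace points on a unit sphere along their outward normals.

    vertices:     (N, 3) vertex positions on a unit sphere.
    displacement: (N,) values in [0, 1] sampled from the displacement map.
    max_frac:     peak displacement as a fraction of the sphere's diameter
                  (20% in Experiment 2).
    """
    # for a sphere, the outward normal at a vertex is its normalized position
    normals = vertices / np.linalg.norm(vertices, axis=1, keepdims=True)
    diameter = 2.0  # unit sphere
    return vertices + normals * (displacement[:, None] * max_frac * diameter)
```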
Figure 8
 
Matte renderings of spherical surfaces with relief and added texture. Surface relief height was created using the directional displacement maps of Experiment 1. The surface geometry was pigmented by applying the same binarized textures used in the previous experiment. Column values indicate the orientation of the texture around the viewing axis in degrees. Texture scale varies between the rows as a percentage of texture scale used to create surface geometry.
Procedure
All procedures were identical to those of Experiment 1. Observers were instructed to base their judgments of perceived surface pigmentation on the surface as a whole. This not only kept the task consistent with Experiment 1 but also encouraged observers to consider the global coverage of pigmentation across the surface, including surface regions that generate foreshortening. 
Data analysis
Psychophysical data were analyzed the same way as in Experiment 1. The orientation field model was again applied to surface regions within the bounding contour of the surfaces. A suitable mask image was used to restrict the analysis of orientation fields to avoid the occlusion boundaries. 
Results and discussion
Figure 9 plots the mean probability estimates of perceived surface pigmentation as a function of angular texture offset in degrees relative to diffuse shading. The data show a general increase in perceived pigmentation with increasing angular offset of textures relative to diffuse shading. A repeated-measures ANOVA on the arc-sine transformed probability data found a significant effect of angular offset in texture on perceived surface pigmentation, F(3, 27) = 6.21, p < 0.005. There was no significant effect of texture scale on perceived surface pigmentation, F(2, 18) = 1.09, p = 0.356. There was a significant interaction effect between the angular offset and scale of texture on perceived surface pigmentation, F(6, 54) = 2.67, p < 0.05. 
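The arc-sine transform applied before these ANOVAs is conventionally the arcsine-square-root transform, which stabilizes the variance of proportion data; the exact variant is not stated in the text, so this one-liner is a sketch under that assumption.

```python
import numpy as np

def arcsine_sqrt(p):
    """Variance-stabilizing transform for proportions p in [0, 1],
    mapping [0, 1] onto [0, pi/2] before parametric analysis."""
    p = np.asarray(p, dtype=float)
    return np.arcsin(np.sqrt(p))
```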
Figure 9
 
Mean probability estimates of perceived pigmentation on bumpy spheres. Images show example surfaces with 90° texture rotation. Perceived strength of apparent texture of the surfaces is plotted as a function of physical texture offset relative to diffuse shading. Black points and lines show data for surfaces with high-frequency texture. Red points and lines show data for surfaces with textures that had the same scale as relief, whereas blue points and lines show data for surfaces with half-frequency textures. Error bars indicate standard errors of the mean.
Figure 10 shows the mean variation in orientation between adjacent pixels plotted as a function of texture orientation. The different colored series in the figure depict the results obtained with different Gabor filter sizes. Variation in local orientation fields increased with increasing angular offset of texture relative to diffuse shading. For each texture scale, we pooled the model data by linear averaging responses of the four filter sizes for images with the same texture orientation. We correlated the pooled model data against psychophysical data pooled by linear averaging across conditions with the same texture scale and orientation. We found that the model data accounted for 86% of the variability in psychophysical responses (R2 = 0.86). This relationship was slightly weaker than that obtained with planar surfaces in the previous experiment. It is possible that the difference relates to the foreshortening of shading gradients in images of spheres, which might indicate a dependence on nonlinear pooling of low-level filter responses across different spatial scales. The effect of texture rotation on increasing orientation field variability is most linear for responses obtained with larger oriented filters, which most closely resemble the psychophysical data obtained with moderate frequency textures. However, the model responses of all filter sizes (Figure 10) did not fully capture the pattern of data at the low texture scale (blue curve in Figure 9). 
Figure 10
 
Model output of orientation field responses to images of bumpy spheres with different texture scales. Plot shows mean variation in shading orientation as a function of global texture offset (degrees). Separate curves show data obtained with different sizes (in pixels) of balanced single-cycle oriented Gabors. Across different filter sizes, there is a general increase in shading variability with increasing texture offset relative to diffuse shading. However, note how the relationship declines with increasing texture scale (i.e., lower texture frequencies), roughly following the decline observed in the psychophysical data in Figure 9. Surface images were 512 × 512 pixels. Error bands are standard errors computed across a family of images in each category within the central surface patch.
The results show that perceived surface pigmentation depends on the orientation of texture edges relative to diffuse shading flow for both planar and spherical surfaces. We found an interaction effect between the scale and orientation of texture on perceived pigmentation, which may again be explained by the greater number of reflectance edges in the high-density textures. However, the orientation filter model accounted for less variation in the psychophysical data, compared with the previous experiment. Also, none of the filter sizes accounted for the psychophysical data obtained with low-density textured spheres. It is possible that the psychophysical responses might be explained by a dependence on other constraints generated by the complex interactions between illuminance flow and the multiscale 3-D shapes used here. 
Experiment 3
In Experiments 1 and 2, we found that perceived surface pigmentation depends on the orientation of texture flow relative to diffuse shading gradients generated by the interaction of illumination with planar and spherical 3-D surfaces. Orientation-field modeling suggested that the visual system could estimate pigmentation based on relatively low-level visual computations. However, it is possible that the perception of pigmentation on bumpy surfaces depends on additional midlevel constraints, such as biases in the light field. As shown in Figure 11, textured planes exhibit a bas relief ambiguity whereby a concave surface region in the image of an upright planar surface appears convex when the image is inverted. Spherical surfaces do not exhibit the same ambiguity in bas relief because their global curvature constrains the pattern of local shading dipoles due to foreshortening, which can improve perceptual estimates of shape and illuminance flow (Langer & Bulthoff, 2001; Liu & Todd, 2004; Koenderink et al., 2007). 
Figure 11
 
Effect of image orientation on perceived texture of planar surfaces with relief. Note that the same region of the upright surface indicated by the white arrow (left) appears sunken, compared with the same region in the inverted surface (right), which now appears raised.
In Experiment 3, we examined the possibility that it would be more difficult to compute shading for nongeneric light fields that do not conform to the upright anisotropic structure of most natural scenes. We rotated images of spherical surfaces up to 180° to violate the illumination from above bias to varying degrees. In comparison, similar rotations applied to planar surfaces should have a comparatively smaller impact due to the bas relief ambiguity. These image rotations also had the advantage of preserving all the luminance gradients within the image. If the computation of shading depends on an illumination from above bias, then it should be more difficult to separate shading and texture flow when the illumination field is inverted versus upright. Alternatively, if low-level orientation field differences principally account for the separation of shading and texture flow, then perceived pigmentation should remain invariant across changes in image orientation. 
Method
Observers
A total of 14 first-year psychology students participated in the study. All had normal or corrected-to-normal vision. Their participation in the study was subject to approval obtained from the Human Research Ethics Committee (HREC) at the University of Sydney. 
Stimuli
Stimuli were the same planar and spherical objects used in Experiments 1 and 2. To minimize the duration of the experiment, only textures of the same or higher frequency as those used to create relief were used here. As shown in Figure 12, images were rotated over three levels in clockwise orientation (0°, 90°, and 180°). We also only used texture offsets relative to diffuse shading of 0° and 90°. 
Figure 12
 
Different texture orientations on matte bumpy planes and spheres. (A) Upright images of planar and spherical surfaces with relief and texture oriented at 0° (left in pair) and 90° (right in pair). (B) The same images in A, but rotated 90° clockwise. (C) The same images in A, but rotated 180° (i.e., completely inverted).
Procedure
All procedures were identical to those of Experiments 1 and 2. Observers were instructed to base their judgments of perceived surface pigmentation on the surface as a whole and to not limit their judgments to comparisons between any one of the surface regions. This instruction was given to encourage observers to perform a global assessment of surface texture, rather than concentrating on a single neighborhood of edge contours. Images of planar and spherical surfaces were presented in separate 2AFC sessions performed in counterbalanced order. Within each session there was a total of (6 × 6 − 6) × 3 = 90 trials (all ordered pairings of the six stimulus images, excluding self-pairs, across three sets of surfaces with different random geometries and textures). To further facilitate the global judgment of surface pigmentation, images were presented for 2 s on each trial. 
Results and discussion
Figure 13 plots the mean probability estimates of perceived surface pigmentation as a function of global image orientation. For planar surfaces with low-frequency texture, a repeated-measures ANOVA on the arc-sine transformed data found no significant main effect of image orientation on perceived surface pigmentation, F(2, 26) = 2.17, p = 0.14. There was a significant main effect of texture orientation relative to shading on perceived pigmentation, F(1, 13) = 19.22, p < 0.001, but no interaction effect between texture and image orientation, F(2, 26) = 0.32, p = 0.73. For planar surfaces with high-frequency texture, another repeated-measures ANOVA on the arc-sine transformed data found no significant main effect of image orientation on perceived surface pigmentation, F(2, 26) = 1.96, p = 0.16. There was a significant main effect of texture orientation relative to shading on perceived pigmentation, F(1, 13) = 35.71, p < 0.00005, but no interaction effect between texture and image orientation, F(2, 26) = 1.56, p = 0.23. 
Figure 13
 
Effect of image orientation on perceived pigmentation of planar surfaces (top) and bumpy spheres (bottom). Solid points show estimates for perceived pigmentation of aligned textures (0°), whereas open points show estimates of rotated textures (90°). Different colors are used to show data obtained with high-frequency texture (A and C in black points and lines) or same-frequency textures (B and D in red points and lines) as the underlying relief. Error bars indicate standard errors of the mean.
For spherical surfaces with low-frequency texture, a repeated-measures ANOVA on arc-sine transformed probability estimates found a significant main effect of image orientation on perceived pigmentation, F(2, 26) = 10.15, p < 0.001. There was also a significant main effect of texture orientation relative to shading on perceived pigmentation, F(1, 13) = 22.38, p < 0.0005, but no interaction effect between texture and image orientation, F(2, 26) = 2.00, p = 0.16. For spherical surfaces with high-frequency texture, another repeated-measures ANOVA on the arc-sine transformed probability estimates again found a significant main effect of image orientation on perceived pigmentation, F(2, 26) = 11.54, p < 0.0005. Perceived pigmentation also showed a significant dependence on the relative orientation of the texture and the shading, F(1, 13) = 8.44, p < 0.05, which did not interact significantly with image orientation, F(2, 26) = 0.80, p = 0.46. 
These results replicate the effects of texture orientation on perceived surface pigmentation found in the previous experiments. This finding was evident in the significant differences in perceived pigmentation between rotated and unrotated textures on planar and spherical surfaces. However, we also found that inverting images significantly reduced the salience of perceived texture. This was further verified by follow-up Bonferroni-corrected contrasts, which found a significant difference in perceived texture between upright images and inverted images (i.e., between image orientations at 0° and 180°) of bumpy spheres, t(13) = 2.96, p < 0.05, but not bumpy planes, t(13) = 0.99, p > 0.05. This suggests that the appearance of surface texture depends not only on low-level information about local shading gradients but also on constraints that embody illumination biases. All of the image structure was preserved across changes in image orientation. The decline in perceived pigmentation for inverted spheres is consistent with increased incompatibility between an illumination from above bias and shading flow consistent with inverted illumination. This is consistent with the view that violating the light from above assumption complicates the task of computing shading, which makes it more difficult to segment pigmentation from shading. The lack of decline in perceived pigmentation between upright and inverted planes may be explained by perceptual resolution of the ambiguity in local surface concavity/convexity by a perceived illumination from above. 
Experiment 4
The results so far suggest that the appearance of surface texture depends not just on local orientation field computations, but also on midlevel computations constrained by an illumination from above bias. Whereas the results of Experiments 1 and 2 suggested that the visual system could estimate texture based on variation in low-level orientation fields, the psychophysical results of Experiment 3 suggested that the visual system considers the structure of the illumination environment when inferring shading and texture flow. Violation of the light source from above bias may impede the ability to parse luminance variations into separate components of shading and texture flow. Although the physical contrast of the texture was held constant in the preceding experiment, perceived surface pigmentation declined when the direction of the light source was inconsistent with illumination from above. 
In Experiment 4, we assessed the effects of light field orientation on perceived pigmentation by systematically varying image orientation and texture contrast. We performed two experiments. In the first (4a), we examined the effect of changing the orientation of images on the perceived global contrast of surface texture. In the second (4b), we created an objective task by manipulating the global distribution of texture contrast: We regionally attenuated texture intensity and required observers to detect the image that contained the texture with globally higher contrast. This makes the judgment objective because increasing the attenuation physically alters global contrast, so on any one trial there was a correct and an incorrect response that the observer could make. If the violation of an illumination from above bias disrupts the regional separation of texture flow from shading, then observers should find it more difficult to discriminate the low and high contrast textures when the illumination is inverted than when it comes from above. 
Method
Observers
A total of 15 first-year psychology students and two authors (JK and PM) participated in the experiment. All had normal or corrected-to-normal visual acuity. Their participation in the study was subject to approval obtained from the Human Research Ethics Committee (HREC) at the University of Sydney. 
Stimuli
The 3-D geometries were the same spherical objects used in Experiments 2 and 3. To minimize the duration of the experiment, we only used textures that were oriented at 90° to surface relief. In Experiment 4a, separate image sets were created by rotating images away from upright (0°, 45°, 90°, 135°, and 180°) for surfaces with each of the three texture scales. We manipulated global texture contrast uniformly in Blender 3D by increasing the texture intensity scalar multiplier over three levels (+0.05, +0.10, and +0.20). 
In Experiment 4b, we introduced regional variations using a cloud noise map (see Figure 14). The luminance values of the pink (cloud) noise were subtracted from the original texture images to attenuate local texture contrast non-linearly throughout the image. The intensity range of the noise pattern above zero was increased to generate greater levels of regional attenuation in the resulting texture maps. This reduced the reflectance of the lighter component of the pigment texture toward that of the darker (unchanged) pigmentation. Seven levels of attenuation were imposed for each set of surface renderings (0.0, 0.25, 0.375, 0.5, 0.625, 0.75, 1.0). Textures were always at 90° in orientation and twice the frequency relative to relief. We rendered three random repeat sets of images with different geometries and texture noise bases. 
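The regional attenuation can be sketched as a clipped subtraction of scaled cloud noise from the texture map. The function name and the clipping convention are assumptions for illustration, since the maps were actually composited in Blender 3D; the subtraction of the above-zero part of the noise, and the seven attenuation levels, come from the text.

```python
import numpy as np

def attenuate_texture(texture, noise, level):
    """Regionally attenuate texture contrast by subtracting scaled noise.

    texture: 2-D reflectance map with values in [0, 1].
    noise:   cloud-noise map of the same shape.
    level:   attenuation multiplier (0.0 .. 1.0 over seven steps in Exp. 4b).
    """
    # only the part of the noise above zero attenuates, pulling the lighter
    # pigment values down toward the darker (unchanged) pigmentation
    attenuated = texture - level * np.clip(noise, 0.0, None)
    return np.clip(attenuated, 0.0, 1.0)
```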
Figure 14
 
Method used to vary texture contrast regionally throughout the image. Upper row: the “Original” texture used in the previous experiment was attenuated by subtracting the luminance values of a cloud “Noise” texture from the original. The amplitude of the noise pattern was multiplicatively scaled (0.0 to 1.0) to vary the level of attenuation. Middle row: Upright images of surfaces with increasing regional attenuation (0.0, 0.5, 1.0). Lower row: Inverted images of surfaces with the same attenuation levels.
Procedure
For Experiment 4a, 12 first-year psychology students performed psychophysical tasks with stimuli that varied uniformly in texture contrast. Procedures were identical to those of Experiment 3. However, observers were instructed to select surfaces that “appear to have greater contrast in pigmentation on average across the surface as a whole.” This instruction encouraged observers not to limit their judgment to specific regions on the surface, but rather to judge the global contrast of the texture across the surface. 
For Experiment 4b, four other first-year psychology students and two authors performed tasks with stimuli that varied in regional attenuation. The method of constant stimuli was used to vary the (contrast) strength of the texture mask. In one block of trials, upright images for each level of texture attenuation were randomly paired with an upright version of the surface with an attenuation of 0.5 (the reference image). In the alternate block of trials, each level of texture attenuation in upright images was randomly paired with an inverted version of the surface with an attenuation of 0.5. Presentation of trials was counterbalanced, as was the order of images presented on either side of the display. Repeats were performed using images with the three unique surface geometries. Each unique pairing of images was presented to each observer a total of 12 times within each block of trials. Estimates of the point of subjective equality (PSE) for each of these two curves were obtained after fitting the Weibull function to the psychometric data. 
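The PSE estimation can be sketched by fitting a Weibull psychometric function and inverting it at the 50% point. The paper does not specify the parameterization or fitting routine, so the two-parameter form and the use of `scipy.optimize.curve_fit` below are assumptions; for choice probabilities that decrease with attenuation, one would fit the mirrored function or reverse the stimulus axis.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(x, alpha, beta):
    """Two-parameter Weibull psychometric function rising from 0 to 1."""
    return 1.0 - np.exp(-(x / alpha) ** beta)

def fit_pse(levels, p_chosen, criterion=0.5):
    """Fit the Weibull to choice probabilities and return the PSE,
    the stimulus level at which the fitted curve crosses the criterion."""
    (alpha, beta), _ = curve_fit(weibull, levels, p_chosen,
                                 p0=[np.median(levels), 2.0])
    # invert the fitted function at the criterion probability
    return alpha * (-np.log(1.0 - criterion)) ** (1.0 / beta)
```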
Results and discussion
For stimuli that varied uniformly in texture contrast (Experiment 4a), Figure 15 plots the probability that a surface was selected as having greater perceived contrast as a function of image orientation in degrees. The data show comparisons across image orientations within each texture contrast level and do not reflect comparisons between different levels of texture contrast. A repeated-measures ANOVA on the arc-sine transformed probability estimates found a significant main effect of image orientation on perceived surface contrast, F(4, 44) = 5.60, p < 0.001. There was no significant interaction effect between texture scale and image orientation on perceived texture contrast, F(8, 88) = 0.63, p = 0.75. Data were subsequently pooled by linearly averaging probabilities across texture scales for each level of image orientation. A follow-up t test found a significant decline in mean probability estimate of perceived texture contrast between the 0° and 180° image orientation conditions, t(11) = 3.72, p < 0.005. 
Figure 15
 
Perceived global texture contrast decreases with image orientation. Increasing the orientation of the image between 0° and 180° reduces perceived image contrast. Separate colors indicate data obtained with different levels of texture contrast (low, medium, high). Data traces only show comparisons made within each contrast level. Error bars are standard errors of the mean.
Figure 16 plots the probability that a given image with a specific attenuation level was selected as appearing to have greater global contrast relative to an upright (black) or inverted (red) reference image (Experiment 4b). A repeated-measures t test found that the mean PSE for comparisons involving the inverted reference (M = 0.56, SE = 0.02) was significantly higher than the mean PSE for comparisons involving the upright reference (M = 0.46, SE = 0.02), t(5) = 3.92, p < 0.05. Taken together, the data obtained here support the view that perceived global texture contrast depends on consistency of shading flow with a light source from above bias. Rotating the illumination direction away from directly above reduced perceived texture contrast, even though images were otherwise completely identical. Violating the light source from above prior in the more objective task had the effect of increasing the PSE for contrast discrimination of surface textures. These findings suggest that the visual system considers the global illumination direction when separating texture flow from shading caused by interactions between shape and illumination. 
Figure 16
 
Increasing the attenuation level of texture detail monotonically reduces the global appearance of surface texture. Black points and lines show data obtained when comparing upright images of spherical surfaces against an upright reference image with an attenuation level of 0.5. Red points and lines show data obtained when comparing upright images against an inverted version of the same reference image. Error bars show standard errors of the mean.
General discussion
The present study sought to gain insight into how the visual system parses images into separate components of shading and texture flow. We examined the extent to which perceived surface pigmentation depends on different stages of visual processing that differentiate potential texture edges from shading gradients caused by the interaction of shape and illumination. In Experiment 1, we found that textures with edge contours that followed (diffuse) shading flow significantly camouflaged the appearance of texture: Rotating textures out of alignment with shading flow increased the appearance of surface pigmentation. Orientation field modeling suggested that the more vivid pigmentation for rotated textures was not explainable by a general increase in the local contrast of texture edges per se. Rather, the texture rotation increased the visibility of the texture by causing adjacent luminance gradients—one corresponding to texture edges, the other corresponding to shading gradients—to have increasingly different orientations relative to each other. In Experiment 2, we found that the dependence of perceived surface pigmentation on texture orientation also held for spherical surfaces that generate both shading and illuminance flow. However, we found that the low-level orientation model accounted for less of the variation in perceived pigmentation for spherical surfaces than for textured planes. In Experiment 3 we found that the appearance of texture also depended on midlevel biases in the presumed direction of primary illumination. Violating the light source from above bias decreased perceived surface pigmentation, which coincided with a decrease in perceived global texture contrast (Experiment 4a) even when measured in an objective contrast detection task (Experiment 4b). 
The finding that perceived surface texture depends on the local orientation of texture edges relative to adjacent shading gradients supports the view that information specified by orientation flow is important for separating texture from shading flow (Beck & Prazdny, 1981; Ben-Shahar & Zucker, 2001; Todd et al., 2004; Anderson & Kim, 2009; Marlow et al., 2011). We previously showed that rotating specular reflections relative to the local orientation fields of diffuse shading transformed their appearance from glossy reflections to surface pigmentation (Anderson & Kim, 2009; Marlow et al., 2011; Kim et al., 2012). This suggests that the perception of both specular reflectance and surface pigmentation depends on the congruence of edge contours and local orientation fields in an image. In the current study, textures that were initially aligned with directions of minimal surface curvature produced reflectance edges that tended to co-align with the natural orientations of low-frequency shading contours and any potential high-frequency specular edges (Koenderink & van Doorn, 1980). The reduced visibility of these shading-congruent textures may be explained by their edges being partially inseparable from other possible types of surface contours, such as specular reflections and diffuse shading gradients. The edges of rotated textures are less confusable with other potential sources of luminance variation because they violate the generative constraints of diffuse and specular reflectance. This view is supported by the model data of Experiments 1 and 2, which showed that rotating textures increases the variation in local orientation fields between texture edges and shading gradients. However, this orientation field model accounted for less of the variability in perceptual responses on the whole for spherical than for planar surfaces, which may have been caused by the multiscale variations in shading due to foreshortening of relief. 
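The orientation-field comparison underlying this account can be sketched computationally. The following is an illustrative reconstruction, not the authors' implementation: it substitutes simple finite-difference gradients for the single-cycle Gabor filters of Experiment 1, doubles angles so that orientation (where 0° and 180° are equivalent) statistics are circularly well defined, and summarizes an image by the mean circular variance of local orientation within square patches. The function names and patch size are assumptions for illustration.

```python
import numpy as np

def orientation_field(img):
    """Per-pixel luminance gradient direction (radians), from simple
    finite differences (standing in for the paper's Gabor filters)."""
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx)

def local_orientation_variation(img, patch=8):
    """Mean circular variance of local orientations over square patches.
    Angles are doubled so that orientation (0 == 180 deg) statistics
    are well defined. Higher values indicate more disagreement between
    adjacent luminance gradients (e.g., texture edges vs. shading flow)."""
    theta = 2.0 * orientation_field(img)
    h = img.shape[0] - img.shape[0] % patch
    w = img.shape[1] - img.shape[1] % patch
    c = np.cos(theta[:h, :w]).reshape(h // patch, patch, w // patch, patch)
    s = np.sin(theta[:h, :w]).reshape(h // patch, patch, w // patch, patch)
    R = np.sqrt(c.mean(axis=(1, 3)) ** 2 + s.mean(axis=(1, 3)) ** 2)
    return float((1.0 - R).mean())  # 0 = perfectly uniform flow
```

On a synthetic linear shading gradient, a striped texture aligned with the shading flow yields near-zero variation, whereas the same stripes rotated 90° raise the variation score, mirroring the rotated-texture effect described above.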
In addition to low-level constraints, we identified a further dependence of perceived texture on the primary illumination direction: The separation of texture from shading was modulated by the inferred orientation of the light field. The appearance of surface texture declined with increasing angular offset in the orientation of images of bumpy spheres illuminated from above (Experiment 3). This effect was relatively strong compared with the effect of rotating textures relative to shading flow alone, and suggests that texture edges are not fully separated from shading flow until midlevel processes estimate surface shape and illuminance flow. The effects of image rotation on perceived pigmentation did not occur with planar surfaces that generate a bas-relief ambiguity. It would therefore seem that violating the light source from above bias may increase the difficulty of computing shading and discounting the effects of illuminance flow on image structure. Consistent with this explanation, we found that rotating images of bumpy spheres, thereby decreasing the compatibility between the orientation of the light field and the light source from above bias, reduced the perceived contrast of texture across the surface (Experiments 4a and b). Together, these findings suggest that the visual system estimates the primary illumination direction when parsing luminance variations into separate components of shading and texture flow. 
The findings of the present study also add to our understanding of how the primary illumination direction may be computed from images of surfaces with relief. Researchers have previously used eigenvalues of the structure tensor of an image to estimate the direction of the primary light source (Koenderink & Pont, 2003; Varma & Zisserman, 2004). Varma and Zisserman (2004) considered the problem of computing the illumination direction from an image of a textured surface. They found that estimates of the illuminant's azimuth were not affected by the presence of isotropic texture, but could be significantly influenced by high-contrast anisotropic textures. Our findings with respect to the visibility of texture also depend on the texture's anisotropy: Texture and shading flows were most confusable when they shared the same anisotropy and were most perceptually separable when they flowed in orthogonal directions. 
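The structure-tensor approach can be illustrated with a short sketch. This is a minimal reconstruction in the spirit of this literature, not the published algorithms: the mean outer product of image gradients is diagonalized, and the dominant eigenvector gives the axis of strongest shading variation, which for an isotropically rough matte surface tracks the illuminant's azimuth up to the inherent 180° ambiguity. The function name is an assumption.

```python
import numpy as np

def illuminant_azimuth(img):
    """Estimate the azimuth (degrees, mod 180) of the dominant shading
    gradient axis from the mean gradient structure tensor. For a matte,
    isotropically rough surface this axis tracks the primary light
    source's azimuth, up to the inherent 180-degree ambiguity."""
    gy, gx = np.gradient(img.astype(float))
    J = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    w, v = np.linalg.eigh(J)      # eigenvalues in ascending order
    ax, ay = v[:, -1]             # eigenvector of the largest eigenvalue
    return float(np.degrees(np.arctan2(ay, ax)) % 180.0)
```

Consistent with Varma and Zisserman's (2004) observation, a high-contrast anisotropic texture superimposed on the shading would bias this estimate, because its edges contribute a dominant gradient axis of their own to the tensor.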
The present study reveals that the visual system is sensitive to generative constraints on the structure of orientation fields generated by diffuse shading and pigmentation, as well as those relating to statistical properties of the light field. These findings are consistent with the view that the brain computes material properties from the structure of luminance variations constrained by different surface optics (Anderson & Kim, 2009; Kim & Anderson, 2010). It is of interest for future research to understand how physical constraints of other surface properties may influence the computations involved in this separation task, such as the presence of specular edges, transparency, and self-occluding boundaries. 
Acknowledgments
This project was funded by an Australian Research Council (ARC) Discovery Project awarded to B. Anderson, J. Kim, and R. Fleming and an ARC DORA Fellowship awarded to B. Anderson. 
Commercial relationships: none. 
Corresponding author: Juno Kim. 
Email: juno@psych.usyd.edu.au. 
Address: School of Optometry and Vision Science, University of New South Wales, NSW, Australia 
References
Anderson B. L. Kim J. (2009). Image statistics do not explain the perception of gloss and lightness. Journal of Vision, 9 (11): 10, 1–17, http://www.journalofvision.org/content/9/11/10, doi:10.1167/9.11.10. [PubMed] [Article]
Barrow H. G. Tenenbaum J. M. (1978). Recovering intrinsic scene characteristics from images. In Hanson A. R. Riseman E. M. (Eds.), Computer vision systems (pp. 3–26). New York: Academic Press.
Beck J. Prazdny K. (1981). Highlights and the perception of glossiness. Perception & Psychophysics, 30, 407– 410. [CrossRef] [PubMed]
Ben-Shahar O. Zucker S. W. (2001). On the perceptual organization of texture and shading flows: From a geometrical model to coherence computation. In Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 1, 1048– 1055.
Debevec P. (2002). Image-based lighting. IEEE Computer Graphics and Applications, 22, 26– 34. [CrossRef]
Dror R. O. Willsky A. S. Adelson E. H. (2004). Statistical characterization of real-world illumination. Journal of Vision, 4 (9): 11, 821– 837, http://www.journalofvision.org/content/4/9/11, doi:10.1167/4.9.11. [PubMed] [Article] [PubMed]
Fleming R. W. Dror R. O. Adelson E. H. (2003). Real-world illumination and the perception of surface reflectance properties. Journal of Vision, 3 (5): 3, 347– 368, http://www.journalofvision.org/content/3/5/3, doi:10.1167/3.5.3. [PubMed] [Article] [PubMed]
Fleming R. W. Torralba A. Adelson E. H. (2004). Specular reflections and the perception of shape. Journal of Vision, 4 (9): 10, 798– 820, http://www.journalofvision.org/content/4/9/10, doi:10.1167/4.9.10. [PubMed] [Article] [PubMed]
Kim J. Anderson B. L. (2010). Image statistics and the perception of surface gloss and lightness. Journal of Vision, 10 (9): 3, 1– 17, http://www.journalofvision.org/content/10/9/3, doi:10.1167/10.9.3. [PubMed] [Article]
Kim J. Marlow P. Anderson B. (2011). The perception of gloss depends on highlight congruence with surface shading. Journal of Vision, 11 (9): 4, 1– 19, http://www.journalofvision.org/content/11/9/4, doi:10.1167/11.9.4. [PubMed] [Article]
Kim J. Marlow P. Anderson B. (2012). The dark side of gloss. Nature Neuroscience, 15 (11), 1590– 1595. [CrossRef] [PubMed]
Koenderink J. J. Pont S. C. (2003). Irradiation direction from texture. Journal of the Optical Society of America, 20 (10), 1875– 1882. [CrossRef] [PubMed]
Koenderink J. J. van Doorn A. J. (1980). Photometric invariants related to solid shape. Optica Acta, 27, 981– 996. [CrossRef]
Koenderink J. J. van Doorn A. J. Pont S. C. (2007). Perception of illuminance flow in the case of anisotropic rough surfaces. Perception & Psychophysics, 69 (6), 895– 903. [CrossRef] [PubMed]
Langer M. S. Bulthoff H. H. (2001). A prior for local convexity in local shape from shading. Perception, 30, 403– 410. [CrossRef] [PubMed]
Liu B. Todd J. T. (2004). Perceptual biases in the interpretation of 3D shape from shading. Vision Research, 44 (18), 2135– 2145. [CrossRef] [PubMed]
Luckiesh M. (1916). Light and shade and their applications. New York: D. Van Nostrand Company.
Luckiesh M. (1922). Visual illusions: Their causes, characteristics and applications. New York: D. Van Nostrand Company.
Mamassian P. Goutcher R. (2001). Prior knowledge on the illumination position. Cognition, 81, B1– B9. [CrossRef] [PubMed]
Marlow P. Kim J. Anderson B. L. (2011). The role of brightness and orientation congruence in the perception of surface gloss. Journal of Vision, 11 (9): 16, 1– 12, http://www.journalofvision.org/content/11/9/16, doi:10.1167/11.9.16. [PubMed] [Article]
Metzger W. (1975). Gesetze des Sehens. Frankfurt am Main, Germany: Verlag Waldemar Kramer.
Mury A. A. Pont S. C. Koenderink J. J. (2009). Structure of light fields in natural scenes. Applied Optics, 48 (28), 5386– 5395. [CrossRef] [PubMed]
Norman J. F. Todd J. T. Orban G. A. (2004). Perception of three-dimensional shape from specular highlights, deformation of shading, and other types of visual information. Psychological Science, 15, 565– 570. [CrossRef] [PubMed]
Pont S. C. Koenderink J. J. (2003). Illuminance flow. Computer Analysis of Images and Patterns Lecture Notes in Computer Science, 2756, 90– 97.
Pont S. C. Koenderink J. J. (2005). Bidirectional texture contrast function. Proceedings of the European Conference on Computer Vision (ECCV 2005), 1– 5.
Ramachandran V. S. (1988). Perception of shape from shading. Nature, 331, 163– 166.
Thaler L. Todd J. T. Dijkstra T. M. H. (2007). The effects of phase on the perception of 3D shape from texture: Psychophysics and modeling. Vision Research, 47, 411– 427. [CrossRef] [PubMed]
Todd J. T. Oomes A. H. (2002). Generic and nongeneric conditions for the perception of surface shape from texture. Vision Research, 42, 837– 850. [CrossRef] [PubMed]
Todd J. T. Norman J. F. Mingolla E. (2004). Lightness constancy in the presence of specular highlights. Psychological Science, 15, 33– 39. [CrossRef] [PubMed]
Todd J. T. Thaler L. Dijkstra T. M. H. Koenderink J. J. Kappers A. M. L. (2007). The effects of viewing angle, camera angle, and sign of surface curvature on the perception of 3D shape from texture. Journal of Vision, 7 (12): 9, 1– 16, http://www.journalofvision.org/content/7/12/9, doi:10.1167/7.12.9. [PubMed] [Article] [PubMed]
Todd J. T. Thaler L. (2010). The perception of 3D shape from texture based on directional width gradients. Journal of Vision, 10 (5): 17, 1– 13, http://www.journalofvision.org/content/10/5/17, doi:10.1167/10.5.17. [PubMed] [Article] [PubMed]
Wagemans J. van Doorn A. J. Koenderink J. J. (2010). The shading cue in context. i-Perception, 1 (3), 159– 178. [CrossRef] [PubMed]
Ward G. J. (1992). Measuring and modeling anisotropic reflection. Computer Graphics, 26, 265– 272. [CrossRef]
van Doorn A. J. Koenderink J. J. Wagemans J. (2011). Light fields and shape from shading. Journal of Vision, 11 (3): 21, 1– 21, http://www.journalofvision.org/content/11/3/21, doi:10.1167/11.3.21. [PubMed] [Article]
Varma M. Zisserman A. (2004). Estimating illumination direction from textured images. Proceedings of Computer Vision and Pattern Recognition (CVPR) (IEEE, 2004), 179– 186.
Figure 1
 
Texture preserves the underlying shading gradients of surfaces. Upper row: A collimated light source from the upper right was used to render a simple convex surface with pure diffuse shading (A), diffuse and specular shading (B), and diffuse shading and texture (C). The diffuse reflectance was Lambertian and the specular reflectance was rendered using the Cook-Torrance model in Blender 3D. Lower row: The directions of the orientation gradients for the separate images, generated using the method of Experiment 1 with an even symmetric Gabor filter of size 3 pixels. Insets show the polar orientation of brighter shading intensity for each local anisotropy in shading flow (black indicates local isotropy in shading). Note that the addition of texture generates a web of oriented edge contours that do not influence the direction of shading gradients on either side of their edges.
Figure 2
 
Method used to create displacement maps and surface texture. (A) An initial image was generated by applying a directional derivative to a cloud noise pattern. The derived image was subdivided into quadrants to obtain four displacement maps, each with globally similar shading flow to the others (e.g., as outlined in red). A new noise basis with the same statistical properties was generated and subdivided twice and binarized to produce three scales of surface texture (50%, 100%, and 200%). (B) Displacement maps were used to create 3-D surface relief by increasing the height of the surface according to the intensity of the derived images. Binary texture maps were used to multiplicatively increase surface luminance by 20% in Blender 3D. Black texture regions did not alter surface luminance. To increase texture incongruence with shading (as shown in C), the texture maps were rotated by known angles (θ) before the local surface luminance was increased.
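The generative steps described in this caption can be sketched as follows. This is a hedged reconstruction, not the rendering pipeline itself: it assumes a directional derivative implemented as a cosine-weighted combination of finite-difference gradients and a median-split binarization; the cloud-noise synthesis and the Blender 3D steps are not reproduced, and both function names are illustrative.

```python
import numpy as np

def directional_derivative(noise, angle_deg):
    """Directional derivative of a noise image along a chosen angle,
    as used to build displacement maps with globally oriented
    shading flow (0 deg = horizontal image axis)."""
    gy, gx = np.gradient(noise.astype(float))
    a = np.radians(angle_deg)
    return np.cos(a) * gx + np.sin(a) * gy

def binarize(img):
    """Median-split binary texture map (1 = pigmented region),
    standing in for the binarization of the noise basis."""
    return (img > np.median(img)).astype(np.uint8)
```

Applying `binarize` to a fresh noise pattern yields a texture map in which roughly half the surface is pigmented; in the experiments, these binary maps multiplicatively increased surface luminance by 20% in the white regions.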
Figure 3
 
Planar surfaces with relief and texture added in different orientations. Surface relief height was created using a directionally derived cloud texture. The surface geometry was textured with the binarized cloud texture. The orientation of the resulting texture was rotated over successive angles within the image plane (along rows: 0°, 15°, 45°, and 90°). Texture scales increased between image sets (down each column) for each texture orientation.
Figure 4
 
Mean probability estimates of perceived pigmentation of surfaces with half-scale (A) and full-scale relief (B). Images in upper sections are sample renderings of matte surfaces with textures at 0° and 90° of offset. The mean probability that a surface was selected as more pigmented is plotted below as a function of physical texture offset (in degrees) relative to diffuse shading. Black points and lines show data for surfaces with high-frequency textures. Red points and lines show data for surfaces with same-frequency textures, whereas blue points and lines show data for surfaces with low-frequency textures. Error bars indicate standard errors of the mean.
Figure 5
 
Model output from edge contrast measurements. (A) Transformed images showing the root-mean-squared (RMS) magnitude of shading gradients across the front region of a matte surface with rotated texture. Intensity corresponds to the omnidirectional strength of the shading gradient at each pixel location. Different threshold values were set to shallower shading gradients. (B) Mean edge contrast is plotted as a function of texture offset (in degrees) for the three threshold levels of identified edge. Standard deviations outlined in broken lines were determined from the pool of images rendered for repeat stimulus conditions.
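The edge-contrast measure in this figure can be sketched in a few lines. This is a minimal reconstruction under stated assumptions: the "omnidirectional strength" is taken to be the root-mean-squared magnitude of horizontal and vertical finite-difference gradients, and mean edge contrast averages gradient magnitude over suprathreshold pixels only; the function names and threshold handling are illustrative.

```python
import numpy as np

def rms_gradient_map(img):
    """Omnidirectional (RMS) shading-gradient magnitude per pixel."""
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def mean_edge_contrast(img, threshold):
    """Mean gradient magnitude over pixels exceeding a threshold,
    i.e., over 'identified edges'. Raising the threshold excludes
    progressively shallower shading gradients from the average."""
    g = rms_gradient_map(img)
    edges = g > threshold
    return float(g[edges].mean()) if edges.any() else 0.0
```

As the threshold rises, only the steepest luminance transitions survive, so the mean contrast of the identified edges increases even when the image itself is unchanged.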
Figure 6
 
Orientation field responses to images of matte surfaces with texture added in different orientations. (A) Raw images of relief with added texture were filtered using 2-D horizontal and vertical even symmetric Gabor filters that were single cycle and one of four different sizes (3, 7, 13, or 25 pixels). The vector specifying the orientation of the maximum shading gradient was determined in polar coordinates (colored image). (B) Raw images of surfaces with texture applied in different orientations (angular value shown) and their orientation-field transforms (colored). Greater abundance of local variation in orientation is coded by increasing intensity within the central square regions of interest.
Figure 7
 
Model output of orientation field responses to images of surfaces with added texture. Plots show mean variation in polar orientation plotted as a function of global texture offset (degrees). Note how texture rotations increase the variation between the orientation gradients for surfaces with different relief. Separate curves show data obtained with different sizes (in pixels) of balanced single-cycle orientated Gabors. Surface images were 512 × 512 pixels. Error bands are standard errors computed across a family of images in each category within the central surface patch.
Figure 8
 
Matte renderings of spherical surfaces with relief and added texture. Surface relief height was created using the directional displacement maps of Experiment 1. The surface geometry was pigmented by applying the same binarized textures used in the previous experiment. Column values indicate the orientation of the texture around the viewing axis in degrees. Texture scale varies between the rows as a percentage of texture scale used to create surface geometry.
Figure 9
 
Mean probability estimates of perceived pigmentation on bumpy spheres. Images show example surfaces with 90° texture rotation. Perceived strength of apparent texture of the surfaces is plotted as a function of physical texture offset relative to diffuse shading. Black points and lines show data for surfaces with high-frequency texture. Red points and lines show data for surfaces with textures that had the same scale as relief, whereas blue points and lines show data for surfaces with half-frequency textures. Error bars indicate standard errors of the mean.
Figure 10
 
Model output of orientation field responses to images of bumpy spheres with different texture scales. Plot shows mean variations in shading orientation plotted as a function of global texture offset (degrees). Separate curves show data obtained with different sizes (in pixels) of balanced single-cycle orientated Gabors. Across different size filters, there is a general increase in shading variability with increasing texture offset relative to diffuse shading. However, note how the relationship declines with increasing texture scales (i.e., lower texture frequencies), roughly following the decline observed in the psychophysical data in Figure 9. Surface images were 512 × 512 pixels. Error bands are standard errors computed across a family of images in each category within the central surface patch.
Figure 11
 
Effect of image orientation on perceived texture of planar surfaces with relief. Note that the same region of the upright surface indicated by the white arrow (left) appears sunken, compared with the same region in the inverted surface (right), which now appears raised.
Figure 12
 
Different texture orientations on matte bumpy planes and spheres. (A) Upright images of planar and spherical surfaces with relief and texture oriented at 0° (left in pair) and 90° (right in pair). (B) The same images in A, but rotated 90° clockwise. (C) The same images in A, but rotated 180° (i.e., completely inverted).
Figure 13
 
Effect of image orientation on perceived pigmentation of planar surfaces (top) and bumpy spheres (bottom). Solid points show estimates for perceived pigmentation of aligned textures (0°), whereas open points show estimates of rotated textures (90°). Different colors are used to show data obtained with high-frequency texture (A and C in black points and lines) or same-frequency textures (B and D in red points and lines) as the underlying relief. Error bars indicate standard errors of the mean.
Figure 14
 
Method used to vary texture contrast regionally throughout the image. Upper row: the “Original” texture used in the previous experiment was attenuated by subtracting the luminance values of a cloud “Noise” texture from the original. The amplitude of the noise pattern was multiplicatively scaled (0.0 to 1.0) to vary the level of attenuation. Middle row: Upright images of surfaces with increasing regional attenuation (0.0, 0.5, 1.0). Lower row: Inverted images of surfaces with the same attenuation levels.
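The attenuation procedure can be sketched as a single operation. This is a minimal reconstruction assuming luminance values in [0, 1] and clipping at zero after subtraction; the clipping choice and the function name are assumptions, and the cloud-noise pattern itself is not synthesized here.

```python
import numpy as np

def attenuate_texture(texture, noise, level):
    """Regionally attenuate texture detail by subtracting a cloud-noise
    pattern whose amplitude is multiplicatively scaled by `level` in
    [0, 1]. Where the noise is strong, local texture contrast is
    reduced; level = 0 leaves the original texture unchanged."""
    return np.clip(texture - level * noise, 0.0, None)
```

Because the noise is non-negative, raising `level` can only darken each pixel, so texture detail is attenuated monotonically with the scaling factor, matching the manipulation used in Experiments 4a and 4b.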
Figure 15
 
Perceived global texture contrast decreases with image orientation. Rotating the image from 0° toward 180° reduces perceived texture contrast. Separate colors indicate data obtained with different levels of texture contrast (low, medium, high). Data traces only show comparisons made within each contrast level. Error bars are standard errors of the mean.
Figure 16
 
Increasing the attenuation level of texture detail monotonically reduces the global appearance of surface texture. Black points and lines show data obtained when comparing upright images of spherical surfaces against an upright reference image with an attenuation level of 0.5. Red points and lines show data obtained when comparing upright images against an inverted version of the same reference image. Error bars show standard errors of the mean.