Open Access
Article | March 2017
Translucency and the perception of shape
Author Affiliations
  • Nahian S. Chowdhury
    School of Optometry and Vision Science, University of New South Wales, Kensington, New South Wales, Australia
  • Phillip J. Marlow
    School of Psychology, University of Sydney, Australia
  • Juno Kim
    School of Optometry and Vision Science, University of New South Wales, Kensington, New South Wales, Australia
Journal of Vision, March 2017, Vol. 17(3), 17. https://doi.org/10.1167/17.3.17
Abstract

Previous studies have shown that the perceived three-dimensional (3D) shape of objects depends on their material composition. The majority of this work has focused on glossy, flat-matte, or velvety materials. Here, we studied the perceived 3D shape of translucent materials. We manipulated the spatial frequency of surface relief perturbations of translucent and opaque objects. Observers indicated which of two surfaces appeared to have more bumps. They also judged local surface orientation using gauge probe figures. We found that translucent surfaces appeared to have fewer bumps than opaque surfaces with the same 3D shape (Experiment 1), particularly when self-occluding contours were hidden from view (Experiment 2). We also found that perceived local curvature was underestimated for translucent objects relative to opaque objects, and that estimates of perceived local surface orientation were similarly correlated with luminance for images of both opaque and translucent objects (Experiment 3). These findings suggest that the perceived mesoscopic shape of completely matte translucent objects can be underestimated due to a decline in the steepness of luminance gradients relative to those of opaque objects.

Introduction
We readily experience the three-dimensional (3D) structure of objects from the two-dimensional (2D) images they project on the retina. This experience allows us to differentiate between objects that vary in relief from smooth to rough. Perceived shape depends on multiple forms of image-based information, including diffuse shading (Todd & Mingolla, 1983), texture gradients (Georgieva, Todd, Peeters, & Orban, 2008), specular reflections (Fleming, Dror, & Adelson, 2003), the structure of bounding contours (Knill & Kersten, 1991), and binocular disparity and motion (Tittle & Braunstein, 1993). This literature has studied the perceived shape of completely opaque objects, but has not investigated how shape might be derived from objects that are translucent (or transparent). Here, we investigate the perception of shape in translucent objects and its dependence on shading cues.
The recovery of shape from shading is a classic problem in psychology and computer vision. The majority of work has studied the perception of shape using Lambertian or matte surfaces that have an analytically simple reflectance function, whereby the light that reaches the surface is scattered equally in all directions (e.g., Erens, Kappers, & Koenderink, 1993; Horn, 1970; Kleffner & Ramachandran, 1992; Ramachandran, 1988; Todd & Mingolla, 1983). The luminance, I, of a Lambertian surface varies as the cosine of the angle, α, between the outward-pointing surface normal, N, and the unit vector oriented toward the light source, L, such that I = cos α = N · L. Although the inverse cosine of luminance recovers the angle between the light source and the surface normal (i.e., α = cos⁻¹(I)), there is no unique surface normal, N, associated with that angle that the visual system could integrate to recover 3D shape. Luminance varies only with the elevation of the surface normal from the direction of the illuminant and provides no information to disambiguate the azimuth orientation of surface normals (see Figure 1). This theoretical challenge has inspired many to study how accurately the human visual system recovers shape from shading. An early study by Todd and Mingolla (1983) found that observers generally judged the curvature of matte cylindrical surfaces to be lower than ground truth. Thus, observers appear to distort the representation of the 3D shape of Lambertian surfaces in the form of a systematic underestimation of surface curvature or depth, as confirmed by further studies (e.g., Curran & Johnston, 1996; Mingolla & Todd, 1986; Wijntjes, Doerschner, Kucukoglu, & Pont, 2012).
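To make this ambiguity concrete, the following sketch (a minimal numerical illustration, not code from the study) evaluates I = cos α = N · L for several unit normals that share the same elevation relative to the light but differ in azimuth; all yield identical luminance.

```python
import numpy as np

def lambertian_luminance(normal, light):
    """Luminance of a Lambertian patch, I = cos(alpha) = N . L,
    clamped at zero for normals facing away from the light."""
    n = normal / np.linalg.norm(normal)
    l = light / np.linalg.norm(light)
    return max(np.dot(n, l), 0.0)

L_dir = np.array([0.0, 1.0, 0.0])  # light from above (+y)
alpha = np.radians(45)             # fixed elevation from the light direction
for azimuth in np.radians([0, 90, 180, 270]):
    N = np.array([np.sin(alpha) * np.cos(azimuth),
                  np.cos(alpha),
                  np.sin(alpha) * np.sin(azimuth)])
    print(f"azimuth {np.degrees(azimuth):5.1f} deg -> I = {lambertian_luminance(N, L_dir):.3f}")
# All four normals print I = 0.707: luminance fixes elevation, not azimuth.
```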
Figure 1
 
A sphere with a Lambertian surface illuminated from above. Luminance (I) varies with the cosine of the angle α between the outward-facing surface normal (N) and the lighting direction (L). However, this angle represents only the elevation component (in black) of the surface normal, while the azimuth component (in red) does not vary with luminance. As such, there are many azimuth values for any given elevation value, suggesting that the intensity of light is insufficient to disambiguate the exact orientation of the surface normal.
The study of shape from shading is further complicated when considering opaque surfaces that exhibit specular reflectance, which can be modeled using a bidirectional reflectance distribution function (BRDF; see Nicodemus, 1965). Whereas Lambertian shading depends solely on the orientation of surface normals relative to the light source, specular reflectance generates shading that further depends on viewing direction: the angle of the ray reflected toward the observer (relative to the local normal) equals the angle of the incident ray from the light field (relative to the same normal).
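As a simple geometric illustration (the vectors and viewpoint here are invented for the example, not taken from the paper), the sketch below computes the mirror direction R = 2(N · L)N − L about a local normal; a specular highlight is seen where R aligns with the viewing direction, which is why specular shading shifts with viewpoint.

```python
import numpy as np

def mirror_direction(normal, to_light):
    """Mirror reflection of the light direction about the local normal:
    R = 2(N.L)N - L. Specular shading is strongest where R points
    toward the viewer, so it depends on the viewing direction."""
    n = normal / np.linalg.norm(normal)
    l = to_light / np.linalg.norm(to_light)
    return 2.0 * np.dot(n, l) * n - l

N = np.array([0.0, 1.0, 0.0])                 # local surface normal
L = np.array([1.0, 1.0, 0.0])                 # direction toward the light
V = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2)   # direction toward the viewer
R = mirror_direction(N, L)
print("reflected ray:", np.round(R, 3))           # [-0.707  0.707  0.]
print("aligned with viewer:", np.allclose(R, V))  # True: highlight visible here
```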
Evidence suggests that perceived shape can be influenced by the presence of specular reflections, but only when the structure of the light field is complex. Studies that used highly simplified light fields (i.e., a point or collimated light source) found no differences in perceived shape between surfaces with and without specular reflectance (Mingolla & Todd, 1986; Nefs, Koenderink, & Kappers, 2006). However, Ho, Landy, and Maloney (2008) found that specularity increased the perceived bumpiness of patches of intersecting ellipsoids illuminated under an area light. Also, Mooney and Anderson (2014) used objects illuminated in natural light fields to show that perceived relief height can be overestimated when surfaces generate specular reflections. Thus, there is evidence that specularity can significantly influence perceived shape in complex lighting environments.
Further research suggests that the relief of partially specular opaque surfaces can also be underestimated when they contain complex microrelief. One such surface is velvet, which generates a reflectance profile that depends on the interaction of light with tightly woven bundles of filaments slanted at approximately 40° relative to the local surface normal (Lu, Koenderink, & Kappers, 1998; Ashikhmin, Premože, & Shirley, 2000). Wijntjes et al. (2012) found that observers underestimate the shape of velvet surfaces relative to purely flat-matte surfaces: the perceptual "flattening" of a velvet sphere depended on the similarity of its luminance distribution and isophote structure to those of a flattened matte disc. Although the reflectance distribution functions of velvet and flat-matte surfaces differ, the same mode of shape-from-shading estimation appears to be used to perceive surface shape. It is possible that the same shape-from-shading computations are also used generically for the perception of shape in nonopaque or translucent surfaces that have very different reflectance distribution functions.
Translucency is a very common nonopaque material property, inherent in human skin, fruits, waxes, and milk. Light penetrates translucent surfaces, scatters throughout the object, and re-emerges from distal surface regions. This process of light transport through nonopaque objects is known as subsurface scattering (Jensen & Buhler, 2001) and is modeled approximately by a bidirectional scattering-surface reflectance distribution function (BSSRDF; see Figure 2). These models have parameters that simulate the appearance of translucent objects, including the refractive index of the surface material, the scattering and absorption coefficients of particles in the translucent medium, and the phase function of these particles, which determines the extent to which light scatters forward, backward, or equally in all directions (Gkioulekas et al., 2013; Jensen & Buhler, 2001).
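One widely used parameterization of such a phase function is the Henyey-Greenstein function, sketched below. Note that this is a standard rendering model chosen for illustration, not necessarily the exact phase function used in the studies cited: its single parameter g biases scattering forward (g > 0), backward (g < 0), or makes it isotropic (g = 0).

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function: the probability density that light
    scatters through angle theta inside the medium. g > 0 biases scattering
    forward, g < 0 backward, and g = 0 is isotropic (equal in all directions)."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

theta = np.radians([0, 90, 180])
for g in (-0.5, 0.0, 0.5):
    p = henyey_greenstein(np.cos(theta), g)
    print(f"g = {g:+.1f}: forward {p[0]:.3f}, side {p[1]:.3f}, backward {p[2]:.3f}")
```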
Figure 2
 
Light is refracted by thick translucent materials (rather than simply reflected); upon striking particles suspended in the translucent medium, it may scatter backward, forward, or equally in all directions, depending on the phase function of these particles.
The sheer complexity of the optics underlying translucent luminance gradients could potentially lead to the misperception of 3D shape in translucent objects. Figure 3 shows photographs of two identical wax candles: one opaque and the other translucent. The opaque candle was constructed by covering the wax candle in a few coats of a low-sheen acrylic paint. Both objects are identically illuminated from the right and have identical distributions of concave grooves etched into their front faces. The translucent grooves appear to have lower luminance contrast and less depth than the opaque grooves, which could influence the perception of their mesoscopic shape profiles. The aim of the experiments reported below was to examine how perceived mesoscopic shape differs between opaque and translucent objects.
Figure 3
 
An opaque (left) and translucent (right) candle containing identical concave “grooves.” Note that the grooves appear “washed out” and shallower on the translucent than opaque candle. The opaque version was constructed by painting the translucent candle with white flat-matte acrylic paint.
Experiment 1
Figure 3 suggests that bumpy translucent surfaces appear to have fewer bumps than opaque surfaces with the same 3D shape because translucent bumps generate less contrast than opaque bumps. In order to test this, we constructed five perturbed spheres with different mesoscopic 3D shapes and rendered each as either a translucent or opaque material. The mesoscopic shape of the spheres was generated by shifting the radial distance of the sphere's vertices according to the intensity of a cloud noise texture. We varied the noise depth of the cloud texture to increase the spatial frequency of the relief perturbations over the surface of the sphere, which increased the number of mesoscale bumps. Observers viewed every pair of surfaces and judged which had more bumps. If perceived shape depends on surface opacity, then translucent objects should appear to have fewer bumps than opaque objects. 
Methods
Observers
A total of six adult observers (including authors NC and JK) participated in the experiment. All observers had normal or corrected-to-normal central visual acuity. The procedures were conducted in accordance with the Declaration of Helsinki and were approved by the Human Research Ethics Advisory Panel at The University of New South Wales. 
Stimuli
Surfaces were generated by deforming a geodesic sphere with 10,254 vertices using a cloud-noise displacement map created in Blender 3D with a base noise level of 0.75. We parametrically increased the noise depth setting in Blender from 0 to 4 to increase the spatial frequency content of depth perturbations. Increasing the noise depth of the displacement map directly affected mesoscopic surface shape properties, including the number of bumps and valleys on the surface and the rate of surface curvature. The five levels of noise depth (0 through 4) corresponded to approximately 0.02, 0.04, 0.06, 0.08, and 0.10 bumps/°, respectively (as determined by the average number of maxima in the second derivative of a transverse section of the surface's central bounding contour over a 360° range).
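The displacement can be sketched as follows. This is a crude stand-in for Blender's clouds texture (the noise function, frequencies, and amplitude are assumptions for illustration, not the study's actual settings): points on a unit sphere are shifted along their radial direction by multi-octave noise, with the number of octaves playing the role of the noise depth setting, so more octaves yield higher spatial-frequency relief and more bumps.

```python
import numpy as np

rng = np.random.default_rng(0)

def fbm_noise(points, octaves, base_freq=0.75, amp=0.15):
    """Fractal (multi-octave) noise as a rough analogue of Blender's
    'clouds' texture: each extra octave (~'noise depth') adds higher
    spatial-frequency detail, i.e., more and finer bumps."""
    out = np.zeros(len(points))
    freq, gain = base_freq, 1.0
    for _ in range(octaves + 1):
        # one random-phase sinusoidal lattice per octave
        k = rng.normal(size=3) * freq
        phase = rng.uniform(0, 2 * np.pi)
        out += gain * np.sin(points @ k + phase)
        freq *= 2.0
        gain *= 0.5
    return amp * out / (octaves + 1)

# random points on a unit sphere (stand-in for geodesic sphere vertices)
u = rng.uniform(-1, 1, 2000)
phi = rng.uniform(0, 2 * np.pi, 2000)
verts = np.column_stack([np.sqrt(1 - u**2) * np.cos(phi),
                         np.sqrt(1 - u**2) * np.sin(phi),
                         u])

for depth in range(5):                      # analogue of noise depths 0..4
    r = 1.0 + fbm_noise(verts, octaves=depth)
    bumpy = verts * r[:, None]              # shift vertices radially
```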
Surfaces were rendered as eight-bit grayscale images with a resolution of 400 × 400 pixels. Rendering was performed using the Eucalyptus Grove light field; the resulting images for each condition are shown in Figure 4. This light field was suitable because its average anisotropic illumination direction was oriented approximately from above (Kim, Marlow, & Anderson, 2011).
Figure 4
 
Images of the stimuli used in Experiment 1. The top row contains the opaque stimuli and bottom row contains the translucent stimuli. Each row consists of five levels of noise depth, increasing from 0 to 4 (generating different spatial-frequency bumps of 0.02, 0.04, 0.06, 0.08, and 0.10 bumps/°, respectively).
Procedure
We used a two-alternative forced-choice (2AFC) method to obtain psychophysical measures of the perceived number of bumps. Observers were informed there would be two images on the computer screen: one on the left-hand side and another on the right-hand side. These images were viewed freely. Observers were instructed to "use the left and right arrow keys to indicate which of the images appeared to contain more bumps across the surface depicted." For each of the two images, participants were told to compare bumps across the same regions of image space, and to base their decision on the surface as a whole rather than on one local region. Images were presented for 5 s on each trial (to ensure participants had enough time to observe the bumps), after which the images disappeared, prompting a response from the observer. Observers were also able to indicate their choice during the 5-s presentation. Following a response, the screen was blanked for 3 s prior to the start of the subsequent trial.
Each trial contained two images of surfaces that were either translucent or opaque and had one of the five 3D shapes. Both surfaces in a pair were presented in the same orientation, but this orientation was randomized between 0° and 360° across trials. Each of the 45 unique image pairs was presented twice (two repeats) in a given test session to counterbalance presentations across the display, giving 90 trials in total. The order of the trials was a different random permutation for each observer. The task took up to approximately 10 minutes to complete.
Statistical analysis
For each observer, we computed the probability that each image of a particular level of noise depth and opacity was selected as having more bumps, determined by dividing the number of times it was selected by the number of times it was presented. Thus, a probability of 0.5 means the image was chosen as the one with more bumps on 9 of its 18 presentations. A two-way repeated-measures analysis of variance (ANOVA) was used to test for main and interaction effects on the probability of perceiving more bumps across conditions (2-level opacity × 5-level noise depth).
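The trial structure and scoring can be sketched as follows; observer_response is a hypothetical stand-in for the observer's keypress, and the counts reproduce the bookkeeping described above (45 unique pairs × 2 repeats = 90 trials; each image appears in 9 pairs × 2 repeats = 18 presentations).

```python
import random
from collections import Counter
from itertools import combinations

def observer_response(a, b):
    """Hypothetical stand-in for a real observer's 2AFC keypress."""
    return random.choice((a, b))

# Ten stimuli: 2 opacities x 5 noise depths.
stimuli = [(op, nd) for op in ("opaque", "translucent") for nd in range(5)]
trials = list(combinations(stimuli, 2)) * 2   # 45 unique pairs x 2 = 90 trials

chosen = Counter()   # times each image was judged to have more bumps
shown = Counter()    # times each image was presented
for a, b in trials:
    shown[a] += 1
    shown[b] += 1
    chosen[observer_response(a, b)] += 1

# Probability each image was selected as having more bumps (e.g., 9/18 = 0.5).
p_more_bumps = {s: chosen[s] / shown[s] for s in stimuli}
print(p_more_bumps[("opaque", 4)])
```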
Results
Figure 5 shows the probability that each image was selected as having more bumps, plotted as a function of the five levels of noise depth. The red data points represent translucent surfaces and the black data points represent opaque surfaces. A two-way ANOVA found that opaque objects appeared to have significantly more bumps than translucent objects (main effect of opacity averaged over noise depth), F(1, 5) = 45.62, p = 0.001. There was also a significant main effect of noise depth, such that the probability an image was selected as having more bumps increased monotonically with noise depth, averaged over opacity, F(4, 20) = 111.3, p < 0.001. There was also a significant interaction between opacity and noise depth, F(4, 20) = 3.217, p < 0.05, which indicates that the highest levels of noise depth produced the largest differences in perceived mesoscopic shape between the opaque and translucent surfaces. The significance of all the results reported was identical when the primary author was excluded from the analysis.
Figure 5
 
Mean probabilities (±SEM) of perceiving more bumps for each level of noise depth. Different colors are used to plot data obtained with translucent (red) and opaque conditions (black).
Discussion
Our results showed that the perceived number of bumps increased monotonically with noise depth for both translucent and opaque objects. We also found that subsurface scattering had a larger effect on the perceived number of bumps at higher noise depths than at smoother spatial scales (the significant interaction effect). Overall, these results suggest that the perception of mesoscopic shape is strongly influenced by surface opacity. However, these results may underestimate the differences in perceived 3D shape between opaque and translucent surfaces, because observers could have determined which surface had more bumps using the shape of the bounding contour and not just the pattern of luminance gradients. Hence, in the next experiment, we consider the effect of the bounding contour on perceived shape judgments.
Experiment 2
Despite the differences in the perceived number of bumps between opaque and translucent surfaces (Experiment 1), the bounding contours of the objects were identical at each level of noise depth. The bounding contour has previously been shown to serve as a strong visual cue to internal surface shape (Mamassian & Kersten, 1996). Although the interior shape of the object appeared different between translucent and opaque images, the similar bounding contours across these conditions may also have influenced judgments of the number of bumps.
In Experiment 2, we tested whether global shape information provided by the bounding contour influenced observer judgments of the number of bumps. If perceived mesoscopic shape depends on the bounding contour, then eliminating the bounding contour from the image should increase the difference in the perceived number of bumps between opaque and translucent objects. This was expected because eliminating the bounding contour would force observers to rely only on local shading gradients to estimate shape. 
Methods
Observers
A total of five adult observers (who were also observers in Experiment 1) participated in this experiment. All observers had normal or corrected-to-normal vision. The procedures were conducted in accordance with the Declaration of Helsinki and were approved by the Human Research Ethics Advisory Panel at The University of New South Wales.
Stimuli
Experiment 2 added a set of images identical to those used in Experiment 1, except that each new image contained an occluder that masked the bounding contour. Figure 6 shows examples of the stimuli used in the occluder condition.
Figure 6
 
Examples of the stimuli used in the occluder condition of Experiment 2. The surfaces were the same opaque (A) and translucent (B) surfaces as those used in Experiment 1, except that the bounding contour was no longer visible.
Procedure
Experiment 2 followed the same procedure as Experiment 1, except that each observer performed the experiment twice: once with and once without the occluder present. This resulted in a total of 180 trials. The order of the occluder and no-occluder blocks of trials was randomly determined for each observer. Experiments 1 and 2 used the same observers; three of the five performed Experiment 2 before Experiment 1. Only one was an author (NC).
Statistical analysis
For the dependent variable (the probability of perceiving more bumps), a 2 × 5 × 2 (occlusion × noise depth × opacity) three-way repeated-measures ANOVA was conducted.
Results
Figure 7 shows the probability that an image was selected as having more bumps, plotted as a function of the five levels of noise depth, for translucent (red data points) and opaque objects (black points). Separate sets of axes show data with and without an occluder present. Visual inspection reveals a larger separation between the curves for the perceived number of bumps of translucent and opaque surfaces with the occluder than without it.
Figure 7
 
Mean probabilities (±SEM) of perceiving more bumps for each level of noise depth, across translucent and opaque conditions, with and without an occluder present.
There was a significant main effect of opacity, such that participants were more likely to perceive opaque objects as having more bumps than translucent objects, averaged over occlusion and noise depth, F(1, 4) = 845.01, p < 0.001. There was a significant main effect of noise depth, such that the perceived number of bumps increased monotonically with noise depth, F(4, 16) = 185.56, p < 0.001. There was no significant main effect of occlusion, as expected given that the paired comparisons were repeated across occlusion conditions, F(1, 4) = 2.38, p = 0.198. However, there was a significant interaction between occlusion and opacity, such that the difference in the perceived number of bumps between opaque and translucent objects was greater with an occluder than without one, F(1, 4) = 47.59, p = 0.002.
There was a significant interaction between occlusion and noise depth, such that the increase in the perceived number of bumps across levels of noise depth was greater without an occluder than with an occluder, F(4, 16) = 25.05, p < 0.001. There was also a significant interaction between opacity and noise depth, such that the increase in the perceived number of bumps across levels of noise depth was greater for opaque objects than for translucent objects, F(4, 16) = 21.94, p < 0.001. Finally, there was no significant three-way interaction, indicating that higher levels of noise depth produced larger differences in apparent mesoscopic shape for both translucent and opaque surfaces, irrespective of whether self-occluding edges were visible, F(4, 16) = 1.69, p = 0.20. The significance of all the results reported was the same when the primary author was excluded from the analysis.
Discussion
Experiment 2 replicated the effect of translucency on mesoscopic shape perception seen in Experiment 1: translucent objects were perceived as having fewer bumps than opaque objects with the same physical 3D shape. Experiment 2 also replicated the stronger effect of opacity on perceived shape for surfaces with higher noise depths. However, we further found greater differences in the apparent frequency of bumps between translucent and opaque surfaces when the self-occluding edges of the surfaces were hidden from view. Indeed, without self-occluding edges, observers were unable to discriminate between the highest and lowest bump frequencies when the objects were rendered translucent. These results provide additional evidence that the bounding contour of an object serves as a cue to mesoscopic shape (e.g., Knill & Kersten, 1991; Ramachandran, 1988; Todorović, 2014).
It is possible that the interaction between noise depth and opacity is caused by a "washing out" effect, whereby the steepness of the shading gradients is lower for images of the translucent object than for the opaque object. Increasing noise depth increases surface curvature, resulting in sharper shading gradients for opaque surfaces. However, increasing noise depth for translucent surfaces can have the opposite effect and reduce the sharpness of luminance gradients. This effect appears to be caused by subsurface scattering reducing high spatial-frequency contrast, and this decline in contrast is strongest for the small and thin bumps generated at higher noise depths.
Figure 8 plots the horizontal luminance profile through the images of surfaces in each of the 10 conditions (noise depth × opacity). Note that the luminance profiles vary more locally with increases in noise depth for the opaque surface, and that these local variations are largely absent from the luminance profiles of the translucent surfaces as noise depth increases. A two-way ANOVA found significant main effects of both noise depth, F(4, 16) = 178.4, p < 0.00001, and opacity, F(1, 4) = 5191, p < 0.000001, on the change in luminance (steepness of local gradients) across the image. These results suggest that the effect of opacity on perceived shape depends on the structure of luminance gradients. In the next experiment, we consider the effect this apparent smoothing of luminance gradients has on the perception of local surface orientation.
Figure 8
 
Horizontal luminance profiles for central image regions of the opaque object (left) and translucent object (right). Different colors show the luminance at each pixel location for different noise depths. The scale in the upper right is 50 pixels horizontal and 0.3 intensity. Note that traces have been vertically displaced for clarity.
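A minimal sketch of this profile analysis, assuming hypothetical filenames for the rendered images: read one horizontal scan line and summarize gradient steepness as the mean absolute luminance difference between neighboring pixels. At matched noise depth, the translucent render is expected to give the smaller value.

```python
import numpy as np
from PIL import Image

def profile_steepness(path, row=200):
    """Mean absolute luminance change between neighboring pixels along one
    horizontal scan line: a simple index of shading-gradient steepness."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    profile = img[row, :]          # horizontal luminance profile at this row
    return np.mean(np.abs(np.diff(profile)))

# Hypothetical filenames for matched opaque/translucent renders.
for name in ("opaque_depth4.png", "translucent_depth4.png"):
    print(name, profile_steepness(name))
```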
Experiment 3
To further explore the relationship between perceived 3D shape and the luminance gradients of translucent and opaque surfaces, Experiment 3 measured the perception of local surface orientation in four of the surface images from Experiments 1 and 2. The four surfaces had either low or high noise depth and either opaque or translucent material properties. We also presented images with self-occluding edges either visible or occluded from view. Observers adjusted the orientation of gauge probe figures so that the probe appeared to lie on the tangent plane of the surface at the location of the probe (Koenderink, van Doorn, & Kappers, 1992). We hypothesized that perceived differences in local orientation would be lower for translucent than for opaque surfaces.
We were primarily interested in the relationship between image intensity and perceived 3D surface orientation. The perceived surface orientations of the opaque surfaces were expected to exhibit an approximately cosine relationship with image intensity: the brightest regions of the opaque surfaces should appear to face the primary lighting direction, and increasingly darker regions should appear to have increasingly larger angular separations from that direction. If the relationship between image intensity and perceived surface orientation is similar for the translucent and opaque surfaces, this would suggest that the misperception of 3D shape for the translucent surfaces is due in part to the visual system treating translucent image gradients in the same way as opaque shading gradients.
Methods
Observers
Four adult observers (author NC and three naive) participated in the experiment. All observers had normal or corrected-to-normal vision. The procedures were conducted in accordance with the Declaration of Helsinki, and were approved by the Human Research Ethics Advisory Panel at The University of New South Wales. 
Stimuli
We tested two of the surface geometries from Experiments 1 and 2: noise depth 0 and noise depth 3. We did not test the highest noise depth from Experiment 1 because its surface orientation varied too rapidly over the image to measure using gauge probes. Images were presented with or without the self-occluding contour. This resulted in eight images: translucent and opaque surfaces with high or low noise depth, presented with self-occluding contours either visible or cropped as in Experiment 2. We rendered 4,000 × 4,000 pixel images rather than 400 × 400 pixels to maximize quality when images were enlarged by a factor of 2 for measuring perceived surface orientation.
The gauge probe experiment was run using custom software written in MATLAB r2014b. A sample observer's view using this psychophysical software is shown in Figure 9A. The observer used the mouse to control the orientation of a red gauge probe with a circular base (diameter = 35 pixels, visual angle = 0.6°) and vertical line normal to the base (length = 18 pixels, line thickness = 3.5 pixels) overlaid on one location of the surface. The location of all gauge probes is shown in Figure 9B with and without self-occluding edges. 
Figure 9
 
(A) Example of a trial in Experiment 3. The red circle with the line segment (inset on the bottom right) shows the gauge probe, which could be rotated in azimuth and elevation to match the orientation of the surface at the probe point. (B) The 11 × 11 probe matrix, both with and without an occluder, from which surface orientation was measured.
Procedure
For each image, the observer was instructed to set the orientation of the red gauge probe so that the circular base of the probe appeared to lie flat on the tangent plane of the surface and the line segment of the probe represented the surface normal. The distance of the mouse cursor from the center of the screen determined the slant of the gauge probe relative to the observer's viewing direction, and the angle of the mouse cursor relative to the screen center determined the tilt of the gauge probe (the mouse cursor was hidden from view). Observers hit the space bar when they were satisfied with the orientation of the gauge probe, advancing them to the next local patch of the surface (thus one gauge probe was presented at a time). For each of the eight images, there were 121 settings in total, located within an 11 × 11 matrix on the inner region of the surface (see Figure 9). Each observer performed the settings twice to improve the reliability of our measures. The order of trials was a different random permutation for each observer.
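A sketch of this mouse-to-probe mapping; the slant gain (degrees of slant per pixel of cursor eccentricity) is an assumed calibration constant, as the paper does not report one.

```python
import numpy as np

def probe_normal(mouse_xy, screen_center, slant_gain=0.25):
    """Map a mouse position to a gauge-probe normal: distance from the screen
    center sets slant (angle from the viewing direction), and the angle of the
    cursor around the center sets tilt. slant_gain (deg/pixel) is an assumed
    calibration constant, not taken from the paper."""
    dx, dy = np.subtract(mouse_xy, screen_center)
    slant = np.radians(min(slant_gain * np.hypot(dx, dy), 89.0))
    tilt = np.arctan2(dy, dx)
    # probe normal in viewer coordinates (+z toward the observer)
    return np.array([np.sin(slant) * np.cos(tilt),
                     np.sin(slant) * np.sin(tilt),
                     np.cos(slant)])

print(probe_normal((860, 540), (960, 540)))  # cursor 100 px left of center
```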
Statistical analyses
We correlated the luminance at each tested location with its perceived surface orientation to assess whether perceived surface orientation varied with image intensity in a manner consistent with Lambertian reflectance. Calibrated pixel intensities, measured with a handheld Minolta light meter, were used to extract luminance at each probe point (in cd/m²). Perceived surface orientation was represented in polar coordinates using three variables: the upper pole of the polar coordinate system, a direction vector representing a possible illumination direction; the angular separation of surface normals from the pole, a linear variable in the range 0° to 180°; and the tilt of surface normals relative to the pole. We measured the correlation between luminance and angular separation for 300 different directions of the pole, sampling the spherical space of potential illumination directions, and plotted luminance as a function of angular separation relative to the pole that exhibited the highest negative correlation.
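The pole search could be sketched as follows: sample 300 candidate illumination directions roughly uniformly over the sphere, compute the angular separation of each perceived normal from each candidate pole, and keep the pole whose separations correlate most negatively with luminance. The Fibonacci sampling scheme is an assumption; the paper does not state how the 300 directions were chosen.

```python
import numpy as np

def fibonacci_sphere(n=300):
    """Roughly uniform sample of n unit direction vectors on the sphere."""
    i = np.arange(n)
    z = 1.0 - 2.0 * (i + 0.5) / n
    phi = np.pi * (1.0 + 5 ** 0.5) * i
    r = np.sqrt(1.0 - z**2)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def best_pole(luminance, normals, n_poles=300):
    """For each candidate pole, correlate luminance with the angular
    separation between each perceived normal and that pole; return the
    pole giving the strongest negative correlation."""
    best = (None, 0.0)
    for pole in fibonacci_sphere(n_poles):
        sep = np.degrees(np.arccos(np.clip(normals @ pole, -1.0, 1.0)))
        r = np.corrcoef(luminance, sep)[0, 1]
        if r < best[1]:
            best = (pole, r)
    return best

# toy check: Lambertian-like data recovers the true light direction
true_L = np.array([0.0, 1.0, 0.0])
normals = fibonacci_sphere(121)              # stand-in for 121 probe normals
lum = np.clip(normals @ true_L, 0, None)
pole, r = best_pole(lum, normals)
print(np.round(pole, 2), round(r, 2))        # pole near (0, 1, 0), r < 0
```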
To further investigate differences in perceived shape across the eight conditions, we also computed the average local angular separation between adjacent surface normals (horizontally and vertically) for each observer, with lower angular separation indicating lower perceived curvature. We conducted a 2 × 2 × 2 (occlusion × noise depth × opacity) three-way repeated-measures ANOVA with perceived curvature as the dependent variable.
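A minimal sketch of this curvature index, assuming the settings are stored as an 11 × 11 × 3 array of gauge-probe normals:

```python
import numpy as np

def mean_adjacent_separation(normals_grid):
    """Perceived-curvature index: mean angular separation (deg) between
    horizontally and vertically adjacent gauge-probe normals on the
    11 x 11 settings grid. Flatter settings give smaller values."""
    n = normals_grid / np.linalg.norm(normals_grid, axis=-1, keepdims=True)
    angles = []
    for a, b in ((n[:, :-1], n[:, 1:]),    # horizontal neighbors
                 (n[:-1, :], n[1:, :])):   # vertical neighbors
        dots = np.clip(np.sum(a * b, axis=-1), -1.0, 1.0)
        angles.append(np.degrees(np.arccos(dots)).ravel())
    return np.concatenate(angles).mean()

# toy check: a perfectly flat field of normals has zero curvature
flat = np.tile([0.0, 0.0, 1.0], (11, 11, 1))
print(mean_adjacent_separation(flat))   # 0.0
```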
Results
Gauge probe settings
The gauge probe settings made for each condition are plotted in Figure 10 for one model observer. Note that the perceived surface orientation has less slant relative to the viewing direction (i.e., was more fronto-parallel) for the translucent surfaces compared with the opaque surfaces, particularly when the self-occluding contour was hidden from view. 
Figure 10
 
Images showing a model observer's gauge probe settings for each of the eight conditions in Experiment 3. (A) Opaque surfaces with low (left) and high (right) noise depth. (B) Translucent surfaces with low (left) and high (right) noise depth. Gauge probes overlaid on images of surfaces in full view or with the bounding contour occluded.
In order to compare perceived and physical surface orientations, we computed the slant of the gauge probe settings (relative to the viewer) as a proportion of the ground-truth orientation of the surface normal. Table 1 shows the mean and 95% confidence interval of the slant of the apparent surface normal averaged over observers and probe locations. These data show that the perceived surface orientation was significantly flatter than veridical for all conditions. The underestimation of slant was greater for translucent than opaque surfaces, and when the self-occluding contour was hidden from view. 
Table 1
 
Slant of perceived surface normals in proportion to ground truth (mean of all observers).
Relationship between luminance and angular separation
Figure 11 plots luminance as a function of angular separation relative to the pole that had the highest negative correlation, for each of the eight conditions.
Figure 11
 
Plots of luminance as a function of angular separation (between the perceived surface normal vector and the primary lighting vector) for the translucent and opaque surfaces of high and low noise depth, both with and without an occluder. Insets show the corresponding images for each of the conditions plotted.
Pearson's correlations revealed a significant negative correlation between luminance and angular separation in all eight conditions: the full opaque high noise depth surface, r(120) = −0.84, R² = 0.71, p < 0.001; the occluded opaque high noise depth surface, r(120) = −0.82, R² = 0.68, p < 0.001; the full opaque low noise depth surface, r(120) = −0.79, R² = 0.63, p < 0.001; the occluded opaque low noise depth surface, r(120) = −0.78, R² = 0.62, p < 0.001; the full translucent high noise depth surface, r(120) = −0.84, R² = 0.70, p < 0.001; the occluded translucent high noise depth surface, r(120) = −0.68, R² = 0.47, p < 0.001; the full translucent low noise depth surface, r(120) = −0.75, R² = 0.57, p < 0.001; and the occluded translucent low noise depth surface, r(120) = −0.82, R² = 0.67, p < 0.001.
Perceived curvature
Figure 12 shows the average perceived curvature across observers, where higher values indicate higher angular separations between adjacent gauge probe settings. The three-way repeated-measures ANOVA revealed significant main effects of opacity and occlusion, such that perceived curvature was lower for translucent images than for opaque images, F(1, 3) = 10.27, p < 0.05, and for occluded images than for nonoccluded images, F(1, 3) = 13.17, p < 0.05. There was no main effect of noise depth on perceived curvature, F(1, 3) = 3.86, p = 0.14. However, we did find an interaction between noise depth and opacity: the difference in perceived curvature between translucent and opaque surfaces was greater when relief was higher (i.e., at higher noise depths), F(1, 3) = 27.39, p < 0.05. There were no other significant interaction effects (ps > 0.05).
Figure 12
 
Graphs of perceived curvature for the translucent and opaque images of high and low noise depth, both with and without an occluder. Perceived curvature was operationalized as the average angular separation between adjacent surface normals (horizontally and vertically).
Discussion
Overall, the results suggest that perception of local surface orientation underestimates slant, consistent with previous studies showing an underestimation of depth (e.g., Mingolla & Todd, 1986). The underestimation of slant was greater for translucent than opaque surfaces and for surfaces without visible self-occluding edges. Perceived surface orientation was correlated with image intensity for both translucent and opaque surfaces: as luminance declined, angular separation from one possible illumination direction increased for both surface types. Hence, the misperception of local surface orientation in translucent surfaces appears to be due to these surfaces generating shallow shading gradients that mimic opaque surfaces with lower mesoscopic curvature. Although our gauge probe data do not directly measure perceived surface curvature, our data on the angular separation between adjacent probe locations suggest that translucent surfaces appear to have lower curvature than opaque surfaces. This is consistent with the results of Experiments 1 and 2, in which translucent surfaces appeared to have fewer bumps than opaque surfaces.
General discussion
In this study, we explored the effect of translucency on the perception of 3D shape. In Experiment 1, we found that the perceived number of bumps was lower for translucent surfaces than for opaque surfaces, and that this effect was stronger at higher levels of noise depth. In Experiment 2, we found that the difference in the perceived number of bumps between translucent and opaque surfaces was larger in the absence of the self-occluding contour, which indicated that the shape of the bounding contour is used in part to perceptually infer mesoscopic shape.
The results of Experiments 1 and 2 suggest that the mesoscopic shape of translucent surfaces is misperceived because their luminance gradients have similar contrast and spatial frequency to those of smoother opaque surfaces. To test this idea, in Experiment 3 we measured perceived surface orientation at multiple probe points across the surface using the gauge probe task (Koenderink et al., 1992). In this task, observers adjusted the orientation of gauge probes across the image so that the probes appeared to lie on the tangent plane of the shaded surface depicted. We found that gauge probe settings were oriented more fronto-parallel for images of translucent surfaces than for images of opaque surfaces, and more fronto-parallel than the physical surface normals. We also found that perceived curvature was lower for translucent than for opaque surfaces, suggesting that surface translucency decreases not only the perceived number of mesoscopic shape perturbations but also the perceived rate of local surface curvature. We also correlated luminance with perceived surface orientation relative to the primary illumination direction at each probe point and found that perceived surface orientation varied systematically with image intensity for both opaque and translucent surfaces. This suggests that the misperception of 3D shape in our translucent surfaces may be due to the visual system recovering 3D shape from the translucent luminance gradients as though they were opaque shading gradients.
Overall, our findings support the notion that the study of perceived shape is further complicated when considering variations in the material properties of surfaces (i.e., the opacity of the surface). Previous studies have shown that the visual system underestimates the surface curvature and depth of opaque surfaces (Curran & Johnston, 1996; Mingolla & Todd, 1986; Todd & Mingolla, 1983; Wijntjes et al., 2012), and our results suggest that significantly larger errors in perceived 3D shape occur for translucent surfaces. We found that observers underestimated the number of bumps (Experiments 1 and 2) and perceived local curvature in translucent surfaces relative to opaque surfaces (Experiment 3). Overall, our findings agree with previous studies that have demonstrated compelling changes in the perceived shape of surfaces with complex reflectance functions, such as surfaces with specular reflections (Mooney & Anderson, 2014) and velvety surfaces (Wijntjes et al., 2012). Our data further encourage future investigations of 3D shape perception in other complex materials. For instance, as the findings of Fleming, Jäkel, and Maloney (2011) would suggest, shape judgments may be biased (relative to veridical) by the image distortions generated by the refractive properties of thick transparent objects. However, midlevel processing of 3D shape and the assumed structure of the light field also appear to be important for the perception of transparency in these refractive objects (Kim & Marlow, 2016).
Experiments 1 and 2 also found that the effect of translucency on perceived shape occurs primarily at higher noise depths rather than lower noise depths. This finding points to a link between the blurring of higher spatial-frequency shading and the underestimated mesoscopic shape of translucent objects. Studies have shown that translucent objects generate images that have lower shading contrast overall, compared with opaque versions of the same surface geometry examined under identical viewing and lighting conditions (Fleming & Bülthoff, 2005; Motoyoshi, 2010). Motoyoshi (2010) noted that translucent objects tend to generate nonspecular shading that primarily lacks contrast in higher spatial frequencies. However, that study did not measure the effects of changing surface opacity on perceived 3D shape. Our measurements indicate that perceived mesoscopic shape is highly dependent on opacity, and that the perceived shape of translucent surfaces can be explained by the associated changes in shading (Experiment 3). This suggests that the decline in shading contrast caused by subsurface scattering of light is likely to underlie the smoothing of perceived shape profiles for translucent surfaces. The strong correlations between perceived surface orientation and luminance found in Experiment 3 also suggest that the shape-from-shading estimates made for translucent surfaces may be achieved in a similar way to those made for opaque surfaces. As such, translucent surfaces with greater relief can have shading profiles that are similar to those of opaque surfaces with lower relief, resulting in the visual system misperceiving the shape of these translucent objects. This reasoning also agrees with the observation made by Wijntjes et al. (2012) that shading assumptions identical to those attributed to matte surfaces appeared to be used to estimate the 3D shape of velvety surfaces.
In interpreting the observed effects of material on perceived mesoscopic shape, some methodological issues should be noted. Observers were instructed to estimate the number of bumps on the surfaces depicted in the images we presented. This estimate is a different perceptual measure from estimates of the local curvature of surfaces. It might be advantageous in future to obtain measures of perceived curvature at increasing levels of translucency. In all our experiments we considered only one level of translucency within the same light field. However, the structure of luminance variations in an image depends on interactions between shape, opacity, and the structure of the light field. Hence, future work should consider the potential effects that varying illumination and material composition might have on the perception of mesoscopic shape. Another consideration is that the effect of translucency on perceived shape may be partially accounted for by the simple blurring effect that arises from subsurface scattering (Fleming & Bülthoff, 2005), rather than statistical properties of the images per se (see also Motoyoshi, 2010; Giesel & Zaidi, 2013).
In summary, our study is the first, to our knowledge, to systematically investigate the effect of translucency on perceived 3D shape. The perceived 3D shape of a translucent surface is significantly "smoother" than its physical shape, and this appears to be related to translucent image gradients having lower contrast at high spatial frequencies than those of identically shaped opaque surfaces. These findings support the view that shading is influenced not only by the geometry of a surface or the structure of the light source, but also by the reflectance properties of the surface, and this appears to lead to compelling changes in the perception of shape. A multitude of materials exist in the real world, each exhibiting a combination of many different reflectance and/or transmittance properties (e.g., diffuse, specular, translucent, or transparent). By understanding how shape perception changes as a function of each material property in isolation, we can hopefully improve the accuracy of models that account for the perception of potentially complex configurations of these materials in the real world.
Acknowledgments
The authors would like to thank S. Prajapati for his contributions to some of the figures in this paper and B. Anderson for his valuable advice and support. This research was funded by an Australian Research Council (ARC) Future Fellowship awarded to JK (FT140100535). 
Commercial relationships: none. 
Corresponding author: Nahian Chowdhury. 
Address: School of Optometry and Vision Science, University of New South Wales, Kensington, New South Wales, Australia. 
References
Ashikhmin, M., Premože, S., & Shirley, P. (2000). A microfacet-based BRDF generator. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00), 65–74.
Curran, W., & Johnston, A. (1996). The effect of illuminant position on perceived curvature. Vision Research, 36, 1399–1410.
Erens, R. G. F., Kappers, A. M. L., & Koenderink, J. J. (1993). Perception of local shape from shading. Perception & Psychophysics, 54, 145–156.
Fleming, R. W., & Bülthoff, H. H. (2005). Low-level image cues in the perception of translucent materials. ACM Transactions on Applied Perception, 2, 346–382.
Fleming, R. W., Dror, R. O., & Adelson, E. H. (2003). Real-world illumination and the perception of surface reflectance properties. Journal of Vision, 3 (5): 3, 347–368, doi:10.1167/3.5.3. [PubMed] [Article]
Fleming, R. W., Jäkel, F., & Maloney, L. T. (2011). Visual perception of thick transparent materials. Psychological Science, 22, 812–820.
Georgieva, S. S., Todd, J. T., Peeters, R., & Orban. G. A. (2008). The extraction of 3D shape from texture and shading in the human brain. Cerebral Cortex, 18, 2416–2438.
Giesel, M., & Zaidi, Q. (2013). Frequency-based heuristics for material perception. Journal of Vision, 13 (14): 7, 1–19, doi:10.1167/13.14.7. [PubMed] [Article]
Gkioulekas, I., Xiao, B., Zhao, S., Adelson, E. H., Zickler, T., & Bala, K. (2013). Understanding the role of the phase function in translucent appearance. ACM Transactions on Graphics, 32, 1–19.
Ho, Y., Landy, M. S., & Maloney, L. T. (2008). Conjoint measurement of gloss and surface texture. Psychological Science, 19, 196–204.
Horn, B. K. P. (1970). Shape from shading: A method for obtaining shape of a smooth opaque object from one view (Research report No. 232). Retrieved from http://dspace.mit.edu/handle/1721.1/6885
Jensen, H. W., & Buhler, J. (2001). A rapid hierarchical rendering technique for translucent materials. ACM Transactions on Graphics, 21, 576–581.
Kim, J., Marlow, P., & Anderson, B. L. (2011). The perception of gloss depends on highlight congruence with surface shading. Journal of Vision, 11 (9): 4, 1–19, doi:10.1167/11.9.4. [PubMed] [Article]
Kim, J., & Marlow, P. J. (2016). Turning the world upside down to understand perceived transparency. i-Perception, 7.
Kleffner, D. A., & Ramachandran, V. S. (1992). On the perception of shape from shading. Perception & Psychophysics, 52, 18–36.
Knill, D. C., & Kersten, D. (1991). Apparent surface curvature affects lightness perception. Nature, 351, 228–230.
Koenderink, J. J., van Doorn, A. J., & Kappers, A. M. L. (1992). Surface perception in pictures. Perception & Psychophysics, 52, 487–496.
Lu, R., Koenderink, J. J., & Kappers, A. M. L. (1998). Optical properties (bidirectional reflection distribution functions) of velvet. Applied Optics, 37, 5974–5984.
Mamassian, P., & Kersten, D. (1996). Illumination, shading and the perception of local orientation. Vision Research, 36, 2351–2367.
Mingolla, E., & Todd, J. T. (1986). Perception of solid shape from shading. Biological Cybernetics, 53, 137–151.
Mooney, S. W. J., & Anderson, B. L. (2014). Specular image structure modulates the perception of three-dimensional shape. Current Biology, 24, 2737–2742.
Motoyoshi, I. (2010). Highlight-shading relationship as a cue for the perception of translucent and transparent materials. Journal of Vision, 10 (9): 6, 1–11, doi:10.1167/10.9.6. [PubMed] [Article]
Nefs, H. T., Koenderink, J. J., & Kappers, A. M. L. (2006). Shape-from-shading for matte and glossy objects. Acta Psychologica, 121, 297–316.
Nicodemus, F. (1965). Directional reflectance and emissivity of an opaque surface. Applied Optics, 4, 767–775.
Ramachandran, V. S. (1988). Perception of shape from shading. Nature, 331, 163–166.
Tittle, J. S., & Braunstein, M. L. (1993). Recovery of 3-D shape from binocular disparity and structure from motion. Perception & Psychophysics, 54, 157–169.
Todd, J. T., & Mingolla, E. (1983). Perception of surface curvature and direction of illumination from patterns of shading. Journal of Experimental Psychology: Human Perception and Performance, 9, 583–595.
Todorović, D. (2014). How shape from contours affects shape from shading. Vision Research, 103, 1–10.
Wijntjes, M. W. A., Doerschner, K., Kucukoglu, G., & Pont, S. C. (2012). Relative flattening of velvet and matte 3D shapes: Evidence for similar shape-from-shading computations. Journal of Vision, 12 (1): 2, 1–11, doi:10.1167/12.1.2. [PubMed] [Article]