Research Article  |   September 2010
Highlight–shading relationship as a cue for the perception of translucent and transparent materials
Journal of Vision September 2010, Vol.10, 6. doi:10.1167/10.9.6
      Isamu Motoyoshi; Highlight–shading relationship as a cue for the perception of translucent and transparent materials. Journal of Vision 2010;10(9):6. doi: 10.1167/10.9.6.

Abstract

Natural surfaces, such as those of food and drink, have translucent properties. Translucent materials involve complex optics, such as sub-surface scattering and refraction, but humans can easily distinguish them from opaque materials. Here, we investigated image features that are diagnostic of perceived translucency and transparency, focusing on the fact that variations in the opacity of a surface largely affect the non-specular component (shading pattern) of an image while having little effect on the specular component (highlights). In a simple rating experiment with computer-generated objects, we show that the non-specular image component tends to be blurred, faint, and even partially contrast-reversed for objects that appear more translucent or transparent. A subsequent experiment further demonstrated that manipulating the contrast and blur of the non-specular image component dramatically alters the apparent translucency of an opaque object. The results support the notion that the spatial and contrast relationship between specular highlights and non-specular shading patterns is a robust cue for the perceived translucency and transparency of three-dimensional objects.

Introduction
Humans can effortlessly perceive not only the lightness and color of an object but also various other properties, such as its glossiness and roughness (Adelson, 2001). Previous work on surface-quality perception has generally addressed the apparent color and lightness of flat and matte surfaces arranged in simple (Heinemann, 1955; Jameson & Hurvich, 1961; Wallach, 1948) or complex (Adelson, 1999; Bloj, Kersten, & Hurlbert, 1999; Boyaci, Maloney, & Hersh, 2003; Brainard, 1998; Gilchrist, 2006) scenes. Recently, an increasing number of studies have been employing natural surfaces with complex three-dimensional (3D) structures and examining a variety of attributes, such as glossiness (Beck & Prazdny, 1981; Blake & Bülthoff, 1990; Fleming, Dror & Adelson, 2003; Nishida & Shinya, 1998), bumpiness (Ho, Landy, & Maloney, 2006), and translucency (Fleming & Bülthoff, 2005). Some studies have explored and revealed the textural features of a surface image that are useful as cues for estimating a certain material property. For example, Motoyoshi, Nishida, Sharan, and Adelson (2007) and Sharan, Li, Motoyoshi, Nishida, and Adelson (2008) showed that skewness (or something like it) in a luminance histogram is a robust cue for perceiving glossiness and lightness. Ho, Landy, and Maloney (2008) also demonstrated a strong correlation between the luminance contrast of the surface image and the apparent glossiness and bumpiness. These findings reveal an important information source for the visual estimation of glossiness and bumpiness, even though they do not always entirely explain the perception (Anderson & Kim, 2009; Motoyoshi et al., 2007). 
Many natural surfaces, including human skin and food, have translucent properties. Is there also a diagnostic textural feature for perceiving translucency that can be used in the way that highlights are used with respect to glossiness? Traditional studies of transparency and translucency perception have addressed the question of whether a flat surface appears to be a layer through which a patterned background is seen (Beck, Prazdny, & Ivry, 1984; Fulvio, Singh, & Maloney, 2006; Khang & Zaidi, 2002; Metelli, 1974; Singh & Anderson, 2002a, 2002b). These studies have demonstrated the importance of spatial and luminance relationships between patterns within the target region and its surroundings. However, they do not account for why many 3D objects in the real world, such as pieces of porcelain, candles, and grapes, appear translucent even in the absence of a clearly visible background and surround. 
Light projected onto a translucent 3D object is not only reflected but also transmitted, refracted, absorbed, and scattered beneath the surface. As a result, other surfaces behind the object are sometimes visible, although they are blurred and distorted as a result of refraction and sub-surface scattering. Even if the background is virtually invisible, scattered reflections make the image of a translucent object different from that of an opaque one. These optical effects have been extensively studied in research on photo-realistic computer graphics (Jensen, 2001; Jensen, Marschner, Levoy, & Hanrahan, 2001). 
On the other hand, almost no work has examined how the human visual system estimates the translucency of a 3D object from a single image. The notable exception is Fleming and Bülthoff (2005), who reported an elegant analysis on the perception of translucency. Using computer-generated 3D objects, they investigated the relationships between distal and proximal stimulus parameters and perception and reported basic observations, including the following: (1) Highlights are necessary if the object is to be perceived as naturally translucent, i.e., completely matte translucent surfaces do not give a natural impression of translucency. (2) Images of opaque and translucent objects have different luminance histograms but a similar spatial pattern of luminance gradients (isophotes). (3) As the object becomes more translucent, the luminance contrast of the image (including highlights) is decreased and the spatial edges are blurred. On the other hand, they also found that simple image statistics, such as luminance contrast and skewness, do not predict the perceived translucency. Indeed, an opaque surface does not appear translucent merely because its contrast is low. Fleming and Bülthoff (2005) suggested the involvement of mid-level visual representations. 
The present study explores image features that are correlated with the perceived translucency and transparency of 3D objects. In particular, we focus here on the fact that the optical effects in translucent materials have little or no effect on the pattern of specular highlights. We asked human observers to rate the apparent translucency of computer-generated objects with various opacities and examined the relationships between the rated translucency and simple image statistics of the non-specular part of the image. The results revealed a simple tendency for the object to appear more translucent (or transparent) to observers when the luminance contrast of the non-specular image component was reduced and sometimes reversed. We also found that when the contrast of the non-specular component was artificially reduced and/or reversed, an opaque object was altered so that it appeared translucent or transparent. On the basis of these findings, we suggest that the mismatch between specular highlights and non-specular shading patterns acts as a simple cue for the perceived translucency of objects. 
Experiment 1
To collect the basic perceptual data, we first examined the appearance of 3D objects with various translucencies. 
Methods
Observers
Eight naive paid volunteers (five females and three males; not researchers) and the author (male) served as subjects. They were 29.4 (SD = 5.4) years old on average and had normal or corrected-to-normal vision. The experiment was approved by the NTT Ethics Committee and consent forms were completed. 
Apparatus and stimuli
Visual stimuli were presented on a CRT (SONY GDMF500R, 160 Hz) controlled with a graphics card (CRS ViSage). The images of objects were generated using commercial computer graphics software (NewTek LightWave 9.6). We employed the four objects shown in Figure 1 (Bust, Lion, Buddha, and Dragon), all of which had a complex shape as found with natural objects. Their 3D geometry data were taken from a database at the Stanford Computer Graphics Laboratory (http://graphics.stanford.edu/) and from a commercial database (Evermotion). All the objects were glossy and had sharp specular reflections. Each object was rendered with two types of natural illumination using Debevec's (1998) light-probe images; we used “Eucalyptus” and “Building” because the resulting objects looked naturally glossy. The direction of the light-probe images was close to the original but slightly adjusted so that the surface was not largely covered with extremely bright highlights. We used neither simply shaped objects, such as spheres, nor point-light-source illumination because they are rare in the real world. 
Figure 1
 
Examples of stimuli used in the experiment.
We rendered each object for nine different opacity settings. For six of them, sub-surface scattering was simulated with a variable depth parameter (0, 0.13, 0.25, 0.5, 1, or 2 m; the “SSS2” algorithm of the software). It was assumed that the light ray was refracted with a coefficient of 1.5 (e.g., glass). The resulting objects appeared perfectly opaque, marble-like, or wax-like. For the other three surfaces, in addition to the sub-surface scattering of 2 m, the “transparency” parameter was increased (25%, 50%, and 100%) to allow the light of the environmental illumination to pass through the object. It was assumed that the light was blurred and refracted during the penetration. The resulting object appeared highly translucent, or transparent, like jelly or glass. Since the transmitted light was extensively blurred, it was impossible for subjects to recognize the scene behind these objects. The optical simulation algorithms used in the software are proprietary. To allow replication of the experiment, we provide sample bitmap images as Supplementary material. 
The output RGB image was converted to gray scale using a conventional NTSC formula (i.e., luminance = 0.299R + 0.587G + 0.114B). The mean intensity was multiplicatively adjusted so that the image appeared realistic on the CRT. We utilized the 14-bit look-up table of the graphics card in order to display the stimuli with smooth shading and vivid highlights. The pixel intensity of each gray-scale image was raised to the power 1/2.2 and drawn on VRAM so that the intensity histogram would be closer to Gaussian. The image was then raised to the power 2.2 by the look-up table to recover the linear luminance image on the CRT. To prevent the background pattern surrounding the object from influencing the judgment, it was replaced by a dark uniform field of 3.8 cd/m2. Pixels that exceeded the maximum luminance of the CRT monitor (148 cd/m2) were clipped. There were a total of 72 images: 4 objects × 2 types of illumination × 9 translucencies (six sub-surface scattering depths and three transparencies). The subjects reported that all images looked very realistic. 
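The gray-scale conversion and gamma handling described above can be sketched as follows (a minimal illustration in Python/NumPy; the function name and the assumption that input RGB values are linear and normalized to [0, 1] are ours):

```python
import numpy as np

def to_display_image(rgb, gamma=2.2):
    """Convert a linear RGB image to gray scale with the NTSC formula and
    gamma-encode it for a display whose look-up table applies x ** gamma."""
    # NTSC luminance: Y = 0.299 R + 0.587 G + 0.114 B
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Encode with the inverse gamma; the display LUT restores linear luminance.
    return np.clip(lum, 0.0, 1.0) ** (1.0 / gamma)
```

Applying the look-up table's power of 2.2 to this encoded image recovers the original linear luminance.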
Procedure
We examined the apparent translucency of objects using a simple rating task. We adopted a rating task to investigate coarse variations in the perceived translucency, or transparency as a familiar dimension of surface property (like glossiness and bumpiness), rather than as the perceptual correlate of the sub-surface scattering depth or light transmittance. 
In principle, the subjects could have rated translucency and transparency independently, although it is unclear whether the two are independent attributes. However, we could not ask our observers (all Japanese) to do this because the Japanese language has only one word, “toumei,” to describe non-opaque surfaces. They could only rate the stimuli along a single dimension ranging from opaque through translucent to highly translucent. We therefore simply asked them to rate translucency (or transparency) in that sense. 
The subjects freely viewed the stimulus from a distance of 2 m. The stimulus subtended 7.3 × 7.3 deg in visual angle. All subjects viewed the same stimulus conditions (within-subject design). In an experimental session of 72 trials, stimuli were presented in random order. In each trial, the subjects were asked to rate the object's apparent translucency on a five-point scale: opaque (0), translucent like marble or wax (1–2), or highly translucent like jelly or glass (3–4). The subjects were allowed to imagine a certain prototype (e.g., white marble) but were instructed to rate the translucency along a continuous scale as far as possible. Subjects could view the stimulus with no time limit, but they responded within a few seconds in most trials. The stimulus was displayed until the subjects responded. Each subject repeated the sessions at least four times. The subjects were shown all the stimuli in advance to establish a criterion. This might have introduced a bias in the subjects' responses, but we found that the data of the present experiment were highly correlated (r = 0.97) with those of a pilot experiment in which subjects (n = 7, Lion only) were not shown all the stimuli in advance. 
Image analysis
For the luminance image used in the experiment, we calculated simple statistics, such as RMS contrast. They were calculated not only for pixel luminance but also for sub-bands of various spatial frequencies (4–64 cycles/image; 0.6–8.7 cycles/deg). The sub-band images were derived by using a Gaussian band-pass filter with a bandwidth of 1 octave. To minimize the impact of the contours of the object itself, we only considered a small region of the object, defined by extensively blurring the mask image of the object region and binarizing it. The resulting region looked like a somewhat smaller, smoothed version of the object's silhouette. 
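The sub-band RMS contrast computation can be sketched roughly as below (Python/NumPy; the FFT-based filter implementation, the function name, and the normalization by mean luminance within the mask are our assumptions — the text specifies only a Gaussian band-pass filter with a 1-octave bandwidth):

```python
import numpy as np

def subband_rms_contrast(image, mask, center_cpi, bandwidth_octaves=1.0):
    """RMS contrast of a Gaussian band-pass sub-band (1-octave bandwidth),
    computed only within the eroded object-region mask."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h                       # cycles/image
    fx = np.fft.fftfreq(w) * w
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    # Gaussian band-pass on a log-frequency axis
    logr = np.log2(np.maximum(radius, 1e-9) / center_cpi)
    bp = np.exp(-0.5 * (logr / (bandwidth_octaves / 2.0)) ** 2)
    bp[radius == 0] = 0.0                            # remove the DC term
    band = np.real(np.fft.ifft2(np.fft.fft2(image) * bp))
    # RMS of the band-passed values, relative to the mean luminance
    return np.sqrt((band[mask] ** 2).mean()) / image[mask].mean()
```

For a sinusoidal grating at the filter's center frequency, this returns the grating's RMS amplitude divided by the mean luminance.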
Results
Figure 2 shows the subjects' translucency ratings plotted as a function of the simulated depths of sub-surface scattering (circles) and of light penetration (squares). The rating increases with both optical effects. We now ask what image properties of these surfaces are related to the perceived (rated) translucency. 
Figure 2
 
Mean translucency ratings of objects as a function of the simulated translucency parameters. The lower abscissa represents the simulated depth of sub-surface scattering, and the upper abscissa represents the simulated amount of transparency. Individual lines show the results for each object and illumination. The error bars are ±1 SEM across subjects.
The upper panels in Figure 3 show four stimuli that appeared to have different translucencies. A simple sphere of the same material is also shown in the lower left corner of each image. The inset shows a joint histogram, which represents the pixel-intensity correlation between the opaque object (left) and translucent objects (others). These plots indicate that, as pointed out by Fleming and Bülthoff (2005), the relationships are complex but not random. It appears to be difficult to find a simple word to describe the ways in which the images of translucent and opaque surfaces differ. 
Figure 3
 
Images of objects with different translucencies. The number denotes the mean translucency rating. The leftmost image shows an opaque object. The upper images are the original images, and the lower ones show the image generated with no specular reflection. The insets show the joint luminance histograms between an opaque object (abscissa) and the corresponding translucent object (ordinate).
However, inspecting these images, we notice that the shape and intensity of the highlights are relatively constant across all translucency levels. This is a natural outcome of the fact that the highlights, which originate from specular reflections on the surface, are unaffected in principle by scattering and refraction inside the surface. Thus, the translucency only affects the image of the non-specular part of the object. 
We then analyzed the images without specular highlights. The lower panels of Figure 3 show the images rendered with no specular reflection; quantitatively similar results were obtained when the apparent highlights determined by hand were not included in the calculation. It is clearly shown that the non-specular component tends to have lower contrast and becomes blurred in objects that appear translucent. For an object that appears even more translucent, or transparent, the image again becomes sharp and has high contrast, but it appears to have an opposite sign of contrast to the opaque object. This reversal in contrast is more clearly seen in the sphere. The reversal is also found in the joint histogram, which indicates a weak negative correlation. 
To examine the contrast and blur in the non-specular component images quantitatively, we calculated the RMS contrast for a range of spatial frequency bands and for the pixel luminance. Figure 4a shows the RMS contrast for different frequency bands (colored) and for pixels (gray), plotted as a function of the translucency rating. The circles show the data for objects with different sub-surface scattering depths, and the squares show the data for objects with different light transmittances. Thick lines represent the data averaged over every eight data points. The RMS contrast decreases as the object comes to be judged more translucent for ratings below about 2, and increases again as the object comes to be judged still more translucent, or transparent, beyond that level. The changes are more pronounced for high spatial frequency bands. These results indicate a tendency to judge an object as translucent when its non-specular image component was blurred or had a low contrast. However, objects with sharp, high-contrast non-specular components are not necessarily judged as opaque; they can be judged as highly translucent or transparent as well. According to the observations in Figure 3, a possible feature that may distinguish between the two cases is the contrast polarity of the non-specular image component, which appeared to be random or reversed in transparent objects. To quantify this tendency, we calculated the regression coefficients (i.e., the slope of the joint histogram in the insets of Figure 3) between the non-specular image component of objects with various translucencies and that of an opaque object. Figure 4b shows the results. At high spatial frequencies in particular, the regression coefficient decreases steadily with the rating and becomes zero or slightly negative at high ratings. 
This indicates that an object tends to be judged highly translucent, or transparent, when the non-specular image component is unpredictable, or partially contrast-reversed, with respect to that expected from the specular highlights. 
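The regression coefficient used here, the slope of the joint histogram, can be sketched as a least-squares fit (Python/NumPy; the function name and masking convention are ours):

```python
import numpy as np

def regression_slope(opaque, translucent, mask):
    """Least-squares slope of translucent-object pixel values regressed on
    the corresponding opaque-object values (the joint-histogram slope)."""
    x = opaque[mask] - opaque[mask].mean()
    y = translucent[mask] - translucent[mask].mean()
    return (x * y).sum() / (x * x).sum()
```

A slope near 1 indicates that the non-specular pattern is preserved; a slope near 0 or below indicates that it is unpredictable or contrast-reversed.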
Figure 4
 
(a) RMS contrast of the non-specular component of the image (see lower panels in Figure 3) plotted as a function of the mean translucency rating. Data for different spatial frequency bands are plotted in different colors; the data for pixel luminance are plotted in gray. The error bar is ±1 SEM. Thick lines are the traces of the average. The circles represent the data obtained for objects with various sub-surface scattering depths, and the squares represent the data for objects with various transparencies (see Figure 2). (b) Regression coefficient in the non-specular component image between opaque and translucent objects. Data for different spatial frequency bands are plotted in different colors; the data for pixel luminance are plotted in gray.
The above analyses confirm the observations made in Figure 3 and suggest that when the high spatial frequency contrast of the non-specular component image is reduced, or partially reversed, the object is perceived as more translucent or transparent. This seems to be sensible in terms of ecological optics. The directional illumination onto a bumpy opaque surface produces sharp and strong shading patterns on the surface. If the surface is translucent, the light that is reflected and scattered inside the surface makes shading patterns faint. Since the light can travel below the surface for a limited distance, this effect should be more profound for smaller structures, such as tiny surface bumps, resulting in the blur of the shading patterns. If the surface is highly translucent, or transparent, light penetrating the object from many directions in the environment emerges from the object after refraction. This would not necessarily blur the spatial structure, but it can make its phase unpredictable. As a result, the contrast can be reversed in some parts of the non-specular image component. 
Note that in the plot of Figure 4a, the ratings for objects with variable sub-surface scattering depths (circles) are relatively lower than for objects with variable transparencies (squares). In addition, the RMS contrast decreases with the rating in the former and increases in the latter. This dissociation implies the possibility that even among Japanese subjects, the translucency due to sub-surface scattering and the transparency due to light transmittance are assessed as two independent attributes. It is, however, unclear if they are orthogonal. It would be intriguing to investigate the dimensionality in the perception of transparent and translucent materials, as has been done for glossiness and roughness (e.g., Ferwerda, Pellacini, & Greenberg, 2001). 
Experiment 2
To further test how the perceived translucency is influenced by the contrast and blur of the non-specular image component, we manipulated the contrast of the non-specular image while leaving the highlights intact. Figure 5 shows examples of the stimuli, each of which was derived by decomposing the non-specular image component of the original opaque object (bottom left) into high and low spatial frequencies (SFs), then reducing or reversing their contrast independently. These examples demonstrate that manipulation of the non-specular contrast greatly alters the apparent translucency of objects. We examined this effect in a rating experiment. 
Figure 5
 
Examples of stimuli used in the second experiment. Each image was made by multiplying the non-specular component of an opaque object at the bottom left with variable factors from 1.0 to −0.6, separately for high (column) and low (row) spatial frequency bands. Here the images with factors 1.0 (original), 0.3 (contrast reduced), and −0.3 (contrast reversed) are shown.
Methods
We employed two images of opaque objects (Lion/Eucalyptus, Buddha/Urban) chosen from the 72 images used in the first experiment. For each, the non-specular component image was decomposed into low and high spatial frequency bands, using a low-pass filter that allowed 16 c/image (2.2 c/deg) and below to pass (the slope was defined by a cosine with a wavelength of 2 octaves) and a complementary high-pass filter. Each sub-band image was then multiplied independently by a factor of 1.0, 0.6, 0.3, 0.0, −0.3, or −0.6. According to the results of Experiment 1 (Figure 4a), at least in our stimuli, the contrast in sub-bands higher than 16 c/image was expected to affect the perceived translucency more profoundly than that in sub-bands lower than 16 c/image. There were 72 images in total: 2 objects × 6 contrasts for high SF × 6 contrasts for low SF. Seven subjects participated in the experiment. All of them had also participated in Experiment 1. The subjects' task was the same as in Experiment 1: they were asked to rate the apparent translucency of each image on a five-point scale. 
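The band decomposition and contrast scaling could be sketched as follows (Python/NumPy; the FFT-based raised-cosine filter and the choice to leave the mean luminance unscaled are our assumptions about implementation details not fully specified in the text):

```python
import numpy as np

def manipulate_nonspecular(nonspec, specular, low_gain, high_gain,
                           cutoff_cpi=16.0, slope_octaves=2.0):
    """Split a non-specular image into low and high spatial frequency bands
    with a cosine-slope low-pass filter, scale each band by an independent
    (possibly negative) gain, and add the specular component back."""
    h, w = nonspec.shape
    fy = np.fft.fftfreq(h) * h                       # cycles/image
    fx = np.fft.fftfreq(w) * w
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    # Raised-cosine transition centered on the cutoff, 2 octaves wide (log axis)
    t = np.clip(np.log2(np.maximum(r, 1e-9) / cutoff_cpi) / slope_octaves + 0.5,
                0.0, 1.0)
    lowpass = 0.5 * (1.0 + np.cos(np.pi * t))        # 1 below, 0 above the band
    ac = np.fft.fft2(nonspec)
    ac[0, 0] = 0.0                                   # leave the mean unscaled
    low = np.real(np.fft.ifft2(ac * lowpass))
    high = np.real(np.fft.ifft2(ac * (1.0 - lowpass)))
    return nonspec.mean() + low_gain * low + high_gain * high + specular
```

With both gains set to 1.0 the original image is recovered exactly; negative gains reverse the contrast of that band, as in the stimuli of Figure 5.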
Results
Figure 6 shows the mean translucency rating plotted as a function of the (signed) contrast of the non-specular component at a high spatial frequency (abscissa) and a low spatial frequency (ordinate). The upper panel shows the results for Buddha and the lower panel the results for Lion. The ratings for the original opaque version are shown in the lower left corner. For both spatial frequency bands, the translucency rating increases as the contrast of the non-specular component is decreased. The results support the idea, expected from Experiment 1, that the contrast and sharpness of the non-specular image component have a strong impact on the judgment of translucency. 
Figure 6
 
Translucency rating of objects as functions of the contrast of the non-specular component at high (abscissa) and low (ordinate) spatial frequencies. The upper panel shows the results for Buddha, and the lower panel shows the results for Lion.
The data also show that the contrast at high spatial frequencies has a slightly stronger effect than that at low spatial frequencies. For example, the object still appears almost opaque when the high spatial frequency band has a contrast of 1.0, even when the contrast at the low spatial frequency is reversed (−0.6; see the top left panel in Figure 5). Our subjects reported that these surfaces appeared sooty. On the other hand, when the contrast of just the high spatial frequency was reversed, the object appeared translucent (see the bottom right panel of Figure 5) even when the low spatial frequency band had a high positive contrast (1.0). Interestingly, these objects often appear to have an opaque body covered with a thick translucent or transparent layer, like porcelain. This is in contrast to the observation that objects in which the contrasts of both frequency bands are reversed appear to be made from a homogeneous translucent material (e.g., the top right panel of Figure 5). These variations in surface appearance, although not reflected in the simple ratings of translucency in our experiment, imply the possibility that the spatial congruency of patterns across different spatial frequency bands could be useful for the perception of multi-layered surfaces, such as human skin. However, it should also be noted that the relative contribution of different frequency bands can vary with object geometry (e.g., roughness) and illumination conditions (see also Fleming & Bülthoff, 2005; Pont & te Pas, 2006; te Pas & Pont, 2005). 
The control of apparent translucency by such simple image manipulation may be used as an easy alternative to the physical simulation widely used in 3D computer graphics (e.g., Jensen, 2001; Jensen et al., 2001). Khan, Reinhard, Fleming, and Bülthoff (2006) also proposed a more complex, and therefore more powerful, method for image-based manipulation of materials. 
Discussion
The present study investigated image features used by human observers to judge the translucency of 3D objects. The results of the two experiments showed that a glossy object tends to be perceived as translucent (or transparent) as the luminance pattern of the non-specular part of the image becomes blurred, faint, or contrast-reversed. We suggest that the relationship in contrast and sharpness between the specular highlights and the non-specular body serves as a robust cue for perceived translucency. 
In the real world, natural surfaces often involve specular reflections. For a wide range of surfaces, these specular reflections faithfully mirror the intensity, color, and pattern of the illumination independent of non-specular (diffuse) reflections. This implies that the relationship between highlights and the other (e.g., diffuse) surface regions can provide a general constraint for the estimation of material properties. For example, if a red surface is completely matte, it is not always clear if it is a red surface with white illumination or a white surface with red illumination, even though it is not impossible to distinguish between the two (Gilchrist & Jacobsen, 1984; Golz & MacLeod, 2002; Ruppertsberg & Bloj, 2007). However, if a surface is glossy, whitish highlights specify that it is a red surface under white illumination (Nishida et al., 2008). Similarly, it is not always clear if a surface with blurry shading is a translucent surface or an opaque surface under less directional or blurry illumination (see also Dror, Willsky, & Adelson, 2004; Hunter, 1975; Pont & te Pas, 2006; te Pas & Pont, 2005). Sharp highlights can specify that it is a translucent surface under directional illumination. In computer-vision studies, such principles are utilized to separate specular highlights and the remaining body scattering from a single or multiple images on the basis of color (Klinker, Shafer, & Kanade, 1987a, 1987b; Shafer, 1985) and spatial pattern (Nayar, Krishnan, Grossberg, & Raskar, 2006). 
Image statistics
How can the visual system compute the relationship between highlights and a shading pattern to estimate translucency? One possibility is that such a relationship is already evident as simple image statistics or texture metrics. This idea may work for the apparent translucency due to sub-surface scattering. 
The solid line in Figure 7a shows the one-dimensional intensity distribution of the image of a surface. The dashed red line shows the non-specular component. The sharp spike in the middle represents a highlight. As shown in Figure 7b, when the surface is translucent, the non-specular component is blurred (dashed red line), i.e., the high-spatial frequency contrast is reduced. On the other hand, highlights remain almost the same, i.e., they keep their strong positive contrast at high spatial frequencies. As a result, the overall image of high spatial frequency bands tends to have strong positive peaks that are sparsely distributed on a flattened baseline (see the right panel in Figure 7b). Thus, as the surface becomes translucent, the positive contrast at high spatial frequencies tends to be spatially sparser, and the negative contrast becomes weaker. Figures 7d and 7e demonstrate that such changes in the positive and negative contrasts affect the apparent translucency of a surface. Figure 7e is derived from Figure 7d via simple contrast manipulation. In this image, the positive contrast of the higher spatial frequency band is squared to make it sparse, and the negative contrast is multiplied by 0.1. Our subjects reported that such a surface appeared translucent, giving it a rating of 1 or 2 on the scale used in the experiment. 
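The manipulation applied to obtain Figure 7e can be sketched as follows (Python/NumPy; the function name and the normalization of the squared positive contrast by its peak, which keeps highlight amplitude comparable, are our assumptions):

```python
import numpy as np

def sparsify_highlights(high_band, neg_gain=0.1):
    """Square the positive contrast of a high-SF band (making it spatially
    sparse) and attenuate the negative contrast, as in Figure 7e."""
    out = np.empty_like(high_band)
    pos = high_band > 0
    peak = high_band[pos].max() if pos.any() else 1.0
    out[pos] = high_band[pos] ** 2 / peak            # squared, same peak value
    out[~pos] = neg_gain * high_band[~pos]           # negative contrast x 0.1
    return out
```

Adding the manipulated band back to the low spatial frequencies and mean luminance yields an image with sparse positive peaks on a flattened baseline, the histogram shape shown in the right panel of Figure 7b.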
Figure 7
 
Schematic illustrations of 1D intensity distributions of the images of (a) opaque and (b, c) translucent surfaces. The solid lines represent the intensity of the overall image including highlights; the red lines represent the intensity of the non-specular component. The distribution in pixels is shown on the left, and that in a mid/high spatial frequency sub-band is shown in the middle. The typical shape of the sub-band histogram is shown on the right. (d) An image of a glossy opaque surface. (e) For a high spatial frequency band, the positive contrast is squared to make it spatially sparse, and the negative contrast is greatly reduced (×0.1). The surface appears translucent. The right panels in (d) and (e) show the histograms for the high spatial frequency band.
It should be noted, however, that these low-level image statistics cannot account for the perception of highly translucent surfaces that involve contrast reversals in the non-specular component (Figure 7c). In this case, the order of the data in the histogram is partially reversed. It is impossible to capture such changes with histogram moment statistics. We also looked for other “simple” image statistics that may account for the wide range of the perceived translucency, but we failed to find any. 
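The blindness of moment statistics to such reversals is easy to demonstrate: spatially rearranging a sub-band leaves every histogram moment unchanged, so no moment can register that the shading's contrast polarity at a given location has flipped. A toy one-dimensional illustration with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
shading = np.sort(rng.normal(size=100))   # a monotonic "shading gradient"
flipped = shading[::-1]                   # spatially reversed: the same values,
                                          # opposite contrast polarity everywhere

# Every histogram moment is identical, because moments ignore spatial order.
for moment in (np.mean, np.var,
               lambda x: ((x - x.mean()) ** 3).mean()):   # third moment (skew)
    assert np.isclose(moment(shading), moment(flipped))

# Yet the contrast polarity at any fixed location (e.g., near a highlight
# at index 90) has reversed.
assert shading[90] != flipped[90]
```

Any statistic computed from the histogram alone inherits this limitation; capturing a partial contrast reversal requires information about where the values sit relative to the highlights.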
Layered representations
Another possibility is that the visual system segregates highlights from shading patterns and encodes their relationship to estimate surface translucency. It has been suggested that the analysis of layered image representations (e.g., intrinsic images) is a powerful strategy for the estimation of surface properties (Anderson & Winawer, 2008; Barrow & Tenenbaum, 1978; Land & McCann, 1971; Tan & Ikeuchi, 2005; Todd, Norman, & Mingolla, 2004). Simple image statistics, such as luminance skewness (Motoyoshi et al., 2007), and other types of image information, such as color (Klinker et al., 1987a, 1987b; Nishida et al., 2008; Shafer, 1985), motion (Hartung & Kersten, 2002), binocular disparity (Fleming, Torralba, & Adelson, 2004), and border ownership (Anderson & Winawer, 2008), may play important roles in this segregation. There is also a possibility that segregation requires deeper computation, including the reconstruction of a 3D shape (Anderson & Kim, 2009; Todd et al., 2004). It is still unclear whether, or how, the visual system utilizes layered representations to estimate surface properties. These are particularly intriguing problems that remain to be solved in future studies. 
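As a concrete illustration of one such segregation cue, color: under Shafer's dichromatic model with a white light source, subtracting the per-pixel minimum over color channels cancels the additive white specular term and yields a "specular-free" image whose spatial pattern follows the diffuse shading alone. The sketch below uses synthetic data; practical separation methods (e.g., Tan & Ikeuchi, 2005) are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
body = np.array([0.8, 0.3, 0.2])          # reddish body (diffuse) color
shading = rng.random((32, 32, 1))         # diffuse shading pattern
specular = np.zeros((32, 32, 1))
specular[10:14, 10:14] = 0.5              # a small white highlight
image = shading * body + specular         # dichromatic composition

# For a white illuminant the specular term is equal in all channels, so the
# per-pixel channel minimum contains it entirely; subtracting it leaves
# shading * (body - body.min()): specular-free, with diffuse geometry intact.
specular_free = image - image.min(axis=2, keepdims=True)

# The result is identical to what the diffuse layer alone would give.
diffuse_only = shading * body
reference = diffuse_only - diffuse_only.min(axis=2, keepdims=True)
assert np.allclose(specular_free, reference)
```

The separation works here only because the highlight shares the illuminant color; for colored illumination or inter-reflections the channel-minimum shortcut breaks down, which is why the cited methods estimate the illuminant chromaticity explicitly.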
Supplementary Materials
Supplementary Material 
Acknowledgments
The author thanks Shin'ya Nishida and the anonymous reviewers for valuable comments on the draft. A related study was presented at the Second Symposium on Applied Perception in Graphics and Visualization (Motoyoshi, Nishida, & Adelson, 2005). 
Commercial relationships: none. 
Corresponding author: Dr. Isamu Motoyoshi. 
Email: motoyoshi@apollo3.brl.ntt.co.jp. 
Address: 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa, Japan. 
References
Adelson E. H. (1999). Lightness perception and lightness illusions. In Gazzaniga M. S. (Ed.), The new cognitive neurosciences (pp. 339–351). Cambridge, MA: MIT Press.
Adelson E. H. (2001). On seeing stuff: The perception of materials by humans and machines. In Rogowitz B. E. Pappas T. N. (Eds.), Proceedings of the SPIE. Volume 4299: Human Vision and Electronic Imaging VI (pp. 1–12). Bellingham, WA: SPIE.
Anderson B. L. Kim J. (2009). Image statistics do not explain the perception of gloss and lightness. Journal of Vision, 9, (11):10, 1–17, http://www.journalofvision.org/content/9/11/10, doi:10.1167/9.11.10. [PubMed] [Article] [CrossRef] [PubMed]
Anderson B. L. Winawer J. (2008). Layered image representations and the computation of surface lightness. Journal of Vision, 8, (7):18, 1–22, http://www.journalofvision.org/content/8/7/18, doi:10.1167/8.7.18. [PubMed] [Article] [CrossRef] [PubMed]
Barrow H. G. Tenenbaum J. (1978). Recovering intrinsic scene characteristics from images. In Hanson A. R. Riseman E. M. (Eds.), Computer vision systems (pp. 3–26). New York: Academic Press.
Beck J. Prazdny K. Ivry R. (1984). The perception of transparency with achromatic colors. Perception & Psychophysics, 35, 407–422. [CrossRef] [PubMed]
Beck J. Prazdny S. (1981). Highlights and the perception of glossiness. Perception & Psychophysics, 30, 407–410. [CrossRef] [PubMed]
Blake A. Bülthoff H. (1990). Does the brain know the physics of specular reflection? Nature, 343, 165–168. [CrossRef] [PubMed]
Bloj M. G. Kersten D. Hurlbert A. C. (1999). Perception of three-dimensional shape influences colour perception through mutual illumination. Nature, 402, 877–879. [PubMed]
Boyaci H. Maloney L. T. Hersh S. (2003). The effect of perceived surface orientation on perceived surface albedo in binocularly viewed scenes. Journal of Vision, 3, (8):2, 541–553, http://www.journalofvision.org/content/3/8/2, doi:10.1167/3.8.2. [PubMed] [Article] [CrossRef]
Brainard D. H. (1998). Color constancy in the nearly natural image: 2. Achromatic loci. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 15, 307–325. [CrossRef] [PubMed]
Debevec P. E. (1998). Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. Proceedings of ACM SIGGRAPH 1998 (pp. 189–198).
Dror R. O. Willsky A. S. Adelson E. H. (2004). Statistical characterization of real-world illumination. Journal of Vision, 4, (9):11, 821–837, http://www.journalofvision.org/content/4/9/11, doi:10.1167/4.9.11. [PubMed] [Article] [CrossRef]
Ferwerda J. A. Pellacini F. Greenberg D. P. (2001). A psychophysically based model of surface gloss perception. Proceedings of SPIE Human Vision and Electronic Imaging, 4299, 291–301.
Fleming R. W. Bülthoff H. H. (2005). Low-level image cues in the perception of translucent materials. ACM Transactions on Applied Perception, 2, 346–382. [CrossRef]
Fleming R. W. Dror R. O. Adelson E. H. (2003). Real-world illumination and the perception of surface reflectance properties. Journal of Vision, 3, (5):3, 347–368, http://www.journalofvision.org/content/3/5/3, doi:10.1167/3.5.3. [PubMed] [Article] [CrossRef]
Fleming R. W. Torralba A. Adelson E. H. (2004). Specular reflections and the perception of shape. Journal of Vision, 4, (9):10, 798–820, http://www.journalofvision.org/content/4/9/10, doi:10.1167/4.9.10. [PubMed] [Article] [CrossRef]
Fulvio J. M. Singh M. Maloney L. T. (2006). Combining achromatic and chromatic cues to transparency. Journal of Vision, 6, (8):1, 760–776, http://www.journalofvision.org/content/6/8/1, doi:10.1167/6.8.1. [PubMed] [Article] [CrossRef] [PubMed]
Gilchrist A. Jacobsen A. (1984). Perception of lightness and illumination in a world of one reflectance. Perception, 13, 5–19. [CrossRef] [PubMed]
Gilchrist A. L. (2006). Seeing black and white. Oxford, UK: Oxford University Press.
Golz J. MacLeod D. I. (2002). Influence of scene statistics on colour constancy. Nature, 415, 637–640. [CrossRef] [PubMed]
Hartung B. Kersten D. (2002). Distinguishing shiny from matte [Abstract]. Journal of Vision, 2, (7):551, 551a, http://www.journalofvision.org/content/2/7/551, doi:10.1167/2.7.551. [CrossRef]
Heinemann E. G. (1955). Simultaneous brightness induction as a function of inducing and test-field luminances. Journal of Experimental Psychology, 50, 89–96. [CrossRef] [PubMed]
Ho Y. X. Landy M. S. Maloney L. T. (2006). How direction of illumination affects visually perceived surface roughness. Journal of Vision, 6, (5):8, 634–648, http://www.journalofvision.org/content/6/5/8, doi:10.1167/6.5.8. [PubMed] [Article] [CrossRef] [PubMed]
Ho Y. X. Landy M. S. Maloney L. T. (2008). Conjoint measurement of gloss and surface texture. Psychological Science, 19, 196–204. [CrossRef] [PubMed]
Hunter R. S. (1975). The measurement of appearance. New York: Wiley-Interscience.
Jameson D. Hurvich L. M. (1961). Complexities of perceived brightness. Science, 133, 174–179. [CrossRef] [PubMed]
Jensen H. W. (2001). Realistic image synthesis using photon mapping. Massachusetts, USA: A. K. Peters.
Jensen H. W. Marschner S. R. Levoy M. Hanrahan P. (2001). A practical model for subsurface light transport. Proceedings of ACM SIGGRAPH 2001 (pp. 511–518).
Khan E. A. Reinhard E. Fleming R. W. Bülthoff H. H. (2006). Image-based material editing. ACM Transactions on Graphics, 25, 654–663. [CrossRef]
Khang B. G. Zaidi Q. (2002). Accuracy of color scission for spectral transparencies. Journal of Vision, 2, (6):3, 451–466, http://www.journalofvision.org/content/2/6/3, doi:10.1167/2.6.3. [PubMed] [Article] [CrossRef]
Klinker G. J. Shafer S. A. Kanade T. (1987a). Measurement of gloss from color images. Inter-society Color Council ISCC 87 Conference on Appearance ISCC (pp. 9–13).
Klinker G. J. Shafer S. A. Kanade T. (1987b). Using a color reflection model to separate highlights from object color. In Brady J. M. Rosenfeld A. (Eds.), Proceedings of the First International Conference on Computer Vision ICCV (pp. 145–150). London: Computer Society Press.
Land E. H. McCann J. J. (1971). Lightness and retinex theory. Journal of the Optical Society of America, 61, 1–11. [CrossRef] [PubMed]
Metelli F. (1974). The perception of transparency. Scientific American, 230, 90–98. [CrossRef] [PubMed]
Motoyoshi I. Nishida S. Adelson E. H. (2005). Luminance re-mapping for the control of apparent material. Proceedings of APGV 2005 (p. 165).
Motoyoshi I. Nishida S. Sharan L. Adelson E. H. (2007). Image statistics and the perception of surface qualities. Nature, 447, 206–209. [CrossRef] [PubMed]
Nayar S. K. Krishnan G. D. Grossberg M. D. Raskar R. (2006). Fast separation of direct and global components of a scene using high frequency illumination. ACM Transactions on Graphics, 25, 935–944. [CrossRef]
Nishida S. Motoyoshi I. Nakano L. Li Y. Sharan L. Adelson E. H. (2008). Do colored highlights look like highlights? [Abstract]. Journal of Vision, 8, (6):339, 339a, http://www.journalofvision.org/content/8/6/339, doi:10.1167/8.6.339. [CrossRef]
Nishida S. Shinya M. (1998). Use of image-based information in judgments of surface-reflectance properties. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 15, 2951–2965. [CrossRef] [PubMed]
Pont S. C. te Pas S. F. (2006). Material-illumination ambiguities and the perception of solid objects. Perception, 35, 1331–1350. [CrossRef] [PubMed]
Ruppertsberg A. I. Bloj M. (2007). Reflecting on a room of one reflectance. Journal of Vision, 7, (13):12, 1–13, http://www.journalofvision.org/content/7/13/12, doi:10.1167/7.13.12. [PubMed] [Article] [CrossRef] [PubMed]
Shafer S. A. (1985). Using color to separate reflection components. Color Research and Applications, 10, 210–218. [CrossRef]
Sharan L. Li Y. Motoyoshi I. Nishida S. Adelson E. H. (2008). Image statistics for surface reflectance perception. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 25, 846–865. [CrossRef] [PubMed]
Singh M. Anderson B. L. (2002a). Perceptual assignment of opacity to translucent surfaces: The role of image blur. Perception, 31, 531–552. [CrossRef]
Singh M. Anderson B. L. (2002b). Toward a perceptual theory of transparency. Psychological Review, 109, 492–519. [CrossRef]
Tan R. T. Ikeuchi K. (2005). Separating reflection components of textured surfaces from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 178–193. [CrossRef] [PubMed]
te Pas S. F. Pont S. C. (2005). Comparison of material and illumination discrimination performance for real rough, real smooth and computer generated smooth spheres. In Proceedings APGV 2005 (pp. 75–81).
Todd J. T. Norman J. F. Mingolla E. (2004). Lightness constancy in the presence of specular highlights. Psychological Science, 15, 33–39. [CrossRef] [PubMed]
Wallach H. (1948). Brightness constancy and the nature of achromatic colors. Journal of Experimental Psychology, 38, 310–324. [CrossRef] [PubMed]
Figure 1
 
Examples of stimuli used in the experiment.
Figure 2
 
Mean translucency ratings of objects as a function of the simulated translucency parameters. The lower abscissa represents the simulated depth of sub-surface scattering, and the upper abscissa represents the simulated amount of transparency. Individual lines show the results for each object and illumination. The error bars are ±1 SEM across subjects.
Figure 3
 
Images of objects with different translucencies. The number denotes the mean translucency rating. The leftmost image shows an opaque object. The upper images are the original images, and the lower ones show images generated without specular reflection. The insets show the joint luminance histograms between an opaque object (abscissa) and the corresponding translucent object (ordinate).
Figure 4
 
(a) RMS contrast of the non-specular component of the image (see lower panels in Figure 3) plotted as a function of the mean translucency rating. Data for different spatial frequency bands are plotted in different colors; the data for pixel luminance are plotted in gray. The error bar is ±1 SEM. Thick lines are the traces of the average. The circles represent the data obtained for objects with various sub-surface scattering depths, and the squares represent the data for objects with various transparencies (see Figure 2). (b) Regression coefficient in the non-specular component image between opaque and translucent objects. Data for different spatial frequency bands are plotted in different colors; the data for pixel luminance are plotted in gray.
Figure 5
 
Examples of stimuli used in the second experiment. Each image was made by multiplying the non-specular component of an opaque object at the bottom left with variable factors from 1.0 to −0.6, separately for high (column) and low (row) spatial frequency bands. Here the images with factors 1.0 (original), 0.3 (contrast reduced), and −0.3 (contrast reversed) are shown.
Figure 6
 
Translucency rating of objects as functions of the contrast of the non-specular component at high (abscissa) and low (ordinate) spatial frequencies. The upper panel shows the results for Buddha, and the lower panel shows the results for Lion.