Research Article  |   November 2010
Effects of head motion and stereo viewing on perceived glossiness
Journal of Vision November 2010, Vol.10, 15. doi:10.1167/10.9.15
Yuichi Sakano, Hiroshi Ando; Effects of head motion and stereo viewing on perceived glossiness. Journal of Vision 2010;10(9):15. doi:10.1167/10.9.15.
Abstract

Many previous studies of glossiness perception have focused on glossiness derived from a single stimulus image. However, the essence of glossiness perception should be the estimation of surface reflectance properties, which can be computed from the luminances obtained at multiple viewpoints. Thus, the human visual system could also compute glossiness from the retinal images at different eye locations that arise from the observer's head motion and from stereo viewing. We found that perceived glossiness is strongly enhanced by temporal changes of the retinal image caused by the observer's head motion and by image differences between the two eyes in stereo viewing. These findings suggest that the human visual system uses rational methods to perceive surface glossiness. Our data also suggest that the combination of multiple retinal images plays an important role in glossiness perception, just as it is assumed to do in 3D shape perception (i.e., 3D shape perception from binocular disparity and from motion parallax).

Introduction
We are usually surrounded by many objects, which may appear glossy or matte, such as objects made of metal, plastic, or paper. How can we perceive the glossiness of such objects? 
Many previous studies have focused on the glossiness perceived from a single stimulus image (Anderson & Kim, 2009; Beck & Prazdny, 1981; Berzhanskaya, Swaminathan, Beck, & Mingolla, 2005; Ferwerda, Pellacini, & Greenberg, 2001; Fleming, Dror, & Adelson, 2003; Kim & Anderson, 2010; Motoyoshi, Nishida, Sharan, & Adelson, 2007; Nagata, Okajima, & Osumi, 2007; Pellacini, Ferwerda, & Greenberg, 2000). However, relying only on a single-image-based cue (i.e., a monocular static cue) to glossiness can lead to misestimation of glossiness. For instance, Motoyoshi et al. (2007) claimed that for glossiness perception, the human visual system may exploit the skewness of the luminance histogram, a simple statistical measure derived from a single image. However, such a statistical measure does not always correlate with the actual surface reflectance properties (Anderson & Kim, 2009). Similarly, although it is well known that a specular highlight is an important cue to glossiness (Beck & Prazdny, 1981; Berzhanskaya et al., 2005), a matte surface with a highlight-like texture in a single image appears glossy (Hartung & Kersten, 2002). Therefore, a cue available only in a single image can produce a wrong estimate of glossiness. Nevertheless, we do not usually face such problems in our daily lives. How, then, does the human visual system overcome such misestimation problems? 
Here we find that the combination of multiple retinal images at different eye locations can provide information for estimating glossiness, and we demonstrate that the human visual system uses this information for glossiness perception. Thus, even if glossiness is misestimated from a single-image-based cue, integrating that cue with one based on multiple images should minimize the effects of the misestimation. 
First of all, glossiness can be a perception that corresponds to the estimation of the surface reflectance properties, or more specifically, how the reflectance of the surface changes with the reflection direction. That is, if the surface reflectance does not change with the reflection direction, then the surface can appear to be matte (i.e., no glossiness). In contrast, if the reflectance changes strikingly, the surface can appear to be very glossy. 
Given this characteristic, the human visual system could compare multiple retinal images obtained at different eye locations to perceive surface glossiness, since eyes at different locations generally receive different rays, reflected in different directions from the surface. Theoretically, multiple retinal images make it possible to disambiguate whether or not a surface has a uniform reflectance across reflection directions, which would correspond to a completely matte surface. That is, if there is a luminance difference between two viewpoints, the point on the surface has some variation in reflectance (i.e., reflectance is not uniform) across reflection directions (Figure 1). Moreover, in this situation, under certain assumptions it is theoretically possible to establish a lower bound on the magnitude of the reflectance change across reflection directions (see 1 for details). On the other hand, if there are no luminance differences between the two viewpoints at any point on a surface that has a variety of surface orientations, the surface could have a uniform reflectance across reflection directions (see 1 for details). 
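The two-viewpoint logic above can be sketched in a few lines of Python. The function name and the relative-variation measure are illustrative assumptions (not the appendix derivation): assuming the illumination reaching the surface point is identical for both viewpoints, the luminance ratio equals the reflectance ratio, so any luminance difference bounds the reflectance variation from below.

```python
def min_reflectance_variation(lum_view1, lum_view2):
    """Hypothetical lower bound on how much a surface point's reflectance
    varies with reflection direction, given its luminance seen from two
    viewpoints. Assumes identical illumination of the point for both
    viewpoints, so the luminance ratio equals the reflectance ratio.
    Returns 0 for a matte-consistent point (equal luminances)."""
    hi, lo = max(lum_view1, lum_view2), min(lum_view1, lum_view2)
    if hi == 0.0:
        return 0.0
    return (hi - lo) / hi  # relative variation, in [0, 1)

print(min_reflectance_variation(0.6, 0.6))  # matte-consistent: 0.0
```

A nonzero return value rules out a completely matte surface, which is exactly the disambiguation described in the text.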
Figure 1
 
Reflectance, head motion, and stereo viewing. (A) Head motion and reflectances of (top) a glossy surface and (bottom) a matte surface. When the observer's eye is looking in the direction of the specular reflection of a glossy surface, the luminance of the surface point is high. Once the eye moves out of the specular direction, luminance decreases immediately. In contrast, when viewing a matte surface, the luminance of the surface is unaffected by the location of the eye. Thus, it is theoretically possible to distinguish a glossy surface from a matte one. (B) Stereo viewing and reflectances of (top) a glossy surface and (bottom) a matte surface. The luminance of the surface for the eye looking in the direction of the specular reflection of the glossy surface is much higher than that for the other eye not looking in the specular direction. In contrast, when viewing a matte surface, the luminance of the surface for one eye is equal to that for the other eye. Thus, it is theoretically possible to distinguish the glossy surface from the matte one.
In the real world, such different eye locations can be caused by both dynamic changes in the position of the observer's head and stereo viewing. Dynamic changes in head position cause temporal changes in the retinal image, including motion parallax and temporal changes in luminance (Figure 1A). We call these temporal image changes temporal cues. Similarly, when viewing a glossy object binocularly, there are image differences between the two eyes, including binocular disparity and difference in luminance (Figure 1B). We call these image differences binocular cues. 
Then, does the human visual system use the temporal and binocular cues for glossiness perception? In this study, we examined whether these cues enhance the perceived glossiness of a stimulus whose glossiness could be underestimated in their absence (i.e., based only on monocular static cues). We used a stimulus surface composed of surface facets with different orientations. On each surface facet, luminance was spatially uniform even when the facet reflected light not only diffusely but also specularly. Therefore, we assumed that in the absence of the temporal and binocular cues, the glossiness of the surface might be underestimated, because a specular highlight typically has a luminance gradient. On the other hand, if the visual system uses the temporal and binocular cues, then when these cues were available, the magnitude of the reflectance change across reflection directions would be estimated to exceed a certain value. As a result, perceived glossiness was expected to be higher when these cues were available than when they were not. 
General methods
Subjects
One author and three subjects naive to the purpose of the experiments participated in this study. All had normal or corrected-to-normal visual acuity. 
Apparatus
All experiments were conducted in a darkened room. The stimuli were presented on a 19-inch CRT monitor (39.5 cm × 29.6 cm, Mitsubishi Diamondtron M2 RDF223G) refreshed at 120 Hz and viewed through stereo shutter goggles (Crystal Eyes 3, StereoGraphics; Figure 2A). To minimize cross talk between the images presented to the two eyes, we used only the red phosphor of the monitor, which has a comparatively fast decay. The minimum and maximum luminances measured through the stereo goggles were less than 0.001 cd/m² and 1.99 cd/m², respectively. Head motion was tracked during the experiments using an infrared (IR) position sensor (Optotrak 3020 System, Northern Digital). 
Figure 2
 
Methods of the experiments. (A) The experimental apparatus. The IR emitter fixed to the stereo goggles was tracked by the IR position sensor during the experiments so that the stimulus presented on the CRT could be changed depending on the subject's head position in real time. (B) An example of the stimulus used in the experiments (here reduced in size). (C) The dynamic and static stimuli used in Experiment 1. The dynamic stimulus changed temporally in luminance and in 2D shape on the monitor depending on the observer's head position, simulating a stationary surface in a 3D space. The static stimulus did not change over time on the 2D computer monitor. (D) The stereo and non-stereo stimuli used in Experiment 2. The non-stereo stimulus presented to the left and right eyes was what should be seen from the midpoint of the eyes.
Stimuli
The stimulus was a computer-generated display that was composed of a glossy and bumpy surface, a grid pattern representing the ground, and a small square moving back-and-forth laterally at the bottom of the display, which guided the subject's head motion (Figure 2B; Supplementary Figures 1 and 2). The position of the square was modulated sinusoidally over time with a frequency of 0.5 Hz and an end-to-end movement distance of 19.8 cm. 
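As a minimal illustration, the guide square's lateral trajectory can be written as a sinusoid. The function name and the center-relative coordinate convention are assumptions; the text specifies only the 0.5-Hz frequency and the 19.8-cm end-to-end excursion.

```python
import math

def guide_square_x(t, end_to_end_cm=19.8, freq_hz=0.5):
    """Lateral position (cm, relative to the screen center) of the guide
    square at time t (s): sinusoidal motion at freq_hz whose peak-to-peak
    excursion equals end_to_end_cm, so the amplitude is half of that."""
    return (end_to_end_cm / 2.0) * math.sin(2.0 * math.pi * freq_hz * t)

# One full back-and-forth cycle takes 1 / 0.5 Hz = 2 s.
print(guide_square_x(0.0), guide_square_x(0.5))  # 0.0 at center, 9.9 at one end
```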
The bumpy surface was generated as follows. First, a flat square (19.8 cm × 19.8 cm) in the frontal plane was divided into 1600 small squares. Second, each small square was divided into two triangles. Third, each vertex of the triangles was assigned a random pedestal depth ranging between 1.23 mm nearer and 1.23 mm farther from the subject. Finally, the whole bumpy surface was slanted 45 deg about its middle horizontal axis, with its top away from the subject. 
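The mesh construction above can be sketched as follows. The function name, the choice of diagonal for splitting each square, and the use of one shared random depth per grid vertex are our assumptions; the final 45-deg slant is noted but omitted.

```python
import random

def make_bumpy_surface(size_cm=19.8, divisions=40, max_depth_cm=0.123, seed=0):
    """Build the pre-slant stimulus mesh: a size_cm x size_cm frontal square
    split into divisions x divisions small squares (1600 for divisions=40),
    each split into two triangles, with every grid vertex assigned a random
    depth in [-max_depth_cm, +max_depth_cm] (1.23 mm = 0.123 cm).
    The 45-deg slant about the middle horizontal axis is not applied here."""
    rng = random.Random(seed)
    step = size_cm / divisions
    # One shared random depth per grid vertex, so neighboring facets meet.
    depth = [[rng.uniform(-max_depth_cm, max_depth_cm)
              for _ in range(divisions + 1)] for _ in range(divisions + 1)]

    def vertex(i, j):
        return (j * step, i * step, depth[i][j])

    triangles = []
    for i in range(divisions):
        for j in range(divisions):
            # Split each small square along one diagonal into two triangles.
            triangles.append((vertex(i, j), vertex(i, j + 1), vertex(i + 1, j)))
            triangles.append((vertex(i + 1, j), vertex(i, j + 1),
                              vertex(i + 1, j + 1)))
    return triangles

mesh = make_bumpy_surface()
print(len(mesh))  # 1600 squares x 2 = 3200 triangular facets
```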
The luminance intensity of each surface facet (i.e., triangle) was determined based on the Phong lighting model (Nishida & Shinya, 1998; Phong, 1975): 
I = I_in R_d cos θ + I_in R_s cos^n α + I_a R_a,   (1)
where I is the intensity of the respective facet, I_in is the intensity of incident light (1.0), I_a is the intensity of ambient light (1.0), R_d is the diffuse reflectance (0.4), R_s is the specular reflectance, R_a is the ambient reflectance (0.2), n is the index determining the angular extent of the specular component of the reflection (128), θ is the angle of incidence, and α is the angle between the direction of reflection and the direction to the viewpoint. All corresponding Phong parameters had the same values among all facets across the whole surface. In other words, the whole surface was homogeneous. Note that the luminance intensity was uniform across each surface facet. The surface was simulated to be illuminated by parallel light from above. 
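A minimal sketch of Equation 1 follows. The vector handling and the mirror-reflection computation are our assumptions (the paper specifies only the scalar model and the parameter values); it also demonstrates the two-viewpoint logic of Figure 1.

```python
import math

def phong_intensity(normal, light, view, R_d=0.4, R_s=0.08, R_a=0.2, n=128,
                    I_in=1.0, I_a=1.0):
    """Facet luminance under Equation 1. normal, light, and view are unit
    3-vectors: the surface normal, the direction to the light source, and
    the direction to the viewpoint."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cos_theta = max(dot(normal, light), 0.0)           # angle of incidence
    # Mirror-reflect the light direction about the surface normal.
    refl = tuple(2.0 * cos_theta * nc - lc for nc, lc in zip(normal, light))
    cos_alpha = max(dot(refl, view), 0.0)              # angle to the viewpoint
    return I_in * R_d * cos_theta + I_in * R_s * cos_alpha ** n + I_a * R_a

normal = light = (0.0, 0.0, 1.0)                # light from straight ahead
view_on = (0.0, 0.0, 1.0)                       # eye in the specular direction
a = math.radians(10.0)
view_off = (math.sin(a), 0.0, math.cos(a))      # eye 10 deg off it

# A matte facet (R_s = 0) looks the same from both eye positions, while a
# glossy facet is brighter from the specular direction (cf. Figure 1).
print(phong_intensity(normal, light, view_on, R_s=0.0) ==
      phong_intensity(normal, light, view_off, R_s=0.0))   # True
print(phong_intensity(normal, light, view_on) >
      phong_intensity(normal, light, view_off))            # True
```

With n = 128, the specular term falls off steeply as the eye leaves the specular direction, which is what makes the luminance difference between two eye positions informative about gloss.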
Head motion and viewing conditions
In the conditions with head motion, the subjects moved their heads laterally so that the head stayed just in front of the small square that moved back and forth; in the other conditions, they held their heads still in front of the center of the monitor (Figure 2C). 
In all the experiments, the subjects viewed the stimulus through the stereo goggles. In Experiment 1 only, one of the eyes was occluded by a piece of opaque black paper attached to the front face of the goggles so that the subjects viewed the stimulus monocularly (Figure 2C). 
Experiment 1: Effects of retinal image changes due to head motion
In Experiment 1, we examined whether temporal changes in the retinal image caused by the lateral head motion of the observer enhance perceived glossiness. 
Procedure and tasks
The dynamic and static stimuli were presented alternately for 3.87 s each, with a blank interval of 0.13 s between them (Figure 2C; Supplementary Figure 1). During the blank interval, only the ground grid pattern and the moving small square were presented. The subjects moved their heads laterally back-and-forth while observing the dynamic stimulus. They stayed still in front of the monitor when observing the static stimulus. 
To compare the perceived glossiness of the two stimuli, we used two tasks in different trials. First, the subject reported the perceived glossiness of the static stimulus by giving a number, on the assumption that the glossiness of the dynamic stimulus was ten, where all the corresponding Phong parameters had the same values for the static and dynamic surfaces (rating task; magnitude estimation method). Zero meant no perceived glossiness (completely matte). Responses were not limited to the range from 0 to 10 but could range from 0 to infinity. The value of R_s was 0.08 for the two stimuli. The combination of this R_s value and the n value used corresponds to 8.00 gu of specular gloss at 60° (ASTM D523-08, 2008) after the diffuse correction, if a surface with R_s of 1.0, R_d of 0.0, and n of 128 is used as the standard. 
Second, the subject pressed keys to adjust the specular reflectance (R_s of the Phong model) of the static stimulus so that the glossiness of the static and dynamic stimuli appeared equal (matching task; method of adjustment). The value of the specular reflectance of the dynamic stimulus was 0.08, while that of the static stimulus was randomly assigned at the beginning of each trial and was then manipulated by the subject. Each task was repeated ten times in each experiment. 
Results
The glossiness under the static condition was systematically underestimated compared to the dynamic condition (Figure 3; t(9) > 8.30, p < 0.0001 in each subject). In addition, the static stimulus required higher specular reflectance than the dynamic stimulus for the same perceived glossiness (Figure 3; t(9) > 5.32, p < 0.0005 in each subject). 
Figure 3
 
Results of all the experiments. (A) Results averaged across subjects. The results of the rating tasks are shown in the top panel. The dashed line indicates the glossiness of the dynamic (Experiments 1 and 3) and stereo (Experiments 2 and 4) stimuli, which equals ten. Results of the matching tasks are shown in the bottom panel. The dashed line indicates the equality level of the specular reflectance of the dynamic and static stimuli in Experiments 1 and 3, and of the stereo and non-stereo stimuli in Experiments 2 and 4. Error bars indicate ±1 SEM. (B) Results of the individual subjects. Double and single asterisks indicate statistically significant differences between the data (bars) and the levels of the dashed lines at p < 0.01 and p < 0.05, respectively (one-tailed t-test). Of the 56 bars, 54 differed significantly from the dashed lines.
Control experiment
In this experiment, the dynamic and static conditions differed not only in the stimulus (dynamic or static) but also in the head motion condition (head motion or no head motion). This raises the possibility that the difference in perceived glossiness between the two stimuli was attributable to the difference in head motion. To dissociate the effect of temporal changes of the retinal image due to head motion from the effect of head motion itself, in the control experiment the subjects moved their heads when observing the static stimulus as well as when observing the dynamic stimulus. The results were very similar to those of the main experiment (Figure 3). The descriptions of the results remain true for the rating task (t(9) > 10.9, p < 0.0001 in each subject) and the matching task (t(9) > 2.38, p < 0.05 in each subject). 
Summary
The results of the main and control experiments suggest that dynamic changes of the retinal image caused by lateral head motion of the observer, but not the head motion itself, enhance perceived glossiness. 
Experiment 2: Effects of stereo viewing
In Experiment 2, we examined whether image differences between the two eyes in stereo viewing enhance perceived glossiness. 
Procedure and tasks
The stereo stimulus and the non-stereo stimulus were presented alternately (Figure 2D; Supplementary Figure 2). The subjects stayed still in front of the monitor during the experiment. In the rating trials, the subjects reported the perceived glossiness of the non-stereo stimulus by giving a number, on the assumption that the glossiness of the stereo stimulus was ten, where the corresponding Phong parameters had the same values for the stereo and non-stereo surfaces (rating task; magnitude estimation method). The value of the specular reflectance was 0.08 for the two stimuli. In the matching trials, the subject pressed keys to adjust the specular reflectance of the non-stereo stimulus so that the glossiness of the non-stereo and stereo stimuli appeared equal. The value of the specular reflectance of the stereo stimulus was 0.08, while that of the non-stereo stimulus was randomly assigned at the beginning of each trial and was then manipulated by the subject. 
Results
The glossiness under the non-stereo condition was systematically underestimated compared to the stereo condition (Figure 3; t(9) > 6.70, p < 0.0001 in each subject). In addition, the non-stereo stimulus required higher specular reflectance than the stereo stimulus for the same perceived glossiness (Figure 3; t(9) > 2.58, p < 0.05 in each subject). 
Additional experiment
The results were very similar even when the subjects moved their heads during the experiment, in which the stereo and non-stereo stimuli did not change over time on the 2D monitor (Figure 3). The descriptions of the results remain true for the rating task (t(9) > 8.50, p < 0.0001 in each subject) and the matching task (t(9) > 2.50, p < 0.05 in each subject). 
Summary
The results of the main and additional experiments suggest that image differences between the two eyes enhance perceived glossiness. 
Experiment 3: Effects of retinal image changes during stereo viewing
In Experiment 3, we examined whether temporal changes of the retinal image caused by the observer's head motion enhance perceived glossiness even when the glossiness has already been enhanced by image differences between the two eyes. The experimental conditions were the same as those of Experiment 1 except that both the dynamic and static stimuli were presented stereoscopically. As in Experiment 1, the glossiness under the static condition was systematically underestimated compared to the dynamic condition (Figure 3; t(9) > 4.29, p < 0.005 in each subject). In addition, the static stimulus required higher specular reflectance than the dynamic stimulus for the same perceived glossiness (t(9) > 2.05, p < 0.05 in each of three subjects; t(9) = 0.605, p > 0.1 in one subject). The results were very similar even when the subjects moved their heads when observing the static stimulus as well as when observing the dynamic stimulus (the control experiment; see Figure 3). The descriptions of the results remain true for the rating task (t(9) > 7.66, p < 0.0001 in each subject) and the matching task (t(9) > 2.51, p < 0.05 in each subject). These results suggest that even when the glossiness has already been enhanced by image differences between the two eyes, temporal changes in the retinal image caused by the observer's head motion further enhance the perceived glossiness. 
However, the magnitude of the increase in glossiness was smaller in Experiment 3 than in Experiment 1 (Figure 3, main experiments; t(18) > 3.97, p < 0.0005 in each subject). Similarly, the differences in the value of specular reflectance were smaller in Experiment 3 than in Experiment 1 (Figure 3, main experiments; t(18) > 3.88, p < 0.0005 in each subject). These results suggest that when the binocular cue is already available, the effects of adding the temporal cue are attenuated. 
Experiment 4: Effects of stereo viewing when the retinal image changes due to head motion
In Experiment 4, we examined whether image differences between the two eyes in stereo viewing enhance perceived glossiness even when the glossiness has already been enhanced by temporal changes in the retinal image caused by the observer's head motion. The experimental conditions were the same as those of Experiment 2, except that during the experiment, the subjects moved their heads back and forth laterally, and both the stereo and non-stereo stimuli changed temporally in luminance and in 2D shape on the monitor, depending on the observer's head position. As in Experiment 2, glossiness under the non-stereo condition was systematically underestimated compared to the stereo condition (Figure 3; t(9) > 6.41, p < 0.0001 in each subject). In addition, the non-stereo stimulus required higher specular reflectance than the stereo stimulus for the same perceived glossiness (t(9) > 6.91, p < 0.0001 in each of three subjects; t(9) = 1.43, p = 0.0938 in one subject). These results suggest that even when the glossiness has already been enhanced by temporal changes of the retinal image caused by the observer's head motion, image differences between the two eyes further enhance the perceived glossiness. 
However, the magnitude of the increase in glossiness was smaller in Experiment 4 than in Experiment 2 (Figure 3, main experiments; t(18) > 7.28, p < 0.0001 in each of three subjects; t(18) = 0.882, p = 0.195 in one subject). Similarly, the differences in the value of specular reflectance were smaller in Experiment 4 than in Experiment 2 (Figure 3, main experiments; t(18) > 7.98, p < 0.0001 in each of three subjects; t(18) = 0.306, p = 0.382 in one subject). These results suggest that when the temporal cue is already available, the effects of adding the binocular cue are attenuated. 
Discussion
Effects of temporal changes of the retinal image due to head motion on glossiness
In Experiment 1, we found that temporal changes of the retinal image caused by the head motion of the observer enhance perceived glossiness. In Experiment 3, we also found that this holds true even when the glossiness has already been enhanced by image differences between the two eyes. As far as we know, there have been no experimental data showing that temporal changes of the retinal image caused by the observer's head motion enhance perceived glossiness, although the phenomenon was briefly described by Helmholtz (1867/2005) and Hering (1874/1964). 
More recently, it has been reported that a square flickering at 16 Hz between light and dark appears lustrous (Anstis, 2000). This lustrous impression has been reported to be strong when the light and dark values straddle the surround luminance. This lustrous impression, which the author called "monocular luster," is similar to the effect of retinal image changes due to head motion reported in the present study in that the luminance of the stimulus changes. Similarly, Burr, Ross, and Morrone (1986) reported that contrast reversal of horizontal and vertical plaid patterns appears lustrous. There are several differences in stimulus conditions between the two previous studies and ours. First, the frequencies of the luminance change in the experiments of Anstis (16 Hz) and Burr et al. (4–30 Hz) are somewhat higher than ours (1.0 Hz when averaged in the temporal domain). If the contrast threshold for a lustrous appearance at 1.0 Hz is extrapolated from the graph of Burr et al., our stimulus falls below threshold, implying little or no lustrous appearance. Second, our stimulus had many surface facets whose luminances changed at different points in time, as well as a motion parallax cue to the 3D structure of the whole surface. It is unlikely, then, that the two previous studies' phenomena and ours are identical. However, it might be important to note that the data of Burr et al. could be related to the physics of surface reflectance properties. That is, according to their data, contrast sensitivity for a lustrous appearance is low when temporal frequency is low. This tendency corresponds to the fact that when the value of n of the Phong model (Phong, 1975) is small, the reflectance does not change sharply with the reflection direction, which would correspond to low glossiness. Moreover, according to Burr et al.'s data, for a given temporal frequency, when the contrast increases, the surface appearance changes from matte to glossy. 
This tendency corresponds to the fact that when the specular reflectance of the same model increases, the change in reflectance with the reflection direction increases, which would correspond to an increase in glossiness. Although Burr et al. explained the phenomenon by the rivalrous interaction of motion detectors tuned to opposing directions of motion, this phenomenon also seems to be related to the physics of surface reflectance properties. 
It has also been suggested that the motion of specular highlights and that of the reflected images on a smooth surface affect perceived glossiness (Hartung & Kersten, 2002; Hurlbert, Cumming, & Parker, 1991a, 1991b). For instance, Hurlbert et al. (1991b) reported that most observers perceived an ellipsoid on which the specular highlight moved relative to the texture as glossier than those with a "stuck-on" highlight. Thus, one might suppose that the differences in perceived glossiness between the static and dynamic stimuli used in Experiments 1 and 3 arose because of motion parallax between the specular highlights and the surface. However, our stimulus did not contain motion parallax between specular highlights and the surface, because the luminance was spatially uniform across each facet of the stimulus surface we used, and thus there were no highlight peaks or boundaries to define highlight motion relative to the surface. Note, however, that the previous studies' stimuli also contained luminance changes, and such luminance changes could themselves have affected the perceived glossiness. 
We found effects of temporal changes of the retinal image due to the observer's head motion on glossiness perception. However, not only self-motion but also object motion and changes in light source direction produce temporal changes of the retinal image. That is, a change in the relative geometric relations between the eye location, the surface orientation, and the light source direction causes retinal image changes that could provide information about how the reflectance changes with the reflection direction, which should be strongly related to glossiness. Further computational and psychophysical investigation of such relationships is required to clarify human gloss perception. 
Effects of image differences between the two eyes on glossiness
In Experiment 2, we found that image differences between the two eyes in stereo viewing enhance perceived glossiness. In Experiment 4, we also found that this holds true even when the glossiness has already been enhanced by the temporal changes of the retinal image caused by the observer's head motion. 
It has been reported that the binocular disparity between a specular highlight and a smooth surface affects perceived glossiness (Blake & Bulthoff, 1990; Hurlbert et al., 1991a; Obein, Knoblauch, & Vienot, 2004; Obein, Pichereau et al., 2004; Wendt, Faul, & Mausfeld, 2008). Although specular reflection typically produces highlights on a smoothly curved surface, our stimulus did not contain highlight disparities, because the surface we used was composed of many flat facets with different orientations, each of which had uniform luminance and thus no highlight peaks or boundaries to define a highlight disparity. 
Rather, luminance differences between the eyes may have played an important role in enhancing glossiness in our experiments. It is assumed that luminance differences between the eyes give the surface a lustrous impression (Brewster, 1852, 1855, 1861; Dove, 1850; Helmholtz, 1867/2005; Hering, 1874/1964; Howard, 1995; Ludwig, Pieper, & Lachnit, 2007; Paille, Monot, Dumont-Becle, & Kemeny, 2001; Pieper & Ludwig, 2001, 2002; Preston, 1931; Tyler, 2004; but see also Anstis, 2000, who found that a reversal of contrast polarities is crucial for lustrous appearance). This phenomenon is called “binocular luster” or “stereoscopic luster” (Helmholtz, 1867/2005). We showed that the combination of the luminance differences between the eyes and disparity distribution of the surface enhances perceived glossiness produced by monocular static cues (in Experiment 2) and that produced by the combination of the monocular static and temporal cues to gloss (in Experiment 4). 
Then, are the luminance differences between the two eyes unrelated to the highlight disparity? We think not. Although these two cues did not coexist in our stimulus, in general they can coexist if the surface is smooth and large enough. Indeed, what has been attributed to highlight disparity could actually be an effect of the luminance differences between the two eyes at corresponding points. It would therefore be useful to demonstrate the effects of highlight disparity in a situation in which luminance differences between the two eyes are not available. 
How did the temporal cue and the binocular cue enhance perceived glossiness?
Although many previous studies have reported the effect of individual cues to glossiness (Anstis, 2000; Beck & Prazdny, 1981; Berzhanskaya et al., 2005; Blake & Bulthoff, 1990; Hartung & Kersten, 2002; Hurlbert et al., 1991a, 1991b; Motoyoshi et al., 2007; Wendt et al., 2008), little is known about how multiple cues interact or are integrated for the perception of surface glossiness. Here we try to explain the results of the present experiments using simple models of the interaction and integration of glossiness cues. At least two models can explain the data. 
The first model is disambiguation, which is well known as an explanation of certain phenomena in depth perception from multiple depth cues (Howard & Rogers, 1995, 2002). This model applies when the information estimated from one cue is ambiguous: the ambiguity is resolved by other cues. Here, the ambiguous information is the origin of the luminance distribution on the stimulus surface. 
The luminance distribution on an object surface typically arises from three factors: the lightness distribution (i.e., texture), the distribution of incident angles or of the local surface orientations that cause it (i.e., shading), and the distribution of angles between the reflection direction and the direction to the viewpoint (i.e., the distribution of specularly reflecting areas, which includes specular highlights on a smooth surface). To perceive surface reflectance properties, including lightness and glossiness, the visual system has to estimate the relative contributions of these three factors (Preston, 1931). 
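The three-factor decomposition above can be illustrated with a toy Phong-style composition. This is only a sketch: the function name, parameter names, and numerical values are our own illustrative choices, not the stimulus parameters used in the experiments.

```python
def surface_luminance(albedo, cos_incident, cos_specular,
                      E=1.0, k_d=0.9, k_s=0.1, shininess=50):
    """Toy composition of the three factors: texture (albedo),
    shading (cosine of the incident angle), and the specular term
    (cosine of the angle between the mirror-reflection direction
    and the direction to the viewpoint, raised to a sharpness
    exponent, as in a Phong-style model)."""
    diffuse = k_d * albedo * max(cos_incident, 0.0)
    specular = k_s * max(cos_specular, 0.0) ** shininess
    return E * (diffuse + specular)

# Only the specular factor depends on the viewpoint: a matte point
# (k_s = 0) keeps the same luminance, while a glossy point brightens
# sharply when the viewpoint nears the mirror direction.
matte = surface_luminance(0.5, 0.8, 1.0, k_s=0.0)
glossy_on_axis = surface_luminance(0.5, 0.8, 1.0)   # at mirror direction
glossy_off_axis = surface_luminance(0.5, 0.8, 0.5)  # away from it
```

Disentangling the relative contributions of these terms from a single luminance value is exactly the ambiguity that the disambiguation model addresses.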
According to the disambiguation model, in the static stimulus of Experiment 1 the contribution of the last (specular) factor might have been underestimated because no luminance gradient of a specular highlight was present, which would have led to underestimated glossiness. In the dynamic stimulus of the same experiment, by contrast, the luminance of the surface changed over time in concordance with the motion parallax of the surface. Such concordant change would raise the possibility that the luminance distribution was partly due to specularly reflecting areas, increasing the relative contribution the visual system assigns to them and thereby raising perceived glossiness above that of the static stimulus. Similarly, in Experiment 3, the perceived glossiness of the dynamic stimulus would have exceeded that of the static stimulus for the same reason. Comparing the two experiments, the increase in glossiness can be smaller in Experiment 3 than in Experiment 1 because, in the static stimulus of Experiment 3, the estimated contribution of the specular factor would already have been enhanced by the binocular cue. 
The second model is linear combination (i.e., weighted averaging; e.g., Landy, Maloney, Johnston, & Young, 1995; Maloney & Landy, 1989). This model is well known for explaining many phenomena in depth perception from multiple depth cues (Landy et al., 1995). In this model, glossiness is first estimated from each cue independently; the estimates are then averaged with assigned weights. 
According to the linear combination model, the glossiness cues available in the static stimulus of Experiment 1 were monocular static cues, which include the luminance of specularly reflecting areas and the luminance contrast between those areas and the rest of the surface (Ferwerda et al., 2001; Hunter & Harold, 1987; Pellacini et al., 2000). Since the stimuli used in the present study contained no luminance gradient of a specular highlight (see the Introduction and Stimuli sections), the glossiness specified by the monocular static cues would be underestimated. Assuming that the glossiness estimated from the monocular static cues was lower than that estimated from the temporal cue and from the binocular cue, a linear combination with suitable cue weights can explain the results: the perceived glossiness of the dynamic stimulus of Experiment 1 would exceed that of the static stimulus because of the higher glossiness estimated from the temporal cue, and likewise in Experiment 3. Comparing the two experiments, the increase in glossiness can be smaller in Experiment 3 than in Experiment 1 with suitable weights, because the glossiness estimated from the binocular cue is assumed to be higher than that specified by the monocular static cues. 
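The weighted-averaging account above can be sketched numerically. All cue names, weights, and glossiness values below are illustrative assumptions, not quantities fitted to the data; the sketch only shows that such weights reproduce the qualitative pattern.

```python
def combined_gloss(estimates, weights):
    """Weighted average of cue-wise glossiness estimates (the linear
    combination scheme of Landy et al., 1995, applied to gloss).
    `estimates` maps cue names to estimated glossiness; `weights`
    maps cue names to weights, normalized over the cues present."""
    total_w = sum(weights[c] for c in estimates)
    return sum(weights[c] * g for c, g in estimates.items()) / total_w

weights = {'monocular_static': 1.0, 'temporal': 2.0, 'binocular': 2.0}

# Experiment 1: static (monocular cues only) vs. adding the temporal cue.
g_static  = combined_gloss({'monocular_static': 0.2}, weights)
g_dynamic = combined_gloss({'monocular_static': 0.2, 'temporal': 0.7},
                           weights)

# Experiment 3: the same temporal cue added on top of the binocular cue
# yields a smaller increment, matching the observed pattern.
g_stereo         = combined_gloss({'monocular_static': 0.2,
                                   'binocular': 0.7}, weights)
g_stereo_dynamic = combined_gloss({'monocular_static': 0.2,
                                   'temporal': 0.7,
                                   'binocular': 0.7}, weights)
```

With these (assumed) numbers, adding the temporal cue raises the combined estimate substantially in the monocular case but only modestly once the binocular cue is already present.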
We regard linear combination as a possible numerical expression of disambiguation, rather than viewing the two models as conflicting alternatives. 
Is it reasonable that the temporal cue and the binocular cue affect perceived glossiness? If the linear combination model or the disambiguation model is valid as an explanation of the present results, it would be, because the magnitude of variation in reflectance across reflection directions can be misestimated from monocular static cues, which exist in a single image, since the estimation task is an ill-posed problem (Dror, Adelson, & Willsky, 2001). The two cues may correct the misestimated magnitude. It is theoretically possible to estimate the magnitude of the reflectance variation from luminance obtained at different viewpoints (see Appendix A for details). Thus, perceived glossiness may become more accurate (i.e., approach the veridical value). 
Does the human visual system disambiguate surface properties in a binary fashion?
A few previous studies found that an object on which the specular highlight moves relative to the surface appears glossier than one with a "stuck-on" highlight (Hartung & Kersten, 2002; Hurlbert et al., 1991a, 1991b). This phenomenon can be explained by disambiguating whether the highlight-like feature is really a highlight or something else (i.e., texture or shading). This style of disambiguation is "binary," because the choice is between glossy and matte. 
However, binary disambiguation cannot explain the effects of the binocular cue found in Experiment 3. With binary disambiguation alone, the binocular cue in Experiment 2 should have established that the stereo surface was glossy; then, in Experiment 3, the dynamic surface could not be glossier than the static surface, because the static surface would already have been classified as glossy by the binocular cue. Rather, as explained above, we presume that disambiguation operates on the relative magnitudes of the contributions of different types of reflection (i.e., specular and diffuse) to surface luminance (Preston, 1931), and that it works in a more continuous fashion. 
Although our data do not rule out the existence of a binary disambiguation, they suggest that a more continuous type of disambiguation occurs in the process of computing glossiness. 
Task validity in terms of possible asymmetry due to a limited response range
Fleming et al. (2003) reported that in their matching experiment, in which subjects matched a pair of differently illuminated surfaces with 10 levels of specular reflectance, the full range of specular reflectance of the fixed stimulus was matched by a smaller range of the adjusted stimulus. This shows that there is an asymmetry in the matching of surface reflectance properties, depending on which condition is adjusted and which is fixed. However, this asymmetry is likely due to a tendency (i.e., a response bias) for subjects to "avoid the highest values on the scale" (Fleming et al., 2003). Doerschner, Boyaci, and Maloney (2010) reported a similar response bias causing an asymmetry in a pilot experiment employing the method of adjustment. 
Did this asymmetry affect our data? In the matching experiments of the present study, the values of specular reflectance available for adjustment ranged from 0.0 to 1.0, whereas the maximum specular reflectance actually used for matching was 0.52, which was not close to the available maximum (1.0) but roughly in the middle of the available range. Thus, such an asymmetry is unlikely to have affected our matching data. Similarly, in the rating experiment, the available rating scale ranged upward from 0 without limit rather than being capped at 10, so our rating data are also unlikely to have been affected by such an asymmetry. 
Similarities between the temporal cue and the binocular cue
In Experiments 1 and 2, from the subjective reports, we found that the appearance of gloss produced by the binocular cue (image differences between the eyes) is similar to that produced by the monocular temporal cue. 
It has been reported that the subjective appearance of depth impression of a random-dot pattern produced by binocular disparity is similar to that produced by motion parallax (Helmholtz, 1867/2005; Howard & Rogers, 1995, 2002; Rogers & Graham, 1979, 1982; Wheatstone, 1838). Depth perception based on binocular disparity and that based on motion parallax have considerable similarities (Howard & Rogers, 2002), including the contributing cortical area (area MT; DeAngelis, Cumming, & Newsome, 1998; Nadler, Angelaki, & DeAngelis, 2008; Uka & DeAngelis, 2003, 2004, 2006), spatial frequency dependency of depth detection threshold (Rogers & Graham, 1982), and the task of the mechanism (Rogers & Graham, 1982). 
Glossiness perception based on temporal cues and that based on binocular cues, as reported here, also share a similarity in their tasks: for both mechanisms, the task is to detect differences in luminance (and possibly in position) over time during head motion and between the two eyes, respectively. Monocular luster and binocular luster are also known to show similar effects of polarity reversal (Anstis, 2000). However, clarifying the relationship between the two mechanisms underlying glossiness from these two cues will require further study. 
Effects of mesopic vision
In the present experiments, the maximum luminance of the display measured through the stereo goggles was 1.99 cd/m2, which is within the range of mesopic vision. Such a low luminance was not originally intended but resulted from the low transmissivity of the goggles and the use of only the red phosphor, which was chosen to minimize the cross talk of the goggles. It is unlikely that the temporal and binocular cues affect glossiness perception only in mesopic vision. However, since rod cells as well as cone cells are involved in mesopic vision, and the two have different sensitivities to luminance distribution across space (e.g., Stiles, 1959), the magnitudes of the two cues' effects in photopic vision could differ somewhat from those in mesopic vision. 
Conclusions
Based on a computational consideration of glossiness perception, we found that it is theoretically possible to estimate surface reflectance properties from luminance obtained at different viewpoints. Using psychophysical methods, we also found that perceived glossiness is strongly enhanced by temporal changes in the retinal image caused by the observer's head motion and by image differences between the two eyes in stereo viewing. Together, these findings suggest that the human visual system utilizes rational methods for the perception of surface glossiness. As far as we know, the present study is the first to test this theoretical consideration of glossiness perception psychophysically. Our data also suggest that the combination of multiple retinal images plays an important role in glossiness perception, just as it is assumed to do in 3D shape perception (i.e., 3D shape perception from binocular disparity and from motion parallax). Owing to the existence of these two cues to 3D shape, whose theoretical aspects have been well studied, the mechanisms of 3D shape perception have been investigated extensively. Thus, further studies on the theoretical aspects of glossiness perception could be important in clarifying its mechanisms. 
Supplementary Materials
Supplementary Figure 1 
Supplementary Figure 1. A captured movie of the dynamic and static stimuli used in Experiment 1. The stimuli were viewed monocularly in this experiment. Note that the specular reflectance (Rs) of the Phong model (Phong, 1975) was raised in the movie from 0.08, the value used in the rating experiment, to 0.15, to compensate for the weakened effect of the temporal cue caused by image degradation during movie capture. The dimensions of the stimulus movie are identical to those of the stimuli used in the experiments (1024 × 768 pixels). 
Supplementary Figure 2 
Appendix A
We examined whether it is theoretically possible to estimate how reflectance changes with reflection direction by using the intensity values (luminance) of the identical point on the surface obtained at two different viewpoints. 
Physically, the luminance values of a surface point for two viewpoints (L1, L2) are described as 
L1 = r1 E, (A1)
and 
L2 = r2 E, (A2)
where r1 and r2 are the reflectances for the combinations of the directions from the surface point to the respective viewpoints and the distributions of incident angles, while E is the whole intensity of the incident light in units of luminance (Boyaci, Maloney, & Hersh, 2003). These equations are essentially identical to Equation 1 of Bloj et al. (2004), except that our equations do not assume a Lambertian matte surface but a general glossy surface. The difference in luminance between the two viewpoints is then 
L1 − L2 = E d1,2, (A3)
where d1,2 ≡ r1 − r2. Therefore, the reflectance difference between the two viewpoints is 
d1,2 = (L1 − L2) / E. (A4)
Thus, theoretically, if the intensity of the incident light can be estimated, it is possible to estimate the reflectance difference between the two viewpoints as 
d1,2′ = (L1 − L2) / E′, (A5)
where each primed variable denotes the estimate of the corresponding physical quantity. 
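Equation A5 amounts to a one-line computation. A minimal sketch, with purely illustrative luminance and illuminant values:

```python
def reflectance_difference(L1, L2, E_est):
    """Estimate of the reflectance difference d1,2' between two
    viewpoints from their luminances and an estimate E' of the
    incident light intensity (Equation A5)."""
    return (L1 - L2) / E_est

# If one eye sits nearer the mirror-reflection direction, its luminance
# is higher, so d' is positive, signalling reflectance variation.
d = reflectance_difference(L1=30.0, L2=18.0, E_est=100.0)  # d = 0.12
```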
Then, is it possible for the human visual system to estimate the intensity of the incident light? We assume so. Theoretically, accurate estimation of surface albedo (lightness) is also assumed to require discounting changes in the intensity of the incident light (Bloj et al., 2004; Gilchrist, 1977; Gilchrist et al., 1999; Snyder, Doerschner, & Maloney, 2005). The actual human visual system is also assumed to discount it to some extent, resulting in lightness constancy (Gilchrist, 1977; Gilchrist et al., 1999; Snyder et al., 2005), although the constancy may be incomplete (Snyder et al., 2005), and the incident light intensity "estimated" for lightness perception could differ from what is "explicitly perceived" (Rutherford & Brainard, 2002). Thus, we believe it is not implausible that the human visual system estimates the intensity of the incident light and uses it for the perception of surface glossiness, although the estimate could deviate from the veridical value in some situations (Olkkonen & Brainard, 2010). 
Physically, the reflectance difference between the two viewpoints (d1,2) is generally smaller than, or at most equal to, the maximum difference in reflectance among all combinations of incident and reflection directions. Thus, if d1,2′ can be estimated, it is theoretically possible to infer that the surface reflectance changes across reflection directions by some amount greater than or equal to d1,2′. The human visual system could use a model to estimate from d1,2′ the reflectance distribution across reflection directions (Figure A1). 
Figure A1
 
A model the human visual system could use to estimate the reflectance distribution function across reflection directions. The difference (dmax′) in reflectance between the peak and the bottom of the function should be larger than or equal to d1,2′. If the two viewpoints are the two eyes, the difference (D) between the two viewpoints in angular distance from the peak of the distribution is smaller than or equal to the vergence angle, which could be estimated. Thus, from the estimated D (D′) and d1,2′, together with a model function (e.g., a Gaussian distribution plus a constant), the angular extent of the reflectance function (e.g., its standard deviation) could also be estimated to be smaller than or equal to a certain value, because the maximum slope of the function should be at least (d1,2′/D′).
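For the Gaussian-plus-constant model function, the slope constraint in the Figure A1 caption yields an explicit upper bound on the lobe's spread: a Gaussian A·exp(−x²/(2σ²)) has its steepest slope A·e^(−1/2)/σ (at x = σ), so requiring that slope to be at least d1,2′/D′ gives σ ≤ dmax′·D′·e^(−1/2)/d1,2′. A minimal sketch with illustrative numbers (the function name and the specific values are our own, not from the paper):

```python
import math

def max_sigma(d_prime, D_prime, d_max):
    """Upper bound on the standard deviation (angular spread) of a
    Gaussian reflectance lobe of height d_max, given that the lobe's
    steepest slope must be at least d_prime / D_prime (Figure A1).
    For A*exp(-x**2 / (2*sigma**2)) the steepest slope is
    A*exp(-0.5)/sigma, so sigma <= d_max * D_prime * exp(-0.5) / d_prime."""
    return d_max * D_prime * math.exp(-0.5) / d_prime

# Illustrative numbers: estimated reflectance difference d' = 0.12,
# interocular angular separation D' = 2 deg, assumed lobe height 0.3.
sigma_bound = max_sigma(d_prime=0.12, D_prime=2.0, d_max=0.3)  # about 3.03 deg
```

A sharper (smaller σ) lobe than this bound would satisfy the constraint as well; the bound only excludes lobes too broad to produce the observed interocular difference.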
On the other hand, if there is no difference in luminance between the two viewpoints, then d1,2 or E is zero. If E is zero, the surface is invisible. Thus, in this situation, either there is no change in reflectance with the reflection direction, the two viewpoints are both far from the mirror-reflection direction, or the reflectances for the two viewpoints happen to be equal (e.g., the two viewpoints are exactly equidistant from the mirror-reflection direction). Now consider a surface with some extent and a variety of surface orientations. If there are no luminance differences between the two viewpoints at any point on the surface, the surface is likely to have a uniform reflectance across reflection directions, because it is unlikely that at every point the two viewpoints are far from the mirror-reflection direction or the reflectances happen to be equal. 
In summary, if there is a luminance difference between the two viewpoints, it is theoretically possible to infer that the reflectance varies across reflection directions by an amount that is larger than a certain estimable value and larger than zero. In addition, if there is no luminance difference between the two viewpoints at any point on a surface with a variety of orientations, the surface is likely to have a uniform reflectance across reflection directions. 
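The summary above amounts to a simple decision rule. A sketch under stated assumptions: the threshold `eps`, the function name, and the sample values are illustrative, and real noise handling would need more care.

```python
def reflectance_variation(luminance_pairs, E_est, eps=1e-3):
    """Given (L1, L2) luminance pairs sampled at many surface points
    with varied orientations, return a lower bound on the reflectance
    variation across reflection directions, or 0.0 if no pair differs
    (suggesting a uniform, matte-like reflectance)."""
    d_primes = [abs(L1 - L2) / E_est for L1, L2 in luminance_pairs]
    d_max = max(d_primes)
    return d_max if d_max > eps else 0.0

# A glossy-looking surface: one point shows an interocular luminance
# difference, so reflectance varies by at least about 0.12.
glossy_bound = reflectance_variation([(30.0, 18.0), (20.0, 20.0)], 100.0)
# A matte-looking surface: no differences anywhere.
matte_bound = reflectance_variation([(20.0, 20.0), (15.0, 15.0)], 100.0)
```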
Here we discussed a case involving only two images. With more images, obtained from head motion, for example, it should be possible to establish a more reliable estimate of the glossiness parameters. 
Acknowledgments
We thank the anonymous reviewers for their comments on an earlier version of this manuscript. 
Commercial relationships: none. 
Corresponding author: Yuichi Sakano. 
Email: yuichi@nict.go.jp. 
Address: Universal Media Research Center, National Institute of Information and Communications Technology, 2-2-2 Hikaridai, Keihanna Science City, Kyoto 619-0288, Japan. 
References
Anderson B. L. Kim J. (2009). Image statistics do not explain the perception of gloss and lightness. Journal of Vision, 9, (11):10, 1–17, http://www.journalofvision.org/content/9/11/10, doi:10.1167/9.11.10. [PubMed] [Article] [CrossRef] [PubMed]
Anstis S. M. (2000). Monocular lustre from flicker. Vision Research, 40, 2551–2556. [CrossRef] [PubMed]
ASTM D523-08 (2008). Standard test method for specular gloss. West Conshohocken, PA: ASTM International.
Beck J. Prazdny S. (1981). Highlights and the perception of glossiness. Perception & Psychophysics, 30, 407–410. [CrossRef] [PubMed]
Berzhanskaya J. Swaminathan G. Beck J. Mingolla E. (2005). Remote effects of highlights on gloss perception. Perception, 34, 565–575. [CrossRef] [PubMed]
Blake A. Bulthoff H. (1990). Does the brain know the physics of specular reflection? Nature, 343, 165–168. [CrossRef] [PubMed]
Bloj M. Ripamonti C. Mitha K. Hauck R. Greenwald S. Brainard D. H. (2004). An equivalent illuminant model for the effect of surface slant on perceived lightness. Journal of Vision, 4, (9):6, 735–746, http://www.journalofvision.org/content/4/9/6, doi:10.1167/4.9.6. [PubMed] [Article] [CrossRef]
Boyaci H. Maloney L. T. Hersh S. (2003). The effect of perceived surface orientation on perceived surface albedo in binocularly viewed scenes. Journal of Vision, 3, (8):2, 541–553, http://www.journalofvision.org/content/3/8/2, doi:10.1167/3.8.2. [PubMed] [Article] [CrossRef]
Brewster D. (1852). Examination of Dove's theory of lustre. Athenaeum, 1041.
Brewster D. (1855). On the binocular vision of surfaces of different colours. Report of British Association, 2, 9.
Brewster D. (1861). On binocular lustre. Report of British Association, 2, 29–31.
Burr D. C. Ross J. Morrone M. C. (1986). A spatial illusion from motion rivalry. Perception, 15, 59–66. [CrossRef] [PubMed]
DeAngelis G. C. Cumming B. G. Newsome W. T. (1998). Cortical area MT and the perception of stereoscopic depth. Nature, 394, 677–680. [CrossRef] [PubMed]
Doerschner K. Boyaci H. Maloney L. T. (2010). Estimating the glossiness transfer function induced by illumination change and testing its transitivity. Journal of Vision, 10, (4):8, 1–9, http://www.journalofvision.org/content/10/4/8, doi:10.1167/10.4.8. [PubMed] [Article] [CrossRef] [PubMed]
Dove H. W. (1850). Ueber die Ursachen des Glanzes und der Irradiation, abgeleitet aus chromatischen Versuchen mit dem Stereoskop. Poggendorffs Annalen, 83, 169–183.
Dror R. O. Adelson E. H. Willsky A. S. (2001). Surface reflectance estimation and natural illumination statistics. In Proceedings of IEEE Workshop on Statistical and Computational Theories of Vision. Vancouver, Canada.
Ferwerda J. A. Pellacini F. Greenberg D. P. (2001). A psychophysically based model of surface gloss perception. Proceedings of SPIE, 4299, 291–301.
Fleming R. W. Dror R. O. Adelson E. H. (2003). Real-world illumination and the perception of surface reflectance properties. Journal of Vision, 3, (5):3, 347–368, http://www.journalofvision.org/content/3/5/3, doi:10.1167/3.5.3. [PubMed] [Article] [CrossRef]
Gilchrist A. L. (1977). Perceived lightness depends on perceived spatial arrangement. Science, 195, 185–187. [CrossRef] [PubMed]
Gilchrist A. Kossyfidis C. Bonato F. Agostini T. Cataliotti J. Li X. et al. (1999). An anchoring theory of lightness perception. Psychological Review, 106, 795–834. [CrossRef] [PubMed]
Hartung B. Kersten D. (2002). Distinguishing shiny from matte [Abstract]. Journal of Vision, 2, (7):551, 551a, http://www.journalofvision.org/content/2/7/551, doi:10.1167/2.7.551. [CrossRef]
Helmholtz H. (2005). Treatise on physiological optics. Mineola, NY: Dover Publications. (Original work published 1867)
Hering E. (1964). Outlines of a theory of the light sense. Cambridge, MA: Harvard University Press. (Original work published 1874)
Howard I. P. (1995). Depth from binocular rivalry without spatial disparity. Perception, 24, 67–74. [CrossRef] [PubMed]
Howard I. P. Rogers B. J. (1995). Binocular vision and stereopsis. New York: Oxford University Press.
Howard I. P. Rogers B. J. (2002). Seeing in depth. Thornhill, ON: I Porteous.
Hunter R. S. Harold R. W. (1987). The measurement of appearance (2nd ed.). New York: Wiley.
Hurlbert A. C. Cumming B. G. Parker A. J. (1991a). Constraints of specularity motion on glossiness and shape perception. Paper presented at the ECVP 1991.
Hurlbert A. C. Cumming B. G. Parker A. J. (1991b). Recognition and perceptual use of specular reflections. Investigative Ophthalmology & Visual Science, 32, 105.
Kim J. Anderson B. L. (2010). Image statistics and the perception of surface gloss and lightness. Journal of Vision, 10, (9):3, 1–17, http://www.journalofvision.org/content/10/9/3, doi:10.1167/10.9.3. [PubMed] [Article] [CrossRef]
Landy M. S. Maloney L. T. Johnston E. B. Young M. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389–412. [CrossRef] [PubMed]
Ludwig I. Pieper W. Lachnit H. (2007). Temporal integration of monocular images separated in time: Stereopsis, stereoacuity, and binocular luster. Perception & Psychophysics, 69, 92–102. [CrossRef] [PubMed]
Maloney L. T. Landy M. S. (1989). A statistical framework for robust fusion of depth information. In Pearlman W. A. (Ed.), Visual communications and image processing: IV. Proceedings of the SPIE (vol. 1199, 1154–1163). Bellingham: SPIE-International Society for Optical Engineering.
Motoyoshi I. Nishida S. Sharan L. Adelson E. H. (2007). Image statistics and the perception of surface qualities. Nature, 447, 206–209. [CrossRef] [PubMed]
Nadler J. W. Angelaki D. E. DeAngelis G. C. (2008). A neural representation of depth from motion parallax in macaque visual cortex. Nature, 452, 642–645. [CrossRef] [PubMed]
Nagata M. Okajima K. Osumi M. (2007). Quantification of gloss perception as a function of stimulus duration. Optical Review, 14, 406–410. [CrossRef]
Nishida S. Shinya M. (1998). Use of image-based information in judgments of surface-reflectance properties. Journal of the Optical Society of America A, 15, 2951–2965. [CrossRef]
Obein G. Knoblauch K. Vienot F. (2004). Difference scaling of gloss: Nonlinearity, binocularity, and constancy. Journal of Vision, 4, (9):4, 711–720, http://www.journalofvision.org/content/4/9/4, doi:10.1167/4.9.4. [PubMed] [Article] [CrossRef]
Obein G. Pichereau T. Harrar M. Monot A. Knoblauch K. Vienot F. (2004). Does binocular vision contribute to gloss perception? [Abstract]. Journal of Vision, 4, (11):73, 73a, http://www.journalofvision.org/content/4/11/73, doi:10.1167/4.11.73. [CrossRef]
Olkkonen M. Brainard D. H. (2010). Perceived glossiness and lightness under real-world illumination. Journal of Vision, 10, (9):5, 1–19, http://www.journalofvision.org/content/10/9/5, doi:10.1167/10.9.5. [PubMed] [Article] [CrossRef] [PubMed]
Paille D. Monot A. Dumont-Becle P. Kemeny A. (2001). Luminance binocular disparity for 3D surface simulation. Proceedings of SPIE, 4299, 622–633.
Pellacini F. Ferwerda J. A. Greenberg D. P. (2000). Toward a psychophysically based light reflection model for image synthesis. Computer Graphics, 34, 55–64. [CrossRef]
Phong B. T. (1975). Illumination for computer generated pictures. Communications of the ACM, 18, 311–317. [CrossRef]
Pieper W. Ludwig I. (2001). Binocular vision: Rivalry, stereoscopic lustre, and sieve effect. Perception, 30, ECVP Abstract Supplement.
Pieper W. Ludwig I. (2002). The minimum luminance-contrast requirements for stereoscopic lustre. Perception, 31, ECVP Abstract Supplement.
Preston J. M. (1931). Theories of lustre. Journal of the Society of Dyers and Colourists, 47, 136–143. [CrossRef]
Rogers B. Graham M. (1979). Motion parallax as an independent cue for depth perception. Perception, 8, 125–134. [CrossRef] [PubMed]
Rogers B. Graham M. (1982). Similarities between motion parallax and stereopsis in human depth perception. Vision Research, 22, 261–270. [CrossRef] [PubMed]
Rutherford M. D. Brainard D. H. (2002). Lightness constancy: A direct test of the illumination-estimation hypothesis. Psychological Science, 13, 142–149. [CrossRef] [PubMed]
Snyder J. L. Doerschner K. Maloney L. T. (2005). Illumination estimation in three-dimensional scenes with and without specular cues. Journal of Vision, 5, (10):8, 863–877, http://www.journalofvision.org/content/5/10/8, doi:10.1167/5.10.8. [PubMed] [Article] [CrossRef]
Stiles W. S. (1959). Color vision: The approach through increment threshold sensitivity. Proceedings of the National Academy of Sciences, 45, 100–114. [CrossRef]
Tyler C. W. (2004). Binocular vision. In Tasman W. Jaeger E. A. (Eds.), Duane's foundations of clinical ophthalmology. (vol. 2, pp. 1–29). Philadelphia, PA: J.B. Lippincott Co.
Uka T. DeAngelis G. C. (2003). Contribution of middle temporal area to coarse depth discrimination: Comparison of neuronal and psychophysical sensitivity. Journal of Neuroscience, 23, 3515–3530. [PubMed]
Uka T. DeAngelis G. C. (2004). Contribution of area MT to stereoscopic depth perception: Choice-related response modulations reflect task strategy. Neuron, 42, 297–310. [CrossRef] [PubMed]
Uka T. DeAngelis G. C. (2006). Linking neural representation to function in stereoscopic depth perception: Roles of the middle temporal area in coarse versus fine disparity discrimination. Journal of Neuroscience, 26, 6791–6802. [CrossRef] [PubMed]
Wendt G. Faul F. Mausfeld R. (2008). Highlight disparity contributes to the authenticity and strength of perceived glossiness. Journal of Vision, 8, (1):14, 1–10, http://www.journalofvision.org/content/8/1/14, doi:10.1167/8.1.14. [PubMed] [Article] [CrossRef] [PubMed]
Wheatstone C. (1838). On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philosophical Transactions of the Royal Society of London, 128, 371–394. [CrossRef]
Figure 1
 
Reflectance, head motion, and stereo viewing. (A) Head motion and reflectances of (top) a glossy surface and (bottom) a matte surface. When the observer's eye is looking in the direction of the specular reflection of a glossy surface, the luminance of the surface point is high. Once the eye moves out of the specular direction, luminance decreases immediately. In contrast, when viewing a matte surface, the luminance of the surface is unaffected by the location of the eye. Thus, it is theoretically possible to distinguish a glossy surface from a matte one. (B) Stereo viewing and reflectances of (top) a glossy surface and (bottom) a matte surface. The luminance of the surface for the eye looking in the direction of the specular reflection of the glossy surface is much higher than that for the other eye not looking in the specular direction. In contrast, when viewing a matte surface, the luminance of the surface for one eye is equal to that for the other eye. Thus, it is theoretically possible to distinguish the glossy surface from the matte one.
Figure 2
 
Methods of the experiments. (A) The experimental apparatus. The IR emitter fixed to the stereo goggles was tracked by the IR position sensor during the experiments so that the stimulus presented on the CRT could be updated in real time according to the subject's head position. (B) An example of the stimulus used in the experiments (shown here reduced in size). (C) The dynamic and static stimuli used in Experiment 1. The dynamic stimulus changed over time in luminance and in 2D shape on the monitor depending on the observer's head position, simulating a stationary surface in 3D space. The static stimulus did not change over time on the 2D computer monitor. (D) The stereo and non-stereo stimuli used in Experiment 2. The non-stereo stimulus presented to the left and right eyes was the image that would be seen from the midpoint between the two eyes.
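The update logic in panels (A) and (D) can be sketched as a per-frame routine; `track_head` and `render_view` below are hypothetical stand-ins for the tracker and renderer (the original apparatus software is not described at this level), but the stereo/non-stereo distinction follows the caption: in the non-stereo condition both eyes receive the image rendered from the midpoint between the eyes.

```python
import numpy as np

def head_coupled_frame(track_head, render_view, stereo=True, ipd=0.063):
    """One frame of head-coupled (and optionally stereo) stimulus update.

    track_head() returns the tracked head (midpoint-of-eyes) position;
    render_view(eye_pos) renders the simulated surface as seen from
    eye_pos. For the non-stereo condition, both eyes receive the image
    rendered from the midpoint between the eyes.
    """
    head = np.asarray(track_head(), dtype=float)
    offset = np.array([ipd / 2.0, 0.0, 0.0])  # half the interocular distance
    if stereo:
        return render_view(head - offset), render_view(head + offset)
    cyclopean = render_view(head)  # midpoint image shown to both eyes
    return cyclopean, cyclopean
```

In the stereo condition each eye gets a view from its own position (so specular highlights differ between eyes); in the non-stereo condition the two eyes' images are identical by construction.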
Figure 3
 
Results of all the experiments. (A) Results averaged across subjects. The results of the rating tasks are shown in the top panel. The dashed line indicates the glossiness of the dynamic (in Experiments 1 and 3) and stereo (in Experiments 2 and 4) stimuli, which equals ten. Results of the matching tasks are shown in the bottom panel. The dashed line indicates the level at which the specular reflectance of the dynamic and static stimuli (Experiments 1 and 3), or of the stereo and non-stereo stimuli (Experiments 2 and 4), would be equal. Error bars indicate ±1 SEM. (B) Results of the individual subjects. Double and single asterisks indicate statistically significant differences between the data levels (bars) and the levels of the dashed lines at p < 0.01 and p < 0.05, respectively (one-tailed t-test). Of the 56 bars, 54 differed significantly from the dashed lines.
Figure A1
 
A model the human visual system could use to estimate the reflectance distribution function across reflection directions. The difference (d_max′) in reflectance between the peak and the bottom of the function should be larger than or equal to d_1,2′, the reflectance difference measured between the two viewpoints. If the two viewpoints are the two eyes, the difference (D) between the two viewpoints in angular distance from the peak of the distribution is smaller than or equal to the vergence angle, which can be estimated. Thus, from the estimate D′ of D and from d_1,2′, together with a model function (e.g., a Gaussian distribution plus a constant), it can be inferred that the angular extent of the reflectance function (its standard deviation, for instance) is smaller than or equal to a certain value, because the maximum slope of the function must be at least as steep as d_1,2′/D′.
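The bound described in the caption can be made concrete. Assuming the Gaussian-plus-constant model R(θ) = a·exp(−θ²/(2σ²)) + c mentioned as an example, the steepest slope of R has magnitude a·exp(−1/2)/σ (attained at θ = σ). By the mean value theorem, a reflectance difference of d_1,2′ across an angular separation D′ requires a slope of at least d_1,2′/D′ somewhere, which yields an upper bound on σ. This sketch uses an assumed lobe amplitude a; the paper does not specify numerical values.

```python
import math

def sigma_upper_bound(d_12, D, a):
    """Upper bound on the angular extent sigma of a Gaussian-plus-constant
    reflectance lobe R(theta) = a*exp(-theta**2 / (2*sigma**2)) + c.

    The steepest slope of R is a*exp(-0.5)/sigma (at theta = sigma).
    A reflectance difference d_12 across an angular separation D requires,
    by the mean value theorem, a slope of at least d_12/D somewhere, so
    a*exp(-0.5)/sigma >= d_12/D, i.e. sigma <= a*D / (d_12*sqrt(e)).
    """
    return a * D / (d_12 * math.sqrt(math.e))

# Example (illustrative numbers): a reflectance difference of 0.3 between
# viewpoints 2 degrees apart, with lobe amplitude a = 1, bounds sigma at
# roughly 4 degrees -- a sharp lobe, consistent with a glossy surface.
print(sigma_upper_bound(0.3, 2.0, 1.0))
```

Note the direction of the inference: a large interocular (or motion-induced) reflectance difference relative to the viewpoint separation forces a small σ, i.e., a sharply peaked specular lobe, which is what distinguishes a glossy surface from a matte one.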
Supplementary Figure 1
Supplementary Figure 2