Open Access
Article | October 2019
Luminance edge is a cue for glossiness perception based on low-luminance specular components
Author Affiliations
  • Hiroaki Kiyokawa
    Department of Informatics, Yamagata University, Yonezawa, Japan
    [email protected]
  • Tomonori Tashiro
    Department of Informatics, Yamagata University, Yonezawa, Japan
    [email protected]
  • Yasuki Yamauchi
    Department of Informatics, Yamagata University, Yonezawa, Japan
    [email protected]
  • Takehiro Nagai
    Department of Informatics, Yamagata University, Yonezawa, Japan
    Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Japan
    [email protected]
Journal of Vision October 2019, Vol.19, 5. doi:https://doi.org/10.1167/19.12.5
Abstract

The visual system is considered to employ various image cues from an object image to perceive its glossiness. It has been reported that, surprisingly, we can perceive glossiness even for object images without specular highlights by relying on low-luminance specular components (Kim, Marlow, & Anderson, 2012). This type of perceptual glossiness is referred to as dark gloss. However, it is still unclear whether dark gloss is observed commonly across various objects, and what image features serve as cues for dark gloss. To address these issues, we performed several psychophysical experiments. First, we measured perceived glossiness for a number of computer-graphics object images with natural specular reflection components (Full condition) and for the same images without the high-luminance components of specular reflections (Dark condition). The results showed that dark gloss (glossiness perception in the Dark condition) was observed on almost all object images, although its strength differed considerably across images. We then measured, psychologically or computationally, several image features of the stimulus images, such as luminance edge number, recognizability of the reflection image, and some highlight-related features, and examined their relations to perceived glossiness with a multiple regression analysis. The results demonstrated that, among the measured features, luminance edge number was most strongly related to glossiness scores, but only for object images with strong dark gloss. These results suggest that luminance edges are an effective cue for dark gloss under certain stimulus conditions.

Introduction
When we see objects in the real world, we perceive their various surface qualities, such as glossiness, effortlessly. Retinal images are created mainly based on three physical factors of visual scenes: the objects' shapes, the optical properties of the objects' surfaces, and the illumination environment around the objects. Surface quality perception can be considered a function of the visual system to estimate the optical properties of objects from retinal images. However, from the perspective of computational theory, the inverse calculation of optical properties from retinal images is an ill-posed problem; the interaction of these physical properties in creating retinal images is highly complex, and different physical situations can nevertheless yield completely identical retinal images (Thompson, Fleming, Creem-Regehr, & Stefanucci, 2011). Consequently, the visual system must estimate the optical properties of object surfaces using cues about them embedded as heuristics in the retinal images, similar to the case of color constancy. 
Different kinds of heuristic cues for glossiness perception have been suggested; earlier studies proposed that the visual system estimates object glossiness from lower-order image statistics (Fleming, Dror, & Adelson, 2003; Motoyoshi, Nishida, Sharan, & Adelson, 2007; Motoyoshi & Matoba, 2012; Wiebel, Toscani, & Gegenfurtner, 2015). For instance, Motoyoshi et al. (2007) reported that the skewness of luminance and subband histograms in object images correlates well with perceived glossiness. They also observed that both manipulating luminance skewness in object images and adapting to luminance-skewed dot images changed perceived glossiness. Similarly, Wiebel et al. (2015) examined the relationships of various luminance statistics to perceived glossiness and reported that, when more diverse image sets were used as stimuli, the standard deviation of the luminance histogram (i.e., luminance contrast) was more informative for glossiness than luminance skewness.
However, perceived glossiness cannot be explained by lower-level image statistics alone, because images with different reflectance properties can yield the same statistical properties. For example, not only glossy objects but also images of matte surface objects can have positive histogram skewness (Anderson & Kim, 2009); clearly, image statistics alone do not guarantee surface glossiness. Therefore, not only simple image statistics but also more intricate image features must be involved. In particular, the roles of specular highlights—typically high-luminance regions on object surfaces—in glossiness perception have been frequently reported. For instance, Ferwerda, Pellacini, and Greenberg (2001) have reported that glossiness perception can be described well with two kinds of image features in specular highlights: contrast to diffuse reflections and distinctness of the surroundings. Marlow, Kim, and Anderson (2012) have indicated that other properties of specular highlights, such as coverage and depth derived from binocular disparity, are also strongly correlated with glossiness perception, and that the combination of these highlight-related image features is powerful enough to explain much of the perceived glossiness in their stimulus image sets. These properties of specular highlights are strongly tied to optics, and therefore it is reasonable that the visual system would rely on these features for glossiness perception. For instance, incident light is reflected strongly and sharply on surfaces with high specular reflectance and smoothness, leading to high contrast between specular highlights and diffuse components. Similarly, even during fixation at a certain point of an object surface, there is typically binocular disparity in specular reflection components, but not in diffuse ones (Wendt, Faul, & Mausfeld, 2008; Wendt, Faul, Ekroll, & Mausfeld, 2010). 
Low-luminance regions of specular reflections, not only specular highlights, have also been reported to contribute to glossiness (Fleming et al., 2003; Kim, Marlow, & Anderson, 2012). Smoothness of the microstructure on an object's surface narrows the reflectance lobe and therefore contributes to glossy impressions (Ferwerda et al., 2001). In addition, if the total amount of reflected light is the same, this smoothness also yields darker regions outside the specular highlights compared with a surface with rough microstructure, because most of the light is reflected only toward specular directions on smooth surfaces. Even though observers cannot directly identify where the darker regions of specular reflections lie on an object's surface, such dark regions should also have properties unique to specular reflections, such as mirrored reflections of the surroundings. Therefore, these low-luminance components may also be informative in estimating the reflectance properties of objects. Psychophysically, Kim et al. (2012) reported that human observers could perceive glossiness from images containing only low-luminance specular reflectance components, created by replacing the luminance values in specular highlight regions with those of completely matte objects (hereafter, we refer to this glossiness perceived from low-luminance regions as dark gloss). Their results imply that low-luminance regions, not only specular highlights, contain image features that act as cues for glossiness perception.
However, the precise mechanisms underlying dark gloss and its perceptual properties remain unclear; Kim et al. (2012) showed only the existence of dark gloss perception. The first issue requiring clarification is the generality of dark gloss across different objects. Because the stimuli of Kim et al. were rendered under a limited number of physical conditions, such as three-dimensional object shapes and illumination environments, it is unclear whether dark gloss is observed on a variety of objects or on only those with specific physical or image properties. The second issue is which image features act as cues for dark gloss. Kim et al. claim that image statistics cannot account for dark gloss, based on the fact that dark gloss stimuli appeared much glossier than the matte object images, even when their luminance mean, contrast, and skewness were equalized. Later studies have provided several other candidates for dark gloss cues. Kim, Tan, and Chowdhury (2016), for instance, have suggested that luminance edges of specular reflection components like those in the mirrored reflection of the surroundings may play a role in glossiness perception, independent of image statistics or specular highlights, and reported that adaptation to luminance contours that are geometrically correlated with luminance edges on images of mirrorlike objects decreases their perceived glossiness. Because luminance edges exist in both specular highlights and low-luminance regions, they may be considered a potential cue for dark gloss. However, to our knowledge, no studies have directly investigated the relationship between dark gloss and luminance edges. 
The present study addresses two issues regarding dark gloss. First, we investigate the generality of dark gloss by enlarging the stimulus set compared with Kim et al. (2012). Second, we investigate whether the luminance edges of specular reflection components are truly an effective cue for dark gloss. To explore these issues, we conducted a psychophysical experiment and analyzed the results based on the physical and psychological features of the stimulus images. In the experiment, observers performed a glossiness rating task on a number of computer-graphics (CG) images with various shapes, reflectance properties, and illumination maps. We quantified the influence of low-luminance specular components on perceived glossiness by comparing glossiness ratings for stimuli with and without specular highlights; if low-luminance regions are a dominant cue for glossiness perception, the glossiness scores should remain constant regardless of the highlight condition. We then quantified different kinds of physical and psychological image features of the stimuli, such as the number of luminance edges, recognizability of the reflection image, and several perceptual features of specular highlights. Finally, we performed a multiple regression analysis of glossiness scores based on these image features to determine which best explain dark gloss. Our findings indicate that luminance edges are most relevant to glossiness perception for object images in which the specular highlights are not effective. In contrast, they contribute little to glossiness perception for object images in which specular highlights dominate perceived glossiness. These results suggest that luminance edges are an important cue for dark gloss, and that their effectiveness changes dramatically with the effectiveness of specular highlights.
Main experiment: Glossiness rating
In this experiment, we examined the generality of the results of Kim et al. (2012) regarding dark gloss across different types of stimuli. We rendered numerous object images with various shapes, reflectance properties, and illumination environments, and from these created two types of stimulus images: those with specular highlights (the Full condition) and those without (the Dark condition). Observers rated the glossiness of these images, and we compared the glossiness scores between the two conditions. The generality of dark gloss would be supported by glossiness scores in the Dark condition comparable to those in the Full condition for all the images. 
Methods
Observers
Ten male observers participated in the experiment. One participant was an author (HK) of the present study. All had normal or corrected-to-normal visual acuity. All experimental protocols were approved by the ethical committee of the Faculty of Engineering, Yamagata University, and followed the Code of Ethics of the World Medical Association (Declaration of Helsinki). Written informed consent was obtained from all participants. 
Apparatus
All stimuli were generated using a desktop personal computer (Vostro 3900, Dell; Intel Core i5-4460; GeForce GTX 745; Ubuntu 14.04 LTS) and presented on a 27-in. LCD monitor (ColorEdge CX271-CN, EIZO; 2,560 × 1,440 pixels). The experimental procedures were controlled on the computer using MATLAB (MathWorks, Natick, MA) and Psychtoolbox 3.0 (Brainard, 1997). The gamma properties and spectral distributions of the monitor were measured with a colorimeter (ColorCAL II, Cambridge Research Systems, Rochester, UK) and a spectral photometer (SpectroCAL, Cambridge Research Systems), respectively, to calibrate luminance. Observers responded using a trackball connected to the computer. Stimuli were viewed binocularly in a dark room, with observers at a distance of approximately 57 cm from the monitor. 
Stimuli
An example of a stimulus presented to the observers is shown in Figure 1. The display consisted of a test stimulus, five reference stimuli, and an evaluation axis. The test and reference stimuli were CG images. Observers rated the perceived glossiness of the test stimulus with reference to the glossiness of the reference stimuli. The evaluation axis was used to record observers' responses. 
Figure 1. Example stimulus. The top object image is the test stimulus. The bottom five objects are the reference stimuli. Observers rated the perceived glossiness of the test stimulus by moving a red circle on the evaluation axis, shown at the center.
All test stimuli were images of rock-shaped objects composed of 6,146 vertices, with their surfaces smoothed using the built-in smoothing function in Blender 2.77a, an open-source CG software program (Blender Foundation, 2016). The geometries, object shapes, and camera position were all configured with this software. Three-dimensional surface shapes were generated using Blender's displacement algorithm, which transforms a sphere into a random bumpy shape by altering vertex heights according to the luminance values of a random cloud-pattern texture. 
The object images were rendered with Mitsuba (Jakob, 2010) and RenderToolbox3 (Heasly, Cottaris, Lichtman, Xiao, & Brainard, 2014). The Ward model (Ward, 1992) was used to describe the surface reflection properties. This model has three main parameters: specular reflection ρs, diffuse reflection ρd, and surface roughness α. The images were rendered under the different physical parameters shown in Table 1 to increase the diversity of the test stimuli. The effects of these parameters on the resultant images are summarized in Appendix A. The specularity level was a parameter of our own and does not exist in the rendering software. It determines the balance between specular and diffuse reflectance components, in the same manner as Kim et al. (2012), according to
\begin{equation}\tag{1} I(x,y) = A \times \rho_s(x,y) + (1 - A) \times \rho_d(x,y), \end{equation}
where I is the luminance of a given image pixel (x, y), ρs is the luminance of the specular component (luminance value for an object whose diffuse and specular reflectance are 0.0 and 1.0, respectively), ρd is the luminance of the diffuse component (luminance value of an object whose diffuse and specular reflectance are 0.4 and 0.0, respectively), and A is the specularity level. From this rendering procedure, we created a total of 1,296 object images. The image size was 400 × 400 pixels, and the object regions occupied an area of approximately 5.2° of visual angle in width and height.  
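As an illustration, Equation 1 amounts to a per-pixel linear blend of two pre-rendered luminance images. A minimal sketch in Python is shown below; the array and function names, and the assumption that the specular-only and diffuse-only renderings are available as separate images, are ours, not part of the original rendering pipeline.

```python
import numpy as np

def mix_specular_diffuse(spec_img, diff_img, A):
    """Blend per Equation 1: spec_img is the luminance image of an object
    with diffuse/specular reflectance 0.0/1.0, diff_img that of an object
    with 0.4/0.0, and A is the specularity level in [0, 1]."""
    return A * spec_img + (1.0 - A) * diff_img

# Example: a half-specular stimulus (hypothetical 400 x 400 luminance arrays)
# stimulus = mix_specular_diffuse(spec_img, diff_img, A=0.5)
```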
Table 1. Physical parameters in rendering and their values for test stimuli.
After creating the test object images, we modified them to produce two conditions with respect to the presence or absence of specular highlights: the Full and Dark conditions, respectively. In the Full condition, the rendered images were used as stimulus images with no modification. In the Dark condition, the rendered images were modified as follows. First, the luminance values of a matte image, created by setting A in Equation 1 to zero, were compared pixel by pixel with those of the corresponding original rendered image (the full image). Second, pixels whose luminance values were lower in the matte object image than in the full image were identified as highlight pixels. Finally, the luminance values of all highlight pixels in the full image were replaced with those of the matte object image. Thus, in the Dark condition the luminance values of specular highlights were replaced by those of the corresponding pixels in the matte object images, whereas the low-luminance regions were not modified. From this procedure, we created 2,592 test stimuli (1,296 objects in each of the Full and Dark conditions). Some examples of the test stimuli are shown in Figure 2. Note that observers cannot accurately identify the highlight pixels in our stimuli, because these pixels were defined on the basis of matte object images that the observers never saw directly. In other words, the highlight pixels do not necessarily correspond to perceptual highlight regions. A sketch of this pixel-replacement procedure is given below.
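A minimal sketch of the Dark-condition construction, assuming the full and matte renderings are available as luminance arrays of the same size (the function and variable names are ours):

```python
import numpy as np

def make_dark_condition(full_img, matte_img):
    """Replace highlight pixels of the full image with the matte image.
    A pixel counts as a highlight pixel when the matte (A = 0) rendering is
    darker than the full rendering at that location; all other pixels,
    including the low-luminance specular regions, are left untouched."""
    highlight = matte_img < full_img        # highlight-pixel mask
    dark_img = full_img.copy()
    dark_img[highlight] = matte_img[highlight]
    return dark_img
```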
Figure 2. Examples of test stimuli in the Dark (left) and Full (right) conditions.
Finally, to prevent simple image statistics from acting as cues for perceived glossiness (Motoyoshi et al., 2007; Wiebel et al., 2015), the mean luminance and root mean square (RMS) contrast of the stimulus images were equalized across all samples within each of the Dark and Full conditions. The mean luminance was 14.2 cd/m2 in the Dark condition and 17.5 cd/m2 in the Full condition; the RMS contrast was 0.88 in the Dark condition and 1.13 in the Full condition. Pixels whose luminance fell outside the monitor's range (minimum 0.51 cd/m2, maximum 79.9 cd/m2; 8.5% of pixels across all images) were clipped to the nearest limit.
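One plausible way to implement this equalization and clipping is sketched below. The paper does not specify the exact normalization, so the linear rescaling, the definition of RMS contrast as the standard deviation divided by the mean, and all names here are our assumptions.

```python
import numpy as np

def equalize_mean_and_rms(lum_img, target_mean, target_rms,
                          lum_min=0.51, lum_max=79.9):
    """Linearly rescale a luminance image so its mean and RMS contrast
    (std / mean) match the target values, then clip out-of-range pixels
    to the monitor's minimum and maximum luminance."""
    mean, std = lum_img.mean(), lum_img.std()
    target_std = target_rms * target_mean
    out = (lum_img - mean) / std * target_std + target_mean
    return np.clip(out, lum_min, lum_max)

# Example targets from the text for the Dark condition:
# equalized = equalize_mean_and_rms(lum_img, target_mean=14.2, target_rms=0.88)
```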
The reference stimuli were generated similarly to the test stimuli, but with a novel shape and an outdoor illumination map, Doge. The five levels of glossiness were controlled by the combination of specularity level A and surface roughness α in the Ward model: (A, α) = (0, 0.001), (0.25, 0.01), (0.5, 0.02), (0.75, 0.1), and (1.0, 0.3). These parameters were empirically determined such that perceived glossiness was equally scaled. The same reference stimuli were used for the Dark and Full conditions. All reference stimulus images were captured with a frontal camera position. 
Procedure
We adopted a simple rating task to measure perceived glossiness of the stimuli. In each trial, a test stimulus was presented to the observer, as shown in Figure 1. During stimulus observation, the observer rated the perceived glossiness of the test stimulus on a 5-point scale (0 to 4) by moving the red circle on the evaluation axis using the trackball. The rated value was defined as the glossiness score. Before starting the experiment, observers were instructed to view the entire object region, not only specific local regions, because it has been suggested that without such instruction observers tend to rate glossiness by attending only to local regions such as specular highlights (Kim et al., 2012). Each observer completed eight sessions of 324 trials, rating all stimuli once in random order across the eight sessions.
Results and discussion
Figure 3a shows the relationship of the glossiness scores in the Dark and Full conditions averaged across the observers. The glossiness scores appear to lie around a diagonal line, indicating that they correlated strongly between the two conditions. The comparability of the glossiness scores in the Dark and Full conditions for most images indicates that glossiness perception does not depend solely on specular highlights, and that there are instead other effective cues for glossiness in the regions common across the two conditions—namely, in the low-luminance regions. This supports the generality of dark gloss reported by Kim et al. (2012) across different kinds of stimuli. 
Figure 3. (a) Relationship of glossiness scores between the Full and Dark conditions. (b) Relationship of glossiness scores between the Dark and Matte conditions. Each plot shows the glossiness score for an object. The dashed diagonal line shows scores equal between the conditions.
However, the effectiveness of the low-luminance regions in glossiness perception appeared to differ somewhat across the images. To test whether the RMS differences from the diagonal line differed significantly from zero, we performed a nonparametric bootstrapping test with 10,000 repetitions under the null hypothesis that the rating scores were the same between the Dark and Full conditions. The RMS differences were significantly larger than zero (p < 0.001), demonstrating that the rating scores differed between the Dark and Full conditions for at least some samples. As an overall trend, a number of plots lie below the diagonal line. Statistically, a paired t test on the differences in glossiness scores between the Full and Dark conditions also showed that the mean score in the Full condition was significantly higher (by 0.14) than that in the Dark condition (p < 0.001). This indicates that perceived glossiness was higher in the Full condition than in the Dark condition for many object images, suggesting a contribution of specular highlights to the perceived glossiness of these images. This trend is intuitively plausible, considering the major effect of specular highlights suggested by previous studies (e.g., Marlow et al., 2012). However, other plots show the apparent opposite trend: Glossiness scores seem higher in the Dark condition than in the Full condition. To test whether scores in the Dark condition were significantly higher than those in the Full condition, we performed a one-sided two-sample t test with the Holm correction for each of the 1,296 samples. No sample reached significance. Nevertheless, the scores for these samples were at least comparable between the Dark and Full conditions. These plots indicate that the perceived glossiness of certain object images was not impaired despite the absence of specular highlights. Although this trend seems strongly counterintuitive given the well-known impact of specular highlights, it is reasonable to suppose that, at least for certain object images, the low-luminance regions contributed to glossiness perception as much as, or more than, the specular highlights.
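For illustration, the bootstrap test on the RMS deviation from the diagonal could be implemented roughly as below. The paper does not detail its resampling scheme, so the per-observer rating matrices, the label-swapping procedure, and all names are our assumptions.

```python
import numpy as np

def bootstrap_rms_test(ratings_full, ratings_dark, n_boot=10000, seed=0):
    """Test whether the RMS difference between the mean Full and Dark
    glossiness scores exceeds what is expected under the null hypothesis
    that the two conditions are exchangeable. `ratings_full` and
    `ratings_dark` are hypothetical (observers x images) rating matrices."""
    rng = np.random.default_rng(seed)
    rms = lambda f, d: np.sqrt(np.mean((f.mean(axis=0) - d.mean(axis=0)) ** 2))
    observed = rms(ratings_full, ratings_dark)
    null_dist = np.empty(n_boot)
    for i in range(n_boot):
        # swap the condition label independently for each observer-image pair
        swap = rng.random(ratings_full.shape) < 0.5
        f = np.where(swap, ratings_dark, ratings_full)
        d = np.where(swap, ratings_full, ratings_dark)
        null_dist[i] = rms(f, d)
    p_value = np.mean(null_dist >= observed)
    return observed, p_value
```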
The comparison of glossiness scores with diffuse-only stimuli is also important for validating the effects of low-luminance specular components; if these components contribute to perceived glossiness, stimuli with only diffuse components should appear far less glossy than those in the Full and Dark conditions. We addressed this issue in an additional rating experiment. The stimuli were the same as those in the Full condition except that the specularity level was fixed at zero; this condition was referred to as the Matte condition. The mean luminance and RMS contrast were equalized across the images at 11.8 cd/m2 and 0.70, respectively. Four men, including one of the authors (HK), and one woman participated as observers. The other procedures were the same as in the main experiment, although the sessions of this additional experiment were conducted separately from the original experiment. The relationship of glossiness rating scores between the Dark and Matte conditions is shown in Figure 3b. Most of the scores in the Matte condition were very close to zero, in contrast to the Dark condition. This indicates that the low-luminance specular components are required to perceive glossiness in the Dark condition.
There is a possibility that our results were contaminated by artifacts of the image properties of our stimuli. Some object images had black regions on their bottom side, as shown in the reference images in Figure 1. These black regions correspond to areas not covered by the illumination-map images. However, in an additional experiment we confirmed that these black regions do not appear to have affected observers' responses. Details of this additional experiment are described in Appendix B.
To summarize, the contribution of low-luminance regions to perceived glossiness, reported by Kim et al. (2012), was confirmed in most images in our study. However, the effectiveness of low-luminance regions on glossiness perception appeared to differ across the samples; the strategy of the visual system for perceiving glossiness may vary depending on some features of object images. 
Relationships of perceived glossiness with image features and rendering parameters
Our second aim was to determine which image features act as cues for dark gloss. In this section, we describe the relationship of perceived glossiness with image features obtained from the stimuli, and with rendering parameters used to generate the stimuli. 
Definition of highlight dependency
The contribution of cues in the low-luminance regions to perceived glossiness may be more evident when observers do not rely on specular highlights in making their judgment. Thus, before analyzing image features in the stimulus, we defined an index of highlight dependency (HD), representing the dependence of glossiness perception on specular highlights. Because the presence or absence of specular highlights differentiated the stimuli in the Full and Dark conditions in our experiment, the differences in glossiness scores between these conditions should reflect the effectiveness of the specular highlights on glossiness perception. Therefore, HD for each image was calculated according to the following equation:  
\begin{equation}\tag{2} \mathrm{HD} = G_{\mathrm{full}} - G_{\mathrm{dark}}, \end{equation}
where Gfull and Gdark are glossiness scores in the Full and Dark conditions, respectively. A low HD indicates that the contribution of highlights to perceived glossiness was small; in other words, the contribution of low-luminance regions should have been strong. There were large variations in HD values across the images, as expected from Figure 3a; for instance, the minimum and maximum HD values for all the object images were −0.99 and 1.73, respectively. Some images with low and high HD values are shown in Figure 4. Readers may find that the differences in perceived glossiness between the Dark and Full conditions appear much larger in high-HD images than in low-HD images. More examples of low- and high-HD images are shown in Appendix C.  
Figure 4. Example samples with the (a) lowest and (b) highest highlight dependency values. Upper and lower rows in each group show stimuli in the Full and Dark conditions, respectively.
Image features
We measured various types of image features from the stimulus images. Several image-related features in the low-luminance regions may act as cues for perceived glossiness. First is the psychological recognizability of surrounding environments in specular reflection components (hereafter, reflection-image recognizability). Clearly, reflection-image recognizability on object surfaces should be strongly tied to perceived glossiness (e.g., Hunter, 1937). In addition, reflection-image recognizability has been reported to be derived even from low-luminance regions (Fleming et al., 2003). Thus, it was valuable to examine the direct effects of reflection-image recognizability on the glossiness scores in our experiment. 
Second, we examined luminance edges. Glossy surfaces with high specular reflectance and small roughness typically contain more luminance edges, attributable to specular reflection components such as reflected images of the surroundings, than do matte surfaces, as shown in Figure 5. The number and contrast of luminance edges, which are also present in low-luminance regions, may indirectly indicate reflection-image recognizability and may thus act as a heuristic cue for perceived glossiness, although how observers distinguish specular-derived edges from other types of edges remains a critical, unresolved issue. The possibility that luminance edges act as a cue for glossiness has been raised in previous research (Kim et al., 2016).
Figure 5. Examples of stimulus images in the Matte and Dark conditions and their luminance edges extracted with a Laplacian filter.
To test the effectiveness of these potential cues, we measured some image features: a) reflection-image recognizability and b) luminance edges. Additionally, because specular highlights should have had a strong impact in the Full condition, we also measured c) several image features relating to specular highlights (e.g., Marlow et al., 2012) to analyze the results in the Full condition. Luminance edges were calculated from image analysis; the other features were measured in psychological experiments. 
Reflection-image recognizability
Reflection-image recognizability is difficult to calculate from image analysis directly. Therefore, we measured it psychologically for all the object images. 
The apparatus was the same as in the main experiment. Eight male and two female observers participated in the experiment. One male participant was an author (HK) of the present study. All observers had normal or corrected-to-normal visual acuity. The test stimuli were the same as in the main experiment, whereas the reference stimuli were slightly different: The specularity level was fixed at 0.5, and only surface roughness differed among the five reference stimuli, in the same range as in the main experiment. The objects were captured from an obliquely downward camera position.
The procedure was also similar to the main experiment. The observers rated how easily they could estimate the surrounding environment from each object image, with reference to the reference stimuli, on a 5-point scale. The same evaluation axis as in the main experiment was used for the responses. The rating task was performed for the stimuli of both the Dark and Full conditions. Each observer performed eight sessions of 324 trials. The order of the test stimuli in a session was randomly determined. 
The rated values were averaged across all observers, and these average values were defined as an index of reflection-image recognizability. 
Luminance edge number
Luminance edges were extracted from all the test stimuli according to the following steps. First, we applied a 5-pixel × 5-pixel Laplacian filter to the luminance images of the test stimuli, which had been smoothed with a two-dimensional Gaussian filter to avoid detection of microscopic noise as edges. The two-dimensional Gaussian function g was defined as  
\begin{equation}\tag{3} g(dx,dy) = \frac{1}{2\pi\sigma^2}\exp\left\{\frac{-\left(dx^2 + dy^2\right)}{2\sigma^2}\right\}, \end{equation}
where dx and dy are the distances from the pixel of interest along the horizontal and vertical axes, respectively, and σ is the standard deviation, which was fixed at 1. Luminance edges were then extracted from the output of the Laplacian filter using the zero-cross method. The edge-detection threshold was 5 luminance units: Edge pixels were defined as pixels at which the Laplacian filter output had the opposite sign to that of adjacent pixels and differed from the adjacent pixels by more than 5 units. We defined the number of edge pixels as the luminance edge number. Note that this procedure extracts both specular-derived edges and other types of edges. A sketch of the procedure is given below.
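The edge-counting procedure could be implemented roughly as follows. The specific 5 × 5 Laplacian kernel and the zero-crossing bookkeeping are our assumptions; the paper specifies only the filter size, the Gaussian σ, and the threshold.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def luminance_edge_number(lum_img, sigma=1.0, threshold=5.0):
    """Count luminance-edge pixels: Gaussian smoothing, a 5 x 5 Laplacian
    filter, then zero-crossing detection with a luminance threshold."""
    smoothed = gaussian_filter(lum_img.astype(float), sigma)
    lap_kernel = np.array([[0, 0, 1, 0, 0],        # assumed 5 x 5 Laplacian
                           [0, 1, 2, 1, 0],
                           [1, 2, -16, 2, 1],
                           [0, 1, 2, 1, 0],
                           [0, 0, 1, 0, 0]], dtype=float)
    lap = convolve(smoothed, lap_kernel)
    edges = np.zeros(lap.shape, dtype=bool)
    # zero crossings: sign change relative to the right/bottom neighbour,
    # with a difference exceeding the threshold
    for dy, dx in [(0, 1), (1, 0)]:
        a = lap[:lap.shape[0] - dy, :lap.shape[1] - dx]
        b = lap[dy:, dx:]
        cross = (np.sign(a) != np.sign(b)) & (np.abs(a - b) > threshold)
        edges[:lap.shape[0] - dy, :lap.shape[1] - dx] |= cross
    return int(edges.sum())
```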
Highlight-related features
Various types of specular highlight features on still object images have been suggested to relate to perceived glossiness, such as contrast, sharpness, coverage, and depth from binocular disparity (Marlow et al., 2012). We psychologically measured three highlight-related features—contrast, sharpness, and coverage—for all the object images in the Full condition. 
The experimental procedures were very similar to those for reflection-image recognizability. Seven male and two female observers, including an author (HK) of the present study, participated in the experiment. Only the reference stimuli differed from the reflection-image recognizability experiment. The objects of the reference stimuli were illuminated by a square area lamp instead of the illumination maps; this lamp was used to control the highlight-related features. The experiment included three tasks to rate the contrast, sharpness, and coverage of specular highlights. In the contrast and sharpness rating tasks, reference stimuli with five levels of the c and d parameters suggested by Ferwerda et al. (2001) were used. These parameters roughly represent different psychophysical dimensions of perceived glossiness and can be controlled using surface roughness and contrast between specular and diffuse reflectance. The parameters were set independently for the two tasks: For the contrast rating task, (c, d) = (0.1, 0.9), (0.2, 0.9), (0.3, 0.9), (0.4, 0.9), and (0.5, 0.9), and for the sharpness rating task, (c, d) = (0.48, 0.0), (0.48, 0.58), (0.48, 0.83), (0.48, 0.93), and (0.48, 0.97). In the coverage rating task, reference stimuli with five levels of highlight area, controlled by the size of the square area lamp, were used. Four of the reference stimuli were created with lamps of differing side lengths (1, 5, 10, or 15 m) in the Blender modeling space. The other reference stimulus was an image of an object with a specular reflectance of 0 illuminated by a square area lamp with a side of 10 m, and thus had no specular highlights. All reference stimulus images were captured by a camera with an obliquely downward position. 
In the three tasks, observers evaluated the contrast of highlights to the object body (contrast task), the perceived clarity of the highlight edges (sharpness task), and the perceived size of the highlights (coverage task). Prior to the experiment, we were concerned about confusion between highlight contrast and brightness in the contrast task. To avoid such confusion, we carefully instructed observers regarding the difference between contrast and brightness by showing some example images. The rating tasks were performed only for the images in the Full condition, as the stimuli in the Dark condition had no specular highlights. The observers performed 24 sessions in total (3 tasks × 8 sessions). 
The rating scores were averaged across all observers in each task. These averaged values were employed as indices of the three features of specular highlights. 
Simple correlation of image features with glossiness scores
To identify relationships among the image features used in the multiple regression analysis, we calculated simple correlations among them. The coefficients are shown in Figure 6. Most features appear to correlate with each other moderately. However, luminance edge number and reflection-image recognizability, which may arise commonly from mirrored reflection on object surfaces, exhibited only weak correlation. The moderate and weak correlations between these features suggest that they do not necessarily reflect the same information about the object surface qualities. 
Figure 6. Correlation coefficients between the image features.
We then examined correlations between glossiness scores and each image feature separately in the Dark and Full conditions. Figure 7a and 7b show the correlation coefficients in the Dark and Full conditions, respectively. In the Dark condition, only the coefficients of luminance edge number and reflection-image recognizability are shown, because we did not measure specular-highlight-related features in the Dark condition. As shown in Figure 7a, both luminance edge number and reflection-image recognizability correlated significantly with the glossiness scores in the Dark condition. Similarly, as shown in Figure 7b, all the features correlated significantly with the glossiness scores in the Full condition. These results indicate that all the image features measured can be considered candidate cues for glossiness perception. 
Figure 7. Correlation coefficients between glossiness score and each image feature in (a) the Dark condition and (b) the Full condition. Asterisks indicate statistical significance of the correlation according to a t test.
Multiple regression analysis with image features
The relative strengths of the relationships between glossiness scores and image features are informative in inferring the strategy of the visual system for perceiving glossiness, but cannot be examined adequately by the simple correlation analysis described in the previous section. To compare the relative impacts among the features, we further examined the relationship between glossiness and image features using standardized multiple regression analysis. 
In analyzing this relationship, it is likely that the effectiveness of the features for glossiness perception depends on the images. In particular, the effects of HD on the effectiveness of the features are intriguing. The HD values are considered to represent the effectiveness of low-luminance components for perceived glossiness relative to specular highlights; dark gloss should mainly be observed on low-HD images. Therefore, performing the regression analysis separately for different levels of HD is crucial to clarifying which image features in the low-luminance components contribute to glossiness.
Here, we adopted the sliding-window paradigm. In this paradigm, regression analysis is performed repeatedly while changing the target data to be analyzed step by step according to a certain continuous parameter (for details of the sliding-window paradigm, see Schulz & Huston, 2002; Nagai et al., 2015). In our case, the stimulus images to be analyzed were changed according to HD; only the images in a small range (the window) of HD were analyzed at each step. The width of the window was 0.4 units of HD, and the median HD of the window (i.e., the central point of the window) was changed from −0.79 to 1.53 in intervals of 0.01, for a total of 233 windows. The number of stimuli in each window is shown in Figure 8. We performed standardized multiple regression analysis in each window. From this analysis, variations in the relationship between the image features and glossiness score along HD could be visualized through the resulting regression coefficients. The regression coefficients for low-HD images, in which specular highlights were not influential, were of particular interest. 
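A minimal sketch of the sliding-window regression, assuming a feature matrix with one column per image feature (the window width, step, and HD range are taken from the text; the function and variable names are ours):

```python
import numpy as np

def sliding_window_regression(hd, features, gloss, width=0.4, step=0.01):
    """Standardized multiple regression within sliding windows of highlight
    dependency (HD). `features` is a hypothetical (n_stimuli x n_features)
    matrix; `hd` and `gloss` are vectors of length n_stimuli. Returns an
    (n_windows x n_features) array of standardized partial coefficients."""
    centers = np.arange(-0.79, 1.53 + step / 2, step)   # 233 window centers
    coefs = []
    for c in centers:
        in_win = np.abs(hd - c) <= width / 2
        X, y = features[in_win], gloss[in_win]
        # z-score predictors and response within the window
        Xz = (X - X.mean(axis=0)) / X.std(axis=0)
        yz = (y - y.mean()) / y.std()
        beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
        coefs.append(beta)
    return np.array(coefs)
```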
Figure 8. Number of stimuli in each window used in the sliding-window analysis. The horizontal axis shows the median highlight dependency within each window range, and the vertical axis shows the number of stimuli in each window.
Figure 9a shows the standardized partial regression coefficients calculated from the multiple regression analysis with the sliding-window paradigm in the Dark condition. The highlight-related features were excluded from the analysis as in Figure 7a. For the lowest HD window, in which low-luminance regions should have had relatively strong impacts on perceived glossiness, the regression coefficient of luminance edge number was 0.67, significantly larger than 0 (p < 0.001). However, the regression coefficients of luminance edge number gradually decreased with increased HD, and finally reached around zero. These results suggest that luminance edges are effective cues for perceived glossiness for images in which specular highlights are not effective. In contrast, the coefficients of reflection-image recognizability exhibited a roughly opposite trend: They were very small for the lowest HD window, but gradually increased with HD. The partial regression coefficients were 0.09 (p = 0.26) and 0.42 (p = 0.18) at the lowest and highest HD, respectively. These results imply that luminance edges, not reflection-image recognizability, are a dominant cue for glossiness for low-HD images, in which low-luminance regions should have been influential on glossiness perception. Therefore, the larger coefficient of reflection-image recognizability in Figure 7a is considered to mainly reflect the properties of high-HD images. Considering these results, in the low-HD images observers may have perceived glossiness directly from the luminance edges without recognizing mirrorlike reflections on the object surface. 
Figure 9. Standardized partial regression coefficients of different image features for glossiness scores according to multiple regression analysis in (a) the Dark condition and (b) the Full condition. Line colors denote image features.
Figure 9b shows the regression coefficients for the images in the Full condition. Whereas the coefficients of reflection-image recognizability were much smaller than in Figure 9a, the coefficients of luminance edge number showed a similar trend to that in Figure 9a: In low-HD windows, the coefficient of luminance edge number was largest among all image features, but decreased dramatically along with increased HD. In contrast, the coefficients of the highlight-related features were moderate for most HD windows, except for low HDs. These results suggest two important aspects regarding the effects of the image features. First, in accordance with our prediction, the effectiveness of image cues for glossiness perception changed along with HD. Second, luminance edges may provide more effective information for glossiness perception than the specular highlights on several object images even if they have specular highlights, as shown for the largest coefficients of luminance edge number for low-HD windows. Luminance edges seem to contribute more to glossiness than specular highlights, depending on the types of images. 
To summarize, luminance edges and reflection-image recognizability are candidate cues for dark gloss. In particular, luminance edges seem to be an influential cue for object images in which specular highlights do not have a strong impact on perceived glossiness. Furthermore, the regression coefficients of luminance edge number and reflection-image recognizability exhibited completely different trends, suggesting that luminance edges and reflection-image recognizability are separate cues for glossiness perception. Even for stimuli with specular highlights (the Full condition), the regression coefficients for luminance edge number were larger than those for the highlight-related features for images in which low-luminance regions were effective. These results suggest that the visual system relies on different image features—not only specular highlights, but also luminance edges and reflection-image recognizability—to perceive glossiness, depending on the image properties.
Multiple regression analysis with rendering parameters
It may also be meaningful to examine the relationship of glossiness scores to the rendering parameters, as well as to the image features. Although observers of course cannot access these parameters directly from the retinal images, parameters correlated with the glossiness scores may allow inferences about essential image cues for glossiness perception.
We also analyzed correlations between glossiness scores and each rendering parameter with the sliding-window paradigm. Since some of these parameters are not continuous variables, we did not employ Pearson's product-moment correlation. Instead, Spearman's rank correlations were calculated for shape frequency, strength, roughness, and specularity level, and correlation ratios were calculated for illumination map and camera position. The results are shown in Figure 10. Specularity level and surface roughness were much more strongly correlated with dark gloss than the other parameters; high specularity level and low surface roughness led to high glossiness scores for low-HD images. In addition, such correlations with glossiness scores were observed only for low-HD images, not for high-HD images. High-specularity and low-roughness surfaces are considered to lead to clear mirrorlike reflections of the surroundings. Therefore, image features relevant to mirrorlike reflections, such as specular-derived luminance edges, may be important for dark gloss but not for glossiness depending on specular highlights. 
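For the categorical parameters (illumination map and camera position), the correlation ratio can be computed as the proportion of variance explained by category membership; a sketch under our naming assumptions is shown below (Spearman's rho, used for the ordinal parameters, is available in scipy.stats).

```python
import numpy as np
from scipy.stats import spearmanr

def correlation_ratio(categories, values):
    """Correlation ratio (eta) between a categorical rendering parameter
    (e.g., illumination map) and glossiness scores: square root of the
    between-category sum of squares divided by the total sum of squares."""
    categories = np.asarray(categories)
    values = np.asarray(values, dtype=float)
    grand_mean = values.mean()
    ss_between = sum(
        values[categories == c].size
        * (values[categories == c].mean() - grand_mean) ** 2
        for c in np.unique(categories))
    ss_total = ((values - grand_mean) ** 2).sum()
    return np.sqrt(ss_between / ss_total)

# Ordinal parameters (e.g., specularity level, surface roughness):
# rho, p = spearmanr(specularity_level, gloss_scores)
```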
Figure 10. Correlation coefficients of rendering parameters for glossiness scores obtained from sliding-window analysis in (a) the Dark condition and (b) the Full condition. Line colors denote rendering parameters, shape frequency indicates spatial frequency of cloud-pattern textures, and strength indicates strength of displacement values for three-dimensional object shapes.
In summary, these correlations of glossiness scores with rendering parameters do not conflict with the idea that luminance edges contribute to the glossiness of certain object images.
General discussion
Generality of dark gloss
The first aim of the present study was to clarify the generality of dark gloss, reported by Kim et al. (2012), across different object images. To do this, we conducted a glossiness rating experiment using CG images generated with a broad range of rendering parameters as stimuli. Two conditions regarding specular highlights on the object surfaces were presented: the Full condition, in which specular highlights were naturally present, and the Dark condition, in which specular highlights were replaced with diffuse reflection components. The results demonstrated that glossiness scores in the Full and Dark conditions were comparable for most of the stimulus images, supporting the generality of dark gloss across different types of object images. Most previous studies of glossiness perception have focused mainly on the roles of specular highlights, not low-luminance components (e.g., Beck & Prazdny, 1981; Berzhanskaya, Swaminathan, Beck, & Mingolla, 2005; Marlow et al., 2012; van Assen, Wijntjes, & Pont, 2016). Our findings strongly suggest that future studies must take the role of low-luminance components into account to fully understand the mechanisms of glossiness perception.
However, the effectiveness of the low-luminance regions on glossiness perception was not constant across images. In our results, the differences in glossiness scores between the Dark and Full conditions exhibited large variations across the image samples, as shown in Figure 3. Most of the images in the Dark condition appeared somewhat less glossy than in the Full condition, whereas glossiness scores for some images in the Dark condition were comparable with those in the Full condition. This indicates that the effectiveness of low-luminance regions on glossiness perception relative to specular highlights depends on some physical or image properties of a stimulus. These candidate properties will be discussed later (see Stimulus conditions correlated with highlight dependency and luminance edge effectiveness). 
Luminance edges as image cues for dark gloss
Our second aim was to identify the image features that act as cues for dark gloss. To address this, we examined the relationship between glossiness scores and different types of image features. As described already, differences in glossiness scores between the Full and Dark conditions depended largely on the stimulus images. These differences can be considered an index of the effectiveness of highlights on glossiness perception, which we referred to as highlight dependency (HD). The results of multiple regression analysis for the Dark condition (Figure 9a) showed that among the tested image features, the regression coefficients of luminance edge number for glossiness scores were the highest for object images with low HD, for which specular highlights appeared not to contribute to glossiness perception, with low-luminance regions being more effective instead. These results suggest an impact of luminance edges as a cue for dark gloss in certain object images. This is in line with the suggestion of Kim et al. (2012; Kim et al., 2016) that luminance edges are a potential cue for glossiness. Additionally, the regression coefficients for edge number showed similar trends in both the Full and Dark conditions, as shown in Figure 9a and 9b, suggesting that luminance edges may be effective regardless of the presence or absence of specular highlights. 
What physical properties of glossy objects do the luminance edges reflect? Previous studies have reported that the human visual system estimates surface material properties based on simple image features. Some have suggested effects of pixel-based luminance statistics or subband image statistics (Motoyoshi et al., 2007; Motoyoshi, 2010; Sawayama, Adelson, & Nishida, 2017), whereas others have suggested the significance of appearance-related image features, such as several properties of specular highlights (e.g., perceptual contrast, sharpness, and coverage; Ferwerda et al., 2001; Marlow et al., 2012), and interactions between image features and three-dimensional object shapes (Kim et al., 2011; Marlow et al., 2011; Marlow & Anderson, 2016) as heuristics, instead of computing object reflectance properties in an inverse-rendering fashion. Similarly, luminance edges are also considered heuristics for some physical properties of glossy objects. Prior to our experiment, we considered luminance edges to be an image feature reflecting how mirrorlike object surfaces appear, as suggested by Kim et al. (2016). In the results for the Dark condition, however, reflection-image recognizability contributed very little to glossiness scores for object images with low HD, unlike luminance edge number, as shown in Figure 9a. In addition, the variation in trends of regression coefficients along with HD differed completely between luminance edge number and reflection-image recognizability. The apparent dissociation of these two features implies that luminance edges and reflection-image recognizability are essentially different cues for glossiness perception. Even if the physical sources of luminance edges are mirrored reflections, the visual system may directly perceive glossiness based on the luminance edges without recognizing what environments are reflected on the surface. However, particularly for objects with richer diffuse components than specular components, the simple number of luminance edges should also depend strongly on the three-dimensional shape of objects: The more complex the object shape, the more luminance edges the object images should have. Therefore, extracting only the luminance edges caused by mirrored reflections of the surroundings might better explain perceived glossiness. 
It should be noted that the contribution of luminance edges to glossiness should be highly stimulus dependent. In our results, luminance edges were not effective for all stimulus images; the regression coefficients of luminance edge number for glossiness scores gradually decreased with increases in HD. This trend suggests that the visual system may rely on different image cues for glossiness perception depending on some image or physical features that vary with HD. This point will be discussed in more detail in the next section. 
Finally, whether luminance edges themselves are essential for dark gloss should be discussed carefully. Our results showed that glossiness scores correlated positively with specularity and negatively with roughness for low-HD samples (Figure 10). This raises the possibility that luminance edges can serve as a cue for glossiness only when they reflect mirror-likeness. In other words, the perception of mirrorlike reflections may be more essential for dark gloss than the luminance edges themselves, even though the two should be highly correlated in our stimuli. This possibility cannot be tested with the present results. For instance, the reflection-image recognizability we measured in this study does not indicate mirror-likeness directly, because mirrorlike reflections can be perceived on a surface even when the reflected environment cannot be clearly recognized. To examine this possibility, future studies will need experiments in which mirror-likeness and luminance edges are decorrelated, for instance by carefully selecting stimulus sets, and in which both glossiness and mirror-likeness are measured psychophysically. 
Stimulus conditions correlated with highlight dependency and luminance edge effectiveness
Highlight dependency
Image features relating to HD, the difference in rating scores between the Full and Dark conditions, are an important remaining issue, because HD seems to relate to two aspects of glossiness perception: the dependence on low-luminance components and the effectiveness of luminance edges. Therefore, we attempted to clarify which image features are strongly related to HD through simple correlation analysis. 
First, we calculated correlation coefficients between HD and each of the five image features obtained earlier for the Full condition, such as luminance edge number. The results are shown in Figure 11a. Although the correlations were statistically significant for luminance edge number, highlight contrast, and highlight coverage, the coefficients were not very large for any feature, suggesting that these features alone cannot explain HD. We also calculated correlation coefficients of HD with pixel-based and subband (2, 4, 8, 16, and 32 c/image) image statistics, such as standard deviation, skewness, and kurtosis. In the subband analysis, Gaussian band-pass filters with a bandwidth of 1 octave were used to decompose the original images into subband images. The results are shown in Figure 11b. Again, the coefficients were statistically significant for many image statistics but not very large (the maximum correlation coefficient was 0.29, for the standard deviation of the 32 c/image subband). Therefore, the variation in HD does not seem to be explained by the factors examined in our analysis. 
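For illustration, the sketch below shows one way this correlation analysis could be implemented in Python: decompose an image with 1-octave Gaussian band-pass filters centered at 2–32 c/image, compute the subband standard deviation, skewness, and kurtosis, and correlate a chosen statistic with HD across stimuli. The exact filter parameterization and the variable names (`hd`, `images`) are assumptions made for this sketch only.

```python
import numpy as np
from scipy.stats import pearsonr, skew, kurtosis

def gaussian_bandpass(image, center_cpi, octave_bandwidth=1.0):
    """Band-pass filtering in the Fourier domain with a Gaussian profile on a
    log-frequency axis (roughly 1-octave bandwidth); the parameterization is
    an illustrative assumption."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None] * h          # vertical frequency, c/image
    fx = np.fft.fftfreq(w)[None, :] * w          # horizontal frequency, c/image
    radius = np.sqrt(fx ** 2 + fy ** 2)
    log_r = np.log2(np.maximum(radius, 1e-6) / center_cpi)
    gain = np.exp(-0.5 * (log_r / (octave_bandwidth / 2.0)) ** 2)
    gain[radius == 0] = 0.0                      # remove the DC component
    return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

def subband_statistics(image, centers=(2, 4, 8, 16, 32)):
    """Standard deviation, skewness, and kurtosis of each subband image."""
    stats = {}
    for c in centers:
        band = gaussian_bandpass(image, c).ravel()
        stats[c] = {"sd": band.std(), "skew": skew(band), "kurt": kurtosis(band)}
    return stats

# Correlating one subband statistic with highlight dependency across stimuli
# (`hd` and `images` are hypothetical per-stimulus arrays):
# r, p = pearsonr(hd, [subband_statistics(img)[32]["sd"] for img in images])
```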
Figure 11
 
(a) Correlation coefficients between highlight dependency and image features in the Full condition, obtained from the image features measurement. (b) Correlation coefficients between highlight dependency and subband (2, 4, 8, 16, and 32 c/image) image statistics in the Full condition. Blue bars represent luminance statistics of the original image, and the reddish bars show subband statistics with different central frequencies: sd = standard deviation, skew = skewness, kurt = kurtosis.
Subsequently, we examined the relationships between the physical parameters used for rendering and HD. We calculated the content ratio of each rendering parameter value, defined as the ratio of the number of stimuli with that parameter value to the total number of stimuli in each window. As an example, the relationship of HD to specularity level, which exhibited the most salient results among all the parameters, is shown in Figure 12. The results for the other parameters are shown in Appendix D. In Figure 12, the ratio of images with the high specularity level of 1.0 clearly increases as HD decreases; that is, specularity level showed a clear negative correlation with HD. Image features relevant to specular reflections, such as luminance edges from mirrored reflections, should be abundant in both the high- and low-luminance regions of object images created with the high specularity level. In this case, even if the specular highlights are removed, the remaining features in the low-luminance regions may contribute to glossiness. In contrast, image features relevant to specular reflections, especially those in low-luminance regions, are likely to be relatively weak in object images created with low specularity levels, and therefore such images retain only limited cues for perceived glossiness, such as highlight-related features. In this case, the specular highlights may govern glossiness judgment for these object images. With regard to this point, Sawayama and Nishida (2018) have reported a similar phenomenon (see their figure 22): When the intensity of specular highlights is lowered on object images consisting of diffuse and specular components (that is, rather low-specularity object images in our case), the objects appear as matte surfaces. However, this interpretation of the relevant image features seems inconsistent with the positive correlations of luminance edge number and the 32 c/image standard deviation with HD in Figure 11a and 11b. This discrepancy may be caused by our simplistic methods of luminance edge extraction and spatial-frequency analysis: The edge numbers and the 32 c/image contrast must contain both specular-derived and diffuse-derived components, and therefore they should not reflect the specularity level clearly. More sophisticated features related to specularity level are likely to be involved in determining HD. 
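A minimal sketch of the content-ratio computation is given below, assuming sliding HD windows over the stimulus set; the window width and step values here are placeholders rather than the settings used in our analysis.

```python
import numpy as np

def content_ratios(hd, specularity, window_width=0.2, step=0.05):
    """Content ratio of each specularity level within sliding highlight-
    dependency (HD) windows. Window width and step are illustrative
    placeholders."""
    levels = np.unique(specularity)
    medians, ratios = [], []
    for lo in np.arange(hd.min(), hd.max() - window_width + 1e-9, step):
        in_win = (hd >= lo) & (hd < lo + window_width)
        if not in_win.any():
            continue
        medians.append(np.median(hd[in_win]))
        # Fraction of stimuli at each specularity level within this window.
        ratios.append({lvl: float((specularity[in_win] == lvl).mean())
                       for lvl in levels})
    return medians, ratios
```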
Figure 12
 
Ratio of specularity levels for different highlight-dependency windows. The horizontal axis shows the median highlight dependency, and the vertical axis shows the content ratio of three specularity levels for each window. The line colors denote specularity level.
To summarize the results shown in Figures 11 and 12, HD (the effectiveness of low-luminance regions relative to specular highlights for glossiness perception) seems related to the intensity of specular components relative to diffuse components, rather than directly to our original candidate features or the subband statistics. In the visual system, image features reflecting these relative intensities, such as certain luminance edges or high-spatial-frequency components derived from specular components, may be utilized to determine the dependence on low-luminance regions. 
Luminance edge effectiveness
What factor determined the effectiveness of luminance edges among different object images? Figure 12 clearly shows that HD decreases with higher specularity levels. In addition, the regression coefficients of luminance edges were high only for low-HD images in Figure 9. Considering these facts, the effectiveness of luminance edges may also depend on the specularity level—the relative intensities of diffuse and specular reflection components. 
To test this possibility, we performed the same sliding-window regression analysis as before, separately for the three specularity levels (low, medium, and high). The results are shown in Figure 13. Note that there were no low-specularity images in the lowest HD windows. At low HDs, the regression coefficients of luminance edge number were the highest among the features for the high and medium specularity levels, similar to the original analysis in Figure 9, but not for the low specularity level. These results suggest that luminance edges may contribute to glossiness perception only for objects whose specular components are intense relative to their diffuse components. This difference in the relative intensities of specular and diffuse components corresponds, for instance, to glass and metal with weak diffuse components versus plastic and porcelain with strong diffuse components. Considering Figure 13, luminance edges should work well for the former materials but not for the latter. 
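The sliding-window regression can be sketched as below, assuming that standardized partial regression coefficients are obtained by ordinary least squares on z-scored features and scores within each HD window, restricted to one specularity level. The window width and step are illustrative assumptions; the minimum of 10 samples per window follows the exclusion criterion stated in the caption of Figure 13.

```python
import numpy as np

def standardized_coefficients(X, y):
    """Standardized partial regression coefficients: ordinary least squares
    on z-scored predictors and responses (no intercept needed after
    centering)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

def sliding_window_regression(hd, features, scores, spec, level,
                              window_width=0.2, step=0.05, min_n=10):
    """Regression of glossiness scores on image features within sliding HD
    windows, restricted to one specularity level. Window parameters are
    illustrative assumptions."""
    results = []
    for lo in np.arange(hd.min(), hd.max() - window_width + 1e-9, step):
        in_win = (hd >= lo) & (hd < lo + window_width) & (spec == level)
        if in_win.sum() < min_n:        # exclude sparsely populated windows
            continue
        beta = standardized_coefficients(features[in_win], scores[in_win])
        results.append((np.median(hd[in_win]), beta))
    return results
```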
Figure 13
 
Standardized partial regression coefficients of image features for glossiness scores according to multiple regression analysis for each specularity level: Dark condition—(a) low, (b) medium, and (c) high specularity; Full condition—(d) low, (e) medium, and (f) high specularity. Line colors denote image features. Broken lines denote highlight-related features. The highlight dependency windows containing fewer than 10 samples were excluded from analysis. An example image in each condition is also shown inset to each graph.
To summarize, the relative intensities of diffuse and specular components—our specularity parameter—may be strongly related to the effectiveness of luminance edges in glossiness perception. The factors determining the dependence on low-luminance specular components and the effectiveness of luminance edges seem to be common, to some extent. 
Possible effects of specular highlights in the Dark condition
The possible effects on perceived glossiness of a highlight-like impression in the Dark-condition images should be considered. The purpose of the Dark condition was to reduce the dependence of glossiness perception on specular highlights; to this end, the highlight regions were removed by replacing them with matte surfaces in our experiment. However, as shown in Figure 2, even the object images in the Dark condition appear, perceptually, to have highlights. If highlights were clearly perceived in the Dark-condition stimuli, they may still have governed glossiness judgments, contrary to our expectation prior to the experiment. Namely, our finding that glossiness scores did not differ dramatically between the Dark and Full conditions may have arisen from these perceptual highlights in the Dark condition. 
To check this possibility, we compared perceptual specular highlights between the Full and Dark conditions experimentally. The stimuli were 30 arbitrarily chosen object images from the main experiment: 15 images each from the Dark and Full conditions. We asked the observers to draw outlines of perceptual specular highlights on the object surfaces using the mouse cursor. We defined the overlapping area ratio (OAR), an index representing individual differences in perceptual specular highlights, as  
\begin{equation}\tag{4} \mathrm{OAR} = \frac{N_o}{N_t}, \end{equation}
where \(N_o\) is the number of pixels in the region common to (the conjunction of) the outlines across all observers and \(N_t\) is the number of pixels in the union ("logical sum") of the outlines drawn by the observers. We calculated OAR for each stimulus and highlight condition. If highlights were perceived in the Dark condition similarly to the Full condition, OAR should be comparable between the two conditions.  
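A minimal sketch of this computation, assuming each observer's outlined highlight region is stored as a binary mask of the same size as the stimulus image, is:

```python
import numpy as np

def overlapping_area_ratio(masks):
    """Overlapping area ratio (Equation 4): pixels common to all observers'
    outlined highlight regions divided by pixels in the union ("logical sum")
    of those regions. `masks` is a list of boolean arrays, one per observer."""
    intersection = np.logical_and.reduce(masks)   # conjunction across observers
    union = np.logical_or.reduce(masks)           # logical sum across observers
    n_o, n_t = intersection.sum(), union.sum()
    return n_o / n_t if n_t > 0 else float("nan")
```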
The results demonstrated a mean OAR of 4.7% in the Dark condition compared with 11.1% in the Full condition, indicating that individual differences in perceived specular highlights were larger in the Dark condition. In addition, the mean \(N_o\) of 665.8 in the Dark condition was significantly smaller than that in the Full condition, 2,582.5 (t test: p < 0.05), and \(N_o\) was larger in the Full condition than in the Dark condition for 14 of the 15 object images. These results suggest that highlight perception was clearly diminished in the Dark condition. Considering that the mean glossiness scores did not substantially differ between the two conditions despite these differences in perceived specular highlights, perceptual specular highlights do not necessarily seem to be a determining factor for perceived glossiness in the Dark condition. In other words, image features other than specular highlights seem to support glossiness perception more strongly in the Dark condition, even though the remaining perceptual highlights should still be effective to some extent. 
Limitations and future research
Effectiveness of specular highlights
Specular highlights likely have a much stronger impact on glossiness perception in real environments than was observed for our stimuli. A potential explanation for the weak contribution of the highlight-related features in our experiment is that our stimuli lacked important image features within the specular highlights. For instance, they lacked the binocular disparity and motion information unique to specular highlights, which many previous studies have suggested are effective for glossiness perception (Wendt et al., 2008; Sakano & Ando, 2010; Wendt et al., 2010; Marlow et al., 2012; Tani et al., 2013). Additionally, the luminances of specular highlights in our stimuli, with a maximum of 79.9 cd/m², were much lower than those of real objects because of the luminance limitations of our display. If higher luminance values were presented within the stimulus images using high-dynamic-range displays, the contributions of the highlights would likely be much more prominent than observed in our experiment. In summary, the contributions of specular highlights may have been understated in our study. 
Computational mechanisms
One crucial remaining issue is the computational algorithm for extracting from object images the "specular-derived" luminance edges, which may be relevant to HD and to the effectiveness of luminance edges for glossiness perception. As shown already, the effectiveness of luminance edges and HD appear to be determined, at least partly, by common image features related to specular-derived edges. Therefore, the visual system would have to dissociate specular-derived edges from other types of luminance edges, such as those created by diffuse reflection components following three-dimensional shape. Clearly, this dissociation cannot be achieved by simple spatial-frequency filtering; more sophisticated computations are likely to be involved. Although, to our knowledge, there have been no direct reports on how the human visual system extracts only specular-derived edges, some previous studies have reported image features relevant to the distinction between specular and diffuse components. It has been suggested that the visual system utilizes the magnitudes of luminance gradients as a cue for detecting specular reflections (Sawayama & Nishida, 2018). Similarly, albedo textures and specular reflections can be distinguished on the basis of luminance orientation fields (Kim et al., 2011; Marlow et al., 2011; Marlow & Anderson, 2016). This suggests the possibility that luminance gradients and orientation fields around luminance edges may help distinguish specular-derived edges from other types of edges. Examining computational algorithms that dissociate these different types of luminance edges remains future work. 
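As a hedged illustration of the kinds of local measurements that such a dissociation might draw on, the sketch below samples luminance gradient magnitude and orientation at edge pixels (cf. Sawayama & Nishida, 2018; Marlow et al., 2011); how these measurements would be combined into a specular-versus-diffuse classification is deliberately left open.

```python
import numpy as np
from scipy import ndimage

def edge_gradient_features(luminance, edge_map):
    """Gradient magnitude and orientation sampled at edge pixels. These are
    candidate inputs for separating specular-derived edges from shading- or
    texture-derived edges; no classification rule is implemented here."""
    lum = luminance.astype(float)
    gy = ndimage.sobel(lum, axis=0)          # vertical luminance gradient
    gx = ndimage.sobel(lum, axis=1)          # horizontal luminance gradient
    magnitude = np.hypot(gx, gy)             # local gradient strength
    orientation = np.arctan2(gy, gx)         # local orientation field
    return magnitude[edge_map], orientation[edge_map]
```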
Causality of factors for glossiness perception
It should be noted that the current finding that luminance edges are relevant to glossiness perception implies only a correlation, not a causal relationship. To clarify whether luminance edges are crucial for glossiness perception that depends on low-luminance regions, further experiments using stimuli with image-based manipulations will be necessary. For example, it would be valuable to investigate how artificially manipulating luminance edges in matte object images, such as emphasizing their contrast or adding artificial luminance edges, affects perceived glossiness. In such manipulations, some constraints on the spatial structure of the luminance edges must be considered. For instance, the spatial relationship between specular and diffuse reflection components may be crucial, since specular highlights must reportedly be spatially congruent with surface shading patterns for glossiness to be perceived (Kim et al., 2011; Marlow et al., 2011). Because the luminance edges we focused on were derived from specular reflection components, similar congruence with shading patterns may be required for glossiness perception. In addition, given the relationship between luminance edge effectiveness and HD in Figure 9, whether luminance edges are likely to be effective for a given image should be judged computationally before edge manipulations are applied, so that the manipulations actually take effect. 
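As one conceivable, untested manipulation of the first kind (emphasizing edge contrast), local luminance contrast around existing edges could be boosted with unsharp masking, as sketched below; the method and parameters are illustrative assumptions, not a validated manipulation, and the congruence constraints discussed above are not enforced here.

```python
import numpy as np
from scipy import ndimage

def emphasize_luminance_edges(luminance, amount=0.5, sigma=2.0):
    """Unsharp masking: boosts local luminance contrast around existing edges
    in a matte object image. `amount` and `sigma` are illustrative values."""
    lum = luminance.astype(float)
    blurred = ndimage.gaussian_filter(lum, sigma)
    sharpened = lum + amount * (lum - blurred)
    return np.clip(sharpened, 0.0, lum.max())
```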
Acknowledgments
This study was supported by JSPS KAKENHI 15K00372, 16H01658, and 18H04996 to TN. 
Commercial relationships: none. 
Corresponding author: Takehiro Nagai. 
Address: Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Japan. 
References
Anderson, B. L., & Kim, J. (2009). Image statistics do not explain the perception of gloss and lightness. Journal of Vision, 9 (11): 10, 1–17, https://doi.org/10.1167/9.11.10. [PubMed] [Article]
Beck, J., & Prazdny, S. (1981). Highlights and the perception of glossiness. Perception & Psychophysics, 30 (4), 407–410.
Berzhanskaya, J., Swaminathan, G., Beck, J., & Mingolla, E. (2005). Remote effects of highlights on gloss perception. Perception, 34 (5), 565–576.
Blender Foundation. (2016). Blender: A 3D modelling and rendering package (Version 2.77a) [Computer software]. Retrieved from www.blender.org
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10 (4), 433–436.
Debevec, P. (1998). Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In Cohen M. F. (Ed.), Proceedings of ACM SIGGRAPH 1998 (pp. 189–198). New York, NY: ACM.
Ferwerda, J., Pellacini, F., & Greenberg, D. P. (2001). Psychophysically based model of surface gloss perception. Proceedings SPIE Human Vision and Electronic Imaging, 4299, 291–301.
Fleming, R. W., Dror, R. O., & Adelson, E. H. (2003). Real-world illumination and the perception of surface reflectance properties. Journal of Vision, 3 (5): 3, 347–368, https://doi.org/10.1167/3.5.3. [PubMed] [Article]
Heasly, B. S., Cottaris, N. P., Lichtman, D. P., Xiao, B., & Brainard, D. H. (2014). RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research. Journal of Vision, 14 (2): 6, 1–22, https://doi.org/10.1167/14.2.6. [PubMed] [Article]
Hunter, R. S. (1937). Methods of determining gloss. Journal of Research of the National Bureau of Standards, 18 (1), 19–41.
Jakob, W. (2010). Mitsuba Renderer (Version 0.5.0) [Computer software]. Available from http://www.mitsuba-renderer.org
Kim, J., Marlow, P. J., & Anderson, B. L. (2011). The perception of gloss depends on highlight congruence with surface shading. Journal of Vision, 11 (9): 4, 1–19, https://doi.org/10.1167/11.9.4. [PubMed] [Article]
Kim, J., Marlow, P. J., & Anderson, B. L. (2012). The dark side of gloss. Nature Neuroscience, 15 (11), 1590–1595.
Kim, J., Tan, K., & Chowdhury, N. S. (2016). Image statistics and the fine lines of material perception. i-Perception, 7 (4), 1–11.
Marlow, P. J., & Anderson, B. L. (2016). Motion and texture shape cues modulate perceived material properties. Journal of Vision, 16 (1): 5, 1–14, https://doi.org/10.1167/16.1.5. [PubMed] [Article]
Marlow, P. J., Kim, J., & Anderson, B. L. (2011). The role of brightness and orientation congruence in the perception of surface gloss. Journal of Vision, 11 (9): 16, 1–12, https://doi.org/10.1167/11.9.16. [PubMed] [Article]
Marlow, P. J., Kim, J., & Anderson, B. L. (2012). The perception and misperception of specular surface reflectance. Current Biology, 22, 1909–1913.
Motoyoshi, I. (2010). Highlight-shading relationship as a cue for the perception of translucent and transparent materials. Journal of Vision, 10 (9): 6, 1–11, https://doi.org/10.1167/10.9.6. [PubMed] [Article]
Motoyoshi, I., & Matoba, H. (2012). Variability in constancy of the perceived surface reflectance across different illumination statistics. Vision Research, 53, 30–39.
Motoyoshi, I., Nishida, S., Sharan, L., & Adelson, E. H. (2007, May 10). Image statistics and the perception of surface qualities. Nature, 447, 206–209.
Nagai, T., Matsushima, T., Koida, K., Tani, Y., Kitazaki, M., & Nakauchi, S. (2015). Temporal properties of material categorization and material rating: Visual vs non-visual material features. Vision Research, 115 (B), 259–270.
Sakano, Y., & Ando, H. (2010). Effects of head motion and stereo viewing on perceived glossiness. Journal of Vision, 10 (9): 15, 1–14, https://doi.org/10.1167/10.9.15. [PubMed] [Article]
Sawayama, M., Adelson, E. H., & Nishida, S. (2017). Visual wetness perception based on image color statistics. Journal of Vision, 17 (5): 7, 1–24, https://doi.org/10.1167/17.5.7. [PubMed] [Article]
Sawayama, M., & Nishida, S. (2018). Material and shape perception based on two types of intensity gradient information. PLoS Computational Biology, 14 (4): e1006061.
Schulz, D., & Huston, J. P. (2002). The sliding window correlation procedure for detecting hidden correlations: Existence of behavioral subgroups illustrated with aged rats. Journal of Neuroscience Methods, 121 (2), 129–137.
Tani, Y., Araki, K., Nagai, T., Koida, K., Nakauchi, S., & Kitazaki, M. (2013). Enhancement of glossiness perception by retinal-image motion: Additional effect of head-yoked motion parallax. PLoS One, 8 (1): e54549.
Thompson, W., Fleming, R., Creem-Regehr, S., & Stefanucci, J. (2011). Visual perception from a computer graphics perspective. Wellesley, MA: CRC Press.
van Assen, J. J. R., Wijntjes, M. W. A., & Pont, S. C. (2016). Highlight shapes and perception of gloss for real and photographed objects. Journal of Vision, 16 (6): 6, 1–14, https://doi.org/10.1167/16.6.6. [PubMed] [Article]
Ward, G. J. (1992). Measuring and modeling anisotropic reflection. Computer Graphics, 26 (2), 265–272.
Wendt, G., Faul, F., Ekroll, V., & Mausfeld, R. (2010). Disparity, motion, and color information improve gloss constancy performance. Journal of Vision, 10 (9): 7, 1–17, https://doi.org/10.1167/10.9.7. [PubMed] [Article]
Wendt, G., Faul, F., & Mausfeld, R. (2008). Highlight disparity contributes to the authenticity and strength of perceived glossiness. Journal of Vision, 8 (1): 14, 1–10, https://doi.org/10.1167/8.1.14. [PubMed] [Article]
Wiebel, C. B., Toscani, M., & Gegenfurtner, K. R. (2015). Statistical correlates of perceived gloss in natural images. Vision Research, 115 (B), 175–187.
Appendix A: Stimulus images with different rendering parameters
Some examples of the changes in the appearance of rendered object images that accompany changes in physical parameters are shown in Figures A1 through A6. All figures show grayscale images from the Full condition. 
Figure A1
 
Example images created with different specularity levels.
Figure A2
 
Example images created with different surface roughness.
Figure A3
 
Example images created with different cloud-pattern textures.
Figure A4
 
Example images created with different strength of displacement.
Figure A5
 
Example images created with different illumination maps.
Figure A6
 
Example images created with different camera positions.
Appendix B: Effects of black regions in the object images
Some of the object images with high specularity levels had unnatural black squares on their lower side. These are considered to reflect a dead angle at the lower side of the illumination map, created when the maps were acquired. This region is physically incorrect; it is completely black, unlike how dark surfaces in the illumination environment would actually appear. Therefore, our results may have been affected by this black-region artifact. We checked this possibility in a supplementary experiment. 
In the supplementary experiment, 100 object images were arbitrarily selected as stimuli from the images in the original experiment (50 stimuli each from the Dark and Full conditions, which had the same rendering parameters between the two conditions), but the black regions in the images were hidden by placing a gray plane with diffuse reflectance of 0.1 under the object. The experimental procedures were the same as in the original experiment. The results demonstrated that the glossiness scores were well correlated between the original and modified stimuli (Full condition: r = 0.80, p < 0.001; Dark condition: r = 0.86, p < 0.001). Furthermore, the highlight dependencies (score differences between the Full and Dark conditions) were also well correlated between the two experiments (r = 0.57, p < 0.001). These results suggest that our original findings were largely unaffected by the unnatural black regions in some stimuli. 
Appendix C: Example image samples for low- and high-highlight dependency windows
Figure C1
 
Example samples from the windows with the (a) lowest and (b) highest highlight dependency. Upper and lower rows in each group show stimuli in the Full and Dark conditions, respectively.
Appendix D: Change in content ratios for each rendering parameter with highlight dependency
Figure D1
 
Content ratios of roughness values.
Figure D2
 
Content ratios of spatial frequency of cloud-pattern textures.
Figure D3
 
Content ratios of strength of displacement values.
Figure D4
 
Content ratios of illumination maps.
Figure D5
 
Content ratios of camera positions.