Abstract
We previously developed a material probe, MatMix 1.0, and found it to be an intuitive tool for robustly quantifying visual material perception (Zhang et al., VSS 2015). Using MatMix 1.0, we found that varying the illumination systematically influenced material perception, in a material-dependent manner (Zhang et al., SPIE 2015). In the present study, MatMix 1.0 was adapted into MixIM 1.0 (Mixing Illumination and Material) and used in a matching experiment to quantify the visual perception of canonical lighting modes. Three canonical lighting modes (so-called ambient, focus, and brilliance light) and four canonical material modes (matte, velvety, specular, and glittery) were included. A stimulus image and the probe were shown together. The probe was a linear weighted optical mixture of three basis images of one canonical material, one under each of the three canonical lightings. Below the probe, three sliders represented the canonical lightings; by moving the sliders, the observer changed the weight of each lighting mode and thereby the appearance of the probe. The observers' task was to match the illumination of the material in the probe to that of the material in the stimulus, without time limits. The materials in the stimulus and the probe could be the same or different. When they were the same, all 8 observers performed far above chance level. When they were different, performance decreased for all observers; 7 of 8 observers still performed far above chance level, except when the velvety mode was present in the stimulus, in which case performance was only slightly above chance level. In conclusion, we found that observers could match the illumination of a stimulus and a probe by mixing three canonical lighting modes, and that material differences decreased matching performance, especially when a velvety mode was present.
Meeting abstract presented at VSS 2016
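The probe described in the abstract is a linear weighted mixture of three basis images, with the slider values acting as the weights. A minimal sketch of that mixing step, assuming the basis images are stored as a (3, H, W) array (the function name `mix_probe` and the array layout are illustrative assumptions, not from the original):

```python
import numpy as np

def mix_probe(basis_images, weights):
    """Linearly mix pre-rendered basis images of one material.

    basis_images: array of shape (3, H, W), one rendering per canonical
    lighting mode (ambient, focus, brilliance) -- hypothetical layout.
    weights: the three slider values set by the observer.
    Returns the weighted sum: sum_i weights[i] * basis_images[i].
    """
    basis = np.asarray(basis_images, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Contract the weights against the lighting axis (axis 0).
    return np.tensordot(w, basis, axes=1)
```

Moving a slider in MixIM 1.0 would then correspond to changing one entry of `weights` and re-displaying the mixed image.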