Vision Sciences Society Annual Meeting Abstract  |   September 2016
Volume 16, Issue 12  |  Open Access
Can people match optically mixed canonical lighting modes?
Author Affiliations
  • Fan Zhang
    Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology
  • Huib de Ridder
    Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology
  • Sylvia Pont
    Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology
Journal of Vision September 2016, Vol.16, 642. doi:10.1167/16.12.642
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

We previously developed a material probe, MatMix 1.0, and found it to be an intuitive tool for robustly quantifying visual material perception (Zhang et al., VSS 2015). Using MatMix 1.0, we found that varying the illumination systematically influenced material perception, depending on the material (Zhang et al., SPIE 2015). In this study, MatMix 1.0 was adapted into MixIM 1.0 (Mixing Illumination and Material) and used in a matching experiment to quantify the visual perception of canonical lighting modes. Three canonical lighting modes (so-called ambient, focus, and brilliance light) and four canonical material modes (matte, velvety, specular, and glittery) were included. A stimulus image and the probe were shown together. The probe was a linear weighted optical mixture of three basis images of one canonical material, rendered under each of the three canonical lightings. Below the probe, three sliders represented the canonical lightings. Observers could change the appearance of the probe by moving the sliders, which adjusted the weight of the corresponding lighting mode. The task was to match the illumination of the material in the probe to that of the material in the stimulus, without time limits. The materials in the stimulus and the probe could be the same or different. When they were the same, all eight observers performed far above chance level. When the materials differed, performance decreased for all observers; seven of the eight still performed far above chance level, except when the velvety mode was present in the stimulus, in which case performance was only slightly above chance level. In conclusion, observers could match the illumination of a stimulus and a probe by mixing three canonical lighting modes, and material differences decreased matching performance, especially when a velvety mode was present.

Meeting abstract presented at VSS 2016
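The probe described in the abstract is a linear weighted optical mixture of three basis images, one per canonical lighting mode, with slider positions acting as weights. A minimal sketch of such a mixture is below; the function name, weight normalization, and array shapes are illustrative assumptions, not the authors' actual MixIM 1.0 implementation.

```python
import numpy as np

def mix_lighting_modes(basis_images, weights):
    """Linearly mix basis images of one material under different lightings.

    basis_images: sequence of same-shaped image arrays, one per lighting mode.
    weights: one non-negative weight per image (e.g. slider positions);
             normalized here so the weights sum to 1.
    Returns the weighted sum of the basis images.
    """
    basis = np.asarray(basis_images, dtype=float)   # shape: (n_modes, H, W[, C])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalize slider weights
    # Sum over the first (lighting-mode) axis: sum_i w[i] * basis[i]
    return np.tensordot(w, basis, axes=1)
```

Because the mixture is linear, moving one slider simply scales the contribution of that lighting mode's basis image relative to the others.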
