Abstract
Materials of all kinds can take on the full range of hues, but different image features contribute to the perceived colors of different materials. Which features matter, and how they interact, has become clearer, but further insight might be gained by understanding how observers make color matches across materials, for example matching the color of a matte cone to that of a glossy metallic sculpture. Some investigation of this topic has already been done (Xiao & Brainard, 2008; Giesel & Gegenfurtner, 2010; Granzier, Vergne, & Gegenfurtner, 2014), showing that it holds much potential. For example, is color constancy better when materials are more similar in appearance (e.g., better when matching a glossy object to a glass object than when matching a matte object to a glass object)? With virtual reality headsets, we can simulate 3D environments in which observers can manipulate the colors of objects made of different materials. Seven observers completed a web-based virtual reality experiment in which they adjusted the color of either a matte or a glossy cone to match the color of a glass sculpture (red, green, blue, or yellow). The illumination came from a basic sun-sky model and was either blue or yellow (along the daylight axis). We tested whether the following image statistics predicted observer matches: the mean color, the color of the brightest region (excluding highlights), the most saturated color, and the most frequent color. Observer matches for the matte object showed some consistency across illuminations but did not exhibit color constancy, and it was not clear which image statistics observers were using. In contrast, observers exhibited color constancy when matching the glossy cone to the glass sculpture, and they appeared to use the color of the brightest regions of both objects, excluding the highlights, to make the match. Our results indicate that color constancy performance varies as a function of similarity in material appearance.
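For concreteness, the four candidate image statistics named above could be computed from a rendered RGB image roughly as follows. This is an illustrative sketch, not the paper's actual analysis code: the Rec. 709 luminance weights, the 99th-percentile highlight cutoff, and the coarse RGB histogram used for the modal color are all assumptions made for this example.

```python
import numpy as np

def image_color_statistics(img, highlight_quantile=0.99, n_bins=16):
    """Candidate color statistics for an RGB image (H x W x 3, floats in [0, 1]).

    Returns: mean color, color of the brightest non-highlight pixel,
    most saturated color, and most frequent (modal) color.
    """
    pixels = img.reshape(-1, 3)
    # Rec. 709 luma weights (an assumption; any luminance proxy would do).
    weights = np.array([0.2126, 0.7152, 0.0722])
    luminance = pixels @ weights

    # 1. Mean color over all pixels.
    mean_color = pixels.mean(axis=0)

    # 2. Brightest region excluding specular highlights: discard the top
    # luminance quantile, then take the brightest remaining pixel.
    cutoff = np.quantile(luminance, highlight_quantile)
    non_highlight = pixels[luminance <= cutoff]
    brightest_color = non_highlight[np.argmax(non_highlight @ weights)]

    # 3. Most saturated color, with saturation defined HSV-style
    # as (max - min) / max per pixel.
    mx = pixels.max(axis=1)
    mn = pixels.min(axis=1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)
    most_saturated = pixels[np.argmax(sat)]

    # 4. Most frequent color: mode of a coarse RGB histogram,
    # returned as the center of the winning bin.
    q = np.clip((pixels * n_bins).astype(int), 0, n_bins - 1)
    codes = q[:, 0] * n_bins**2 + q[:, 1] * n_bins + q[:, 2]
    mode = np.bincount(codes).argmax()
    mode_bin = np.array([mode // n_bins**2, (mode // n_bins) % n_bins,
                         mode % n_bins])
    most_frequent = (mode_bin + 0.5) / n_bins

    return mean_color, brightest_color, most_saturated, most_frequent
```

Each statistic could then be computed separately for the test object and the match object, and observer settings compared against the statistic that best equates the two.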