Abstract
We create images as a means to record and communicate object appearance, and it is self-evident that images are generally effective for this purpose. However, looking at an image of an object is not the same as looking at the object itself, due in part to the technological limitations of imaging devices and media, such as limited resolution and dynamic range. In this project we investigated how well computer-generated images represent the appearance properties of real glossy objects. We started by creating an ordered set of printed gloss samples using a commercial-grade HP Indigo electrostatic printer that can selectively apply matting layers on top of the colored toners. We then measured the set to determine the bidirectional reflectance distribution functions (BRDFs) of the samples. Next, we modeled those BRDFs and used advanced physically-based rendering techniques to render images of the samples as cylinders in a virtual light booth with an area light source and checkerboard-patterned walls. In parallel, we built a matching physical light booth and created a matching physical sample set by wrapping the original paper samples around plastic cylinders. We then conducted a series of scaling experiments in which subjects performed both within- and across-media matching tasks (real-to-real, image-to-image, image-to-real). The results show that, on average, observers are capable of performing the matching task in all three conditions, but that sensitivity to gloss differences varies significantly across the three conditions, with real-to-real matching showing the highest sensitivity and cross-media image-to-real matching showing the lowest. This work contributes to our understanding of both gloss perception and image perception, and is part of an ongoing effort to understand how, and how well, images serve as visual representations of objects and surfaces.
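The abstract does not specify which analytic model was fit to the measured BRDFs; purely as an illustrative sketch, a common choice for glossy surfaces of this kind is the isotropic Ward model:

f_r(\theta_i, \phi_i; \theta_o, \phi_o) = \frac{\rho_d}{\pi} + \frac{\rho_s}{4 \pi \alpha^2 \sqrt{\cos\theta_i \, \cos\theta_o}} \exp\!\left( -\frac{\tan^2 \theta_h}{\alpha^2} \right)

Here \rho_d and \rho_s are the diffuse and specular reflectances, \alpha is the surface roughness, and \theta_h is the angle between the surface normal and the halfway vector; in such models the specular lobe parameters (\rho_s, \alpha) largely govern perceived gloss.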
Meeting abstract presented at VSS 2014