Abstract
The traditional approach for computing shape from shading is based on the assumption that the luminance in each local region is determined exclusively by its local orientation relative to the direction of illumination. In this study we used a new method for measuring observers' judgments of local surface orientation to test this assumption. Observers were shown the image of a 3D surface with a single probe region marked by a small red dot, and they were required to identify another point on the surface that had the same apparent local orientation. Our stimuli depicted a smoothly deformed planar surface at two different slants. Three different types of shading were employed, only one of which satisfied the assumptions of traditional models. On a given trial, a red dot was placed at a random location in a stimulus. A large dashed circle denoted a separate region that did not include the red dot but did include at least one location with the same orientation. Observers used the cursor to place a green dot within the circle at a location that appeared to have the same surface orientation as the location marked by the red dot. The results demonstrate that probe regions with the same apparent orientation may have quite different orientations on the actual depicted surface, and that perceptually matched probe regions can have large differences in image intensity. This latter finding is incompatible with any algorithm for computing shape from shading that requires a known bidirectional reflectance distribution function (BRDF) and a homogeneous pattern of illumination.
Meeting abstract presented at VSS 2014
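The assumption under test can be made concrete with a minimal Lambertian sketch, the simplest shading model satisfying it: image intensity at a point depends only on the angle between the local surface normal and the illumination direction, so any two points with the same normal must receive the same intensity. This code is an illustration of the standard Lambertian model, not an implementation from the study; the function name and example vectors are ours.

```python
import numpy as np

def lambertian_intensity(normal, light_dir, albedo=1.0):
    """Intensity under the traditional shape-from-shading assumption:
    luminance is a function only of the local surface orientation
    relative to the illumination direction (Lambert's cosine law)."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    # Clamp to zero for surface patches facing away from the light.
    return albedo * max(0.0, float(np.dot(n, l)))

# Two surface points with identical normals get identical intensity,
# no matter where they sit on the surface -- the property the
# matching task probes, and which two of the three shading types
# used in the experiment violate.
i_frontal = lambertian_intensity([0, 0, 1], [0, 0, 1])   # 1.0
i_slanted = lambertian_intensity([1, 0, 1], [0, 0, 1])   # cos(45 deg)
```

Under this model, a perceptual match between two probe regions should coincide with a match in image intensity; the reported finding that matched regions can differ substantially in intensity is what rules out shape-from-shading algorithms built on a known BRDF and homogeneous illumination.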