Abstract
In natural vision, the luminance in each local region of a matte surface is primarily determined by the local surface orientation relative to the direction of illumination. When the direction of illumination is unknown, it is still possible to obtain compelling perceptions of 3D shape from shading; in such cases, information about the direction of illumination must be estimated from the image shading itself. For globally convex, smoothly curved surfaces, the set of local regions adjacent to smooth occlusions includes a wide range of local orientations. These varied local orientations produce a correspondingly varied set of luminance values for most directions of illumination. The present research was designed to investigate the role of shading near smooth occlusions in observers’ 3D shape judgments. Images of shaded matte surfaces were presented together with a set of red and yellow dots that could be moved along a single horizontal scan line with a handheld mouse. Observers were instructed to mark each local depth minimum on the scan line with a red dot and each local depth maximum with a yellow dot. The stimuli consisted of deformed spheres, and we examined several different reflectance models and patterns of illumination. The surfaces were presented either in full view or through an aperture that removed the surfaces’ smooth occlusions and nearby regions from view while leaving the center of the images untouched. The results revealed that observers’ shape judgments were closer to veridical when the full surface was in view, whereas systematic distortions in the apparent shapes of the surfaces occurred when they were viewed through the aperture. These findings suggest that shading near smooth occlusions plays an important role in estimating 3D shape from shading and that these regions provide salient information about a surface’s direction of illumination.
Meeting abstract presented at VSS 2013
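As a minimal sketch of the shading relation implied in the opening sentence, assuming a generic Lambertian surface under a distant light source (not the specific reflectance models examined in the experiment), the luminance of a matte surface patch can be written as

I(x) = \rho \, E \, \max\bigl(0,\ \mathbf{n}(x) \cdot \mathbf{l}\bigr)

where \rho is the surface albedo, E the illuminant intensity, \mathbf{n}(x) the unit surface normal at point x, and \mathbf{l} the unit vector toward the light source. Near a smooth occlusion boundary the normal \mathbf{n}(x) rotates through a wide range of directions, so the resulting pattern of luminance values constrains the likely direction of illumination \mathbf{l}.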