Abstract
Shadeworld is an imaginary place populated with opaque surfaces that appear smoothly shaded. The real world is more complex, but Shadeworld contains some of the key properties that make images compelling. Scanning electron microscopy (SEM) images live in Shadeworld even though the math and physics are entirely unlike optical shading. The human visual system loves this kind of image, and microscopists have invented various other methods (e.g., freeze fracture and Nomarski) to provide pseudo-shaded images that are attractive and informative. Images of shaded (and pseudo-shaded) surfaces are generated by a rendering process (real or synthetic) that converts the 3D surface to a 2D image via some set of rules. There are many rules that yield a good sense of 3D; these can involve combinations of depth, surface normal, curvature, and other aspects of geometry. Phong shading is an example that is physically impossible but perceptually convincing. Ambient occlusion gives good shading by a very different process. It is remarkable that human vision is so successful at extracting 3D from such a variety of rendering conditions. We argue that the process of extracting 3D involves two estimation problems: (1) estimating the shape and (2) estimating the rendering process (and its parameters). Lambertian shading (estimate albedo and light direction) is famous but rarely occurs. The boundaries of Shadeworld are established by the characteristics of human shape perception. The most successful rendering processes have a kind of smoothness in the mapping between shape and luminance. In addition, the rendering parameters need to be fairly stable across an image, but not completely. We will also describe some new methods for generating Shadeworld images using physical processes, which can be tailored for human vision, offering new approaches to light microscopy and surface analysis.
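As a minimal illustration of the rendering models named above (a sketch for orientation, not part of the abstract's own methods), the Lambertian and Phong models map a surface normal and light direction to a luminance value; all parameter values below are illustrative assumptions:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambertian(n, l, albedo=0.8):
    # Diffuse shading: luminance depends only on the angle
    # between the surface normal n and the light direction l.
    return albedo * max(0.0, dot(n, l))

def phong(n, l, v, albedo=0.8, ks=0.5, shininess=32):
    # Phong adds a specular lobe around the mirror-reflection
    # direction r; physically impossible, perceptually convincing.
    diffuse = lambertian(n, l, albedo)
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = ks * max(0.0, dot(normalize(r), v)) ** shininess
    return diffuse + specular

n = normalize((0.0, 0.0, 1.0))  # surface normal facing the viewer
l = normalize((1.0, 1.0, 1.0))  # assumed light direction
v = (0.0, 0.0, 1.0)             # view direction
print(lambertian(n, l))
print(phong(n, l, v))
```

Estimating the rendering process, in these terms, means recovering parameters like `albedo` and `l` from the image alone; the same shape rendered under either model yields different luminance patterns, yet both support vivid 3D percepts.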
Meeting abstract presented at VSS 2014