Abstract
An object’s appearance depends on a complex interaction of shape, illumination, and material. Despite decades of work, it has proven difficult to build computational systems that estimate these components individually. For example, shape-from-shading systems are still mediocre even when the illumination and BRDF are precisely specified. At the same time, techniques for material classification in the absence of shape remain limited. We have found that it is possible to make material estimates and shape estimates as part of the same process. Our learning-based system is trained on multiple shapes and materials. We have a collection of "blobby" 3D shapes and render them with multiple "styles," where a style captures the combined effects of illumination and BRDF. For a given patch of surface normals (our choice of shape descriptor), there will be multiple renderings with different appearances. We build a library of pairings between shape patches and image patches, and then solve the inverse problem: estimating the surface normals given the image. This works poorly for an isolated patch, because many things in the world could give rise to similar image patches. Things improve markedly when we impose consistency across the image using belief propagation. It is possible to estimate material by pooling the votes of multiple patches taken individually; however, performance improves when the voting is done with the patches that were selected to give a consistent shape estimate. Thus, while the system is primarily designed to estimate shape, it automatically gives a good estimate of material as well.
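The sketch below is a minimal illustration of the pipeline described above, not the authors' implementation. It assumes a hypothetical precomputed library of (image patch, normal patch, style) triples from the rendered "blobby" shapes; the patch size, candidate count, neighborhood structure, and the single left-to-right agreement sweep (a crude stand-in for full belief propagation) are all illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical library: N flattened 8x8 image patches, the corresponding
    # 8x8x3 surface-normal patches, and an integer "style" (illumination+BRDF) label.
    N, P, K, STYLES = 5000, 8, 10, 6
    lib_img = rng.random((N, P * P))
    lib_nrm = rng.random((N, P * P * 3))
    lib_sty = rng.integers(0, STYLES, N)

    def candidates(patch, k=K):
        """Indices of the k library image patches closest to the query patch."""
        d = np.linalg.norm(lib_img - patch.ravel(), axis=1)
        return np.argsort(d)[:k]

    def boundary_cost(a, b):
        """Disagreement between the shared edge of two neighboring normal patches."""
        na = lib_nrm[a].reshape(P, P, 3)
        nb = lib_nrm[b].reshape(P, P, 3)
        return np.linalg.norm(na[:, -1] - nb[:, 0])  # right edge of a vs. left edge of b

    def estimate(image_patches):
        """image_patches: a grid (list of rows) of PxP image patches."""
        cand = [[candidates(p) for p in row] for row in image_patches]
        chosen = []
        for row in cand:
            chosen_row = []
            for c, ids in enumerate(row):
                if c == 0:
                    best = ids[0]
                else:
                    # Prefer the candidate whose normals agree with the chosen
                    # left neighbor: a one-pass proxy for belief propagation.
                    left = chosen_row[c - 1]
                    best = min(ids, key=lambda i: boundary_cost(left, i))
                chosen_row.append(best)
            chosen.append(chosen_row)
        normals = [[lib_nrm[i].reshape(P, P, 3) for i in row] for row in chosen]
        # Material estimate: majority vote over the styles of the selected patches.
        votes = np.bincount([lib_sty[i] for row in chosen for i in row], minlength=STYLES)
        return normals, int(votes.argmax())

    # Toy query: a 4x4 grid of random image patches.
    query = [[rng.random((P, P)) for _ in range(4)] for _ in range(4)]
    normals, material = estimate(query)
    print("estimated material (style index):", material)

In this toy version the material vote is taken over the same candidates that were selected for shape consistency, mirroring the observation that shape-consistent patches give a better material estimate than patches voting in isolation.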
Meeting abstract presented at VSS 2012