Abstract
One of the most common methods for depicting curved surfaces in mechanical drawings and optical art is to cover the surface with a series of parallel contours. When viewed from an appropriate vantage point, these contours produce a compelling perception of 3D shape. In an effort to better understand this phenomenon, we developed a new computational analysis designed to estimate the shape of an observed surface from the optical projections of its contours in a 2D image. This model assumes that contours on a surface are generated by a series of parallel planar cuts (Tse, 2002), and it estimates the relative depth between any two surface points based on the number of contour planes by which they are separated and the apparent 3D orientations of those planes. A psychophysical experiment was performed to compare the model predictions with the perceptual judgments of human observers. Stimuli consisted of sinusoidally corrugated surfaces covered with contours oriented in different directions. Horizontal and vertical scan lines in these images were each marked by a row of nine equally spaced dots. An identical row of dots, each of which could be moved perpendicular to the row with a handheld mouse, was presented against a blank background on a separate monitor. Observers were instructed to adjust the dots on the second monitor to match the apparent surface profile in depth along the designated scan line. The results revealed that observers' shape judgments were typically compressed and/or sheared relative to the ground truth, and that these distortions can be simulated with our model by computing relative depths using an incorrect estimate of the 3D orientations of the contour planes. The results cannot be fit, however, by models that assume contours are lines of curvature (Stevens, 1981) or surface geodesics (Knill, 2001).
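To illustrate the core computation, the following is a minimal sketch, not the authors' implementation: assuming orthographic projection and equally spaced parallel cut planes with a common unit normal (the plane normal, spacing, and all numerical values below are hypothetical), the relative depth of two image points follows directly from the number of planes separating them and the apparent orientation of those planes.

```python
import numpy as np

def relative_depth(p, q, k_p, k_q, normal, spacing):
    """Illustrative sketch (not the authors' code): relative depth between two
    surface points under orthographic projection, assuming the contours are
    generated by equally spaced parallel planar cuts.

    p, q      -- (x, y) image coordinates of the two points
    k_p, k_q  -- indices of the contour planes containing each point, so
                 k_q - k_p is the number of planes separating them
    normal    -- assumed unit normal (nx, ny, nz) of the cut planes; this is
                 the 'apparent 3D orientation' the model must estimate
    spacing   -- assumed distance between successive planes along the normal
    """
    nx, ny, nz = normal
    dx, dy = np.subtract(q, p)
    # Each point satisfies nx*x + ny*y + nz*z = k*spacing + c, so the depth
    # difference between the two points is:
    return ((k_q - k_p) * spacing - nx * dx - ny * dy) / nz

# Example: two points one plane apart, with planes slanted 60 degrees about
# the horizontal axis (hypothetical values).
slant = np.deg2rad(60.0)
n = (0.0, np.sin(slant), np.cos(slant))
print(relative_depth((0.0, 0.0), (0.1, 0.3), 0, 1, n, spacing=0.2))
```

Because the recovered depth in a sketch like this is an affine function of image position and plane count, substituting an incorrect plane normal or spacing compresses and shears the recovered profile relative to the ground truth, consistent with the pattern of distortions described above.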
Supported by NSF grants BCS-0546107 and BCS-0962119.