Abstract
Contours in natural images can arise from a variety of causes, such as reflectance changes, shadows, occlusions, sharp edges, or specularities. However, no existing computational model for determining 3D shape from image data can successfully analyze all of these possible contour types. In order to avoid applying these models to inappropriate image structures, it would be especially useful to identify contour types at an early level of processing. The present research was designed to measure the ability of human observers to label the contours within small isolated patches of photographs or computer-generated images, in an effort to determine the minimal amount of contextual information required for accurate performance. On each trial, a small region of a larger image was presented within a circular aperture, and observers were required to estimate the identity of a designated contour and provide a confidence rating. They were also asked to identify the depicted object, if possible. The aperture sizes were gradually increased throughout the experiment, such that the contours in the smallest apertures were completely ambiguous, while those in the largest apertures could be identified with perfect accuracy. Stimuli were randomly interleaved such that all contours were judged at the smallest aperture size before the aperture sizes were increased. The results reveal that humans are capable of identifying contour types with relatively little contextual information, and they typically do so while still unable to identify any specific objects in the scene. In most cases, correct contour identification occurs when the aperture is sufficiently large to reveal a single vertex, such as a Y, X, or arrow.