Abstract
The detection of a curvilinear contour embedded in a field of randomly oriented visual elements—that is, the discovery of a set of elements that form a relatively coherent smooth group—has been extensively studied, but computational models of the process are still lacking. In a series of studies, subjects were asked to detect (two-interval forced choice) a curvilinear group of oriented elements (short line segments) that had been intermixed with a background set of 150 elements with random positions and orientations. Unlike most similar studies, background elements were drawn freely from a uniform density over the display area. Inter-element distances along the contour were chosen to exactly match the distribution of inter-neighbor distances in the Delaunay triangulation of the background set, so that only orientation cues could be used to identify the virtual contour. A Bayesian model for detection of a contour in this situation was developed, based on a “subjective” prior that assigns probability to successive turning angles along the contour via a normal density centered on collinear. The key idea in the model is that detectability of a contour should depend on the likelihood ratio (Bayes factor) between the hypothesis that the elements in question were generated by a contour process and the “null hypothesis” that they were generated at random as part of the background set. This likelihood ratio (i) increases with the number of elements in the contour (linearly on a log scale), and (ii) decreases with the aggregated turning angle along the contour. Subjects' performance closely followed these predictions, with the detectability of each target contour depending on its Bayes factor. These results provide a very natural and detailed quantitative account of contour detection processes, and suggest that the visual system is very close to optimal in detecting non-random patterns in the environment.
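The Bayes-factor computation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the smoothness parameter `sigma` is a hypothetical value, and the null model assumes turning angles are uniform on [−π, π), consistent with background elements having random orientations.

```python
import math

def log_bayes_factor(turning_angles, sigma=0.26):
    """Log likelihood ratio: contour hypothesis vs. random background.

    Contour hypothesis: each successive turning angle is drawn from a
    normal density centered on collinear (mean 0, std sigma radians).
    Null hypothesis: orientations are random, so each turning angle is
    uniform on [-pi, pi) with density 1 / (2 * pi).

    sigma is an assumed smoothness parameter, not a value from the study.
    """
    log_lr = 0.0
    for a in turning_angles:
        # log of the normal density at turning angle a
        log_normal = (-0.5 * (a / sigma) ** 2
                      - math.log(sigma * math.sqrt(2.0 * math.pi)))
        # log of the uniform density over [-pi, pi)
        log_uniform = -math.log(2.0 * math.pi)
        log_lr += log_normal - log_uniform
    return log_lr
```

Under this sketch, a nearly collinear contour accumulates a positive contribution per element, so the log Bayes factor grows with contour length and shrinks as turning angles increase, mirroring predictions (i) and (ii).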
Supported by NIH R01 EY15888.