Abstract
It is an ecological fact that parts of objects tend to be convex. Hoffman and Richards (1984) used this fact to motivate breaking an object boundary into parts according to the minima rule. Since then, more rules have been described to account for human performance; however, we still lack a quantitative framework in which to evaluate them. The goal of this study is two-fold. First, we propose a simple convexity cue and segmentation algorithm for parsing objects into parts. Second, we propose a general quantitative framework for evaluating object segmentation algorithms and use it to measure the performance of our convexity cue.
METHODS: Dataset: Object silhouettes were derived from the Snodgrass and Vanderwart (1980) dataset of common objects. Ground-truth segmentations were collected from 200 subjects (De Winter & Wagemans, VSS 2001).
Convexity Cue: We propose “intervening contour” as a cue for object segmentation. Two points within a shape are connected if no boundary crosses the straight-line path between them, and are disconnected otherwise. This cue implicitly enforces a convexity constraint on the parts.
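The cue can be made concrete as a binary connection matrix over interior sample points. The sketch below is an illustrative assumption about how such a matrix might be computed, not the authors' exact implementation: two points are connected unless the straight segment between them crosses some edge of the silhouette boundary.

```python
# Minimal sketch of the "intervening contour" cue: two interior sample points
# are connected iff no boundary edge crosses the straight segment between them.
# The sampling of interior points and the brute-force test are assumptions
# made for illustration.
import numpy as np

def segments_intersect(p1, p2, q1, q2):
    """Return True if segment p1-p2 properly crosses segment q1-q2."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def intervening_contour_matrix(points, boundary):
    """points: (N, 2) interior samples; boundary: (M, 2) closed silhouette polygon.
    Returns a binary N x N connection matrix W."""
    n = len(points)
    edges = list(zip(boundary, np.roll(boundary, -1, axis=0)))
    W = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            blocked = any(segments_intersect(points[i], points[j], a, b)
                          for a, b in edges)
            W[i, j] = W[j, i] = 0.0 if blocked else 1.0
    return W
```

Because any two points inside a convex region are mutually visible, groups of fully interconnected points under this cue correspond to approximately convex parts.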
Segmentation Algorithm: Once a matrix of connections between points is constructed, the best parts can be found by minimizing the normalized cut criterion (Shi & Malik, 2000).
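As a sketch of the grouping step, a single normalized-cut bipartition can be obtained from the second-smallest generalized eigenvector of the graph Laplacian (Shi & Malik, 2000); recursive splitting would yield multiple parts. Thresholding the eigenvector at zero is a common simplification, assumed here for illustration.

```python
# Sketch of one normalized-cut bipartition of the connection matrix W.
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    """Split points into two groups via the generalized eigenproblem
    (D - W) x = lambda * D x, where D is the diagonal degree matrix."""
    d = W.sum(axis=1)
    D = np.diag(d + 1e-12)   # small jitter keeps D positive definite
    L = np.diag(d) - W       # unnormalized graph Laplacian
    vals, vecs = eigh(L, D)  # generalized symmetric eigenproblem
    fiedler = vecs[:, 1]     # eigenvector of the 2nd-smallest eigenvalue
    return fiedler >= 0      # boolean part labels
```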
Evaluation: Our segmentations are evaluated against human segmentations within the precision-recall framework. Precision and recall are combined into a weighted harmonic mean called the F-measure. F equals 1 for identical segmentations and decreases as the discrepancy between segmentations increases.
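For concreteness, the weighted harmonic mean can be written as F = PR / (alpha R + (1 - alpha) P); with equal weighting (alpha = 0.5) this reduces to the familiar 2PR / (P + R). The weighting and the matching of algorithm cuts to human cuts are assumptions made for this sketch.

```python
# Sketch of the F-measure used to score an algorithm segmentation against
# human segmentations. Precision = fraction of algorithm cuts matching a
# human cut; recall = fraction of human cuts recovered by the algorithm.
def f_measure(precision, recall, alpha=0.5):
    """Weighted harmonic mean; F = 1 only for a perfect match and falls
    as either precision or recall degrades."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (precision * recall) / (alpha * recall + (1.0 - alpha) * precision)

# Example: precision 0.8, recall 0.6 -> F ~ 0.686
print(f_measure(0.8, 0.6))
```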
RESULTS: The F values for our algorithm's segmentations fall below human performance, but within its error range. We conclude that convexity is a very strong cue for parsing object silhouettes into parts, and that segmentation schemes can be effectively evaluated using the benchmark data and quantitative measures we have described.