Abstract
The human visual system is remarkable for its ability to segment objects in natural scenes. To study the problem of leaf segmentation and related natural tasks, we created a database of hand-segmented leaves in foliage-rich images calibrated to the responses of the human L, M, and S cones. Here we consider a simple patch classification task in which the goal is to determine whether a pair of image patches lies within a single leaf's surface or straddles a leaf's boundary. First, we measured the probability distributions of color differences and color-contrast differences between patches randomly selected from the database. These distributions are well described in a whitened three-dimensional (l, α, β) color space, where l ≈ log L + log M + log S, α ≈ log L + log M − 2 log S, and β ≈ log L − log M (Ruderman et al., 1998, JOSA A, 15(8)). Next, we derived a near-optimal classifier based on these distributions. We find that the classifier's accuracy is largely determined by the color differences between patches and not by the color-contrast differences. Based on the color differences alone, the classifier performs at 79% correct for nearby patches, falling to 67% at greater separations (chance = 50%). Finally, we measured human performance in the patch classification task, first without and then with feedback. Without feedback, human performance paralleled (but fell slightly below) that of the near-optimal classifier when subjects categorized unaltered natural image patches (“full”), uniform image patches (“texture removed”), and image patches in which the average color differences were removed but texture remained (“color removed”). With feedback, substantial performance improvements were observed in the “color removed” conditions (especially when the patches were close together), but not in the others. A subjective examination of trials on which the near-optimal classifier disagreed with humans suggests that humans may use the following texture cues: good/bad continuation of a shadow or surface marking, shading gradients, and fine-texture similarity.
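
For concreteness, the color-space mapping quoted above can be written as a short function. This is a minimal sketch, assuming strictly positive cone responses; the function name is ours, and Ruderman et al. (1998) additionally apply per-axis scaling factors, omitted here, to whiten the space.

    import numpy as np

    def lms_to_lab(L, M, S):
        # Map cone responses to the (l, alpha, beta) axes quoted in the
        # abstract (unnormalized; Ruderman et al., 1998, also rescale each
        # axis to equalize variance across natural images).
        logL, logM, logS = np.log(L), np.log(M), np.log(S)
        l = logL + logM + logS          # achromatic axis
        alpha = logL + logM - 2 * logS  # blue-yellow axis
        beta = logL - logM              # red-green axis
        return l, alpha, beta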
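
The abstract does not specify the form of the near-optimal classifier; one standard construction consistent with the description is a likelihood-ratio test over the measured distributions of (l, α, β) differences. The sketch below is hypothetical: p_same and p_diff are assumed to be 3-D histograms of within-leaf and across-boundary color differences over shared bin edges, and equal priors are assumed.

    import numpy as np

    def classify_pair(delta, p_same, p_diff, bin_edges):
        # delta: (dl, dalpha, dbeta) color difference between the two patches.
        # p_same, p_diff: class-conditional 3-D histograms (hypothetical),
        # estimated from within-leaf and across-boundary patch pairs.
        # With equal priors, the decision reduces to comparing likelihoods.
        idx = tuple(
            int(np.clip(np.digitize(d, edges) - 1, 0, len(edges) - 2))
            for d, edges in zip(delta, bin_edges)
        )
        return "within leaf" if p_same[idx] >= p_diff[idx] else "across boundary"

In this construction, the reported dependence of accuracy on patch separation would follow from re-estimating the two histograms at each separation.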
Supported by NIH grants EY11747 and EY02688.