Abstract
Purpose. Top-down and bidirectional models of vision generally assume that higher-level object models must be used to segment or recognize objects in complex scenes. This assumption leads to a paradox when an observer is learning to recognize novel objects in realistic scenes: in the early stages of object learning, how is the observer to segment the novel objects so that they are available as examples for recognition learning? It was previously shown (Brady, 1998, IOVS) that observers can perform a kind of bootstrapped learning, in which they learn to recognize objects from apparently unsegmentable examples. However, it remained to be shown to what extent those scenes were in fact unsegmentable at the outset of training. In this experiment, we study the ability to segment objects during bootstrapped recognition learning.

Methods. The novel objects were 3D, computer-generated, organic-looking objects. Each object was covered with a camouflage pattern consisting of images of other novel objects. Each observer was assigned six objects to learn and three to segment without prior learning. Training and testing scenes consisted of the object of interest in the foreground and other camouflaged novel objects in the background; object locations, background objects, and camouflage were varied in every scene. Training consisted of sessions in which the training scenes were presented along with an identifying sound effect. In recognition testing, observers were shown a unique scene and asked to name the object. In segmentation testing, observers were asked to trace the outline of the object of interest.

Results. Observers segmented learned objects significantly better than unlearned objects.

Conclusions. The ability to segment objects in these stimuli is not immediate. Rather, it evolves in parallel with the ability to recognize novel objects.

Acknowledgments: This work was supported by the following grants: NIH R01 EY12691 and NIH EY02857.