Abstract
One of the most robust statistical properties of natural images is that contours are correlated across spatial frequency bands. However, the rules of perceptual grouping across spatial scales might differ as an observer approaches an object (adding high spatial frequencies, HSF) or steps away from it (losing HSF). We manipulated contiguity across spatial scales using hybrid images that combined the low spatial frequencies (LSF) of one image with the HSF of another. Some hybrids grouped well perceptually (e.g., two faces), and others did not (e.g., a highway and a bedroom). In Experiment 1, observers performed a two-alternative forced-choice (2-AFC) task while walking towards or away from the hybrids, judging how similar the hybrid was to each of its component images. In Experiment 2, an object moving towards or away from the observer was simulated by zooming the images in and out. Results in both experiments showed that when observer and object approach each other, observers represent the object's spatial frequency content as predicted by their contrast sensitivity function: they add HSF to their representation at the appropriate rate. However, when observer and object recede from each other, observers show a perceptual hysteresis, retaining more of the high spatial frequency image than they can actually see (23% physically present vs. 50% perceived). This hysteresis effect is predicted by the strength of perceptual grouping between scale spaces. As we move through the world and attend to objects, we are constantly adding and losing information at different spatial scales. Our results suggest two different mechanisms of on-line object representation: we tend to stick with our first grouping interpretation when losing information, and to continually reinterpret the representation when gaining information.
Funded by an NSF CAREER award (IIS 0546262) and NSF grant IIS 0705677.
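The hybrid-image construction described above (LSF of one image combined with the HSF of another) can be sketched as follows. This is an illustrative reconstruction, not the authors' exact stimulus pipeline: the Gaussian low-pass filter, the `sigma` cutoff parameter, and the function names are assumptions made for the sake of the example.

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Low-pass filter a grayscale image with a Gaussian transfer
    function applied in the frequency domain (sigma in pixels)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]  # vertical frequencies, cycles/pixel
    fx = np.fft.fftfreq(w)[None, :]  # horizontal frequencies, cycles/pixel
    # Fourier transform of a spatial Gaussian with std. dev. sigma
    g = np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * g))

def make_hybrid(img_lsf_source, img_hsf_source, sigma=6.0):
    """Hybrid image: low spatial frequencies of the first image plus the
    high-frequency residual of the second (cutoff set by sigma)."""
    lsf = gaussian_lowpass(img_lsf_source, sigma)
    hsf = img_hsf_source - gaussian_lowpass(img_hsf_source, sigma)
    return lsf + hsf

# Toy usage with random arrays standing in for the two component images
rng = np.random.default_rng(0)
face_a, face_b = rng.random((128, 128)), rng.random((128, 128))
hybrid = make_hybrid(face_a, face_b)
```

Viewed up close (or zoomed in), the HSF component of such a stimulus dominates; from far away (or zoomed out), only the LSF component remains visible, which is what lets the experiments trade information across scales as viewing distance changes.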