Abstract
A central task of vision is to segment the retinal image into discrete objects, and to keep track of them as the same persisting individuals over time and motion. Such processing is often discussed in terms of object files — midlevel visual representations that 'stick' to moving objects on the basis of spatiotemporal properties, and store (and update) information about those objects' properties. Object files have traditionally been studied via 'object-specific preview benefits' (OSPBs): discriminations of an object's features are speeded when an earlier preview of those features occurs on the same object, as opposed to a different object, beyond general display-wide priming. This effect is clearly 'object-based' (vs. space-based), but what counts as an 'object' in this framework? Here we studied this question via much more extreme manipulations than in previous work, by removing all static segmentation cues. In Experiment 1, both the objects and the background were composed of random visual noise, so that the objects were defined only via their motion. Experiment 2 went even further, removing all segmentation cues: the entire random-noise background simply rotated as a whole. Robust OSPBs were nevertheless found in both cases. We conclude that the construction and maintenance of object files do not require static surface cues to 'objecthood', nor any segmentation cues at all. In addition, since the objects were always invisible until the motion began — after the offset of the previewed features — we conclude that object files can be established 'after the fact', postdictively. These results clearly conflict with the assumption that object files require previously segmented objects, but they do preserve the two key aspects of the object-file framework: individuation and tracking. Overall, these experiments help characterize what 'object files' really are, and how they do and do not relate to our common-sense notions of objects.