Abstract
At one moment, the refrigerator contains only a jumble of objects; at the next, you clearly discern the package of milk. How do we select visual information before perceptual organization? We addressed this question in two recognition experiments involving pictures of fragmented objects. In Experiment 1, participants preferred to look at the object rather than a control region about 25 fixations prior to explicit recognition. Furthermore, participants inspected the object as if they had already identified it around 9 fixations prior to explicit recognition. In Experiment 2, we investigated whether semantic knowledge might explain this systematic object inspection prior to explicit recognition. Consistent with this idea, more specific target knowledge made participants scan the fragmented stimulus more efficiently. For instance, the control region was rejected faster when participants knew the object's name. Both experiments showed that participants looked at the objects as if they knew them before becoming aware of their identity. The findings are consistent with a predictive account of object recognition, in which eye movements are guided by an object hypothesis toward regions providing high information gain.