Abstract
Observers can efficiently extract early visual features to make sense of a dynamic environment. However, they may also rely on top-down information to facilitate scene processing. For example, when observers search for friends on a street, they may use prior knowledge about their friends (e.g., how they move) to guide search. To test for top-down facilitation in scene processing, we combined a category search task we had used previously with computational modelling. There is accumulating evidence that observers are highly efficient at processing biological targets such as humans. When searching for biological targets, observers may be able to use both early and category-specific visual features, whereas for non-biological targets they may have to rely on early visual features alone. In separate blocks, observers searched for grayscale videos containing humans or machines while their eye movements were tracked. In Experiment 1, the distractors were videos from the other target category. In Experiment 2, natural-scene videos (e.g., waterfalls and trees) served as distractors for both target categories. We found a category advantage in both experiments: Observers detected humans more quickly than machines. Importantly, in Experiment 2, observers also detected the absence of humans more quickly than the absence of machines on target-absent trials, even though the search arrays were identical for both target categories. Thus, category-specific visual features held in memory may help observers efficiently reject distractors. Consistent with the search times, observers fixated more briefly on human targets than on machine targets. There was no category advantage when two biological categories (humans and animals) were contrasted. To further rule out a contribution of early visual features to this category advantage, we tested our paradigm on Itti and Koch's bottom-up visual saliency model, which did not reproduce our results. In sum, our results point to top-down information that may facilitate observers' sensitivity to biological categories.