Abstract
In a world rich with visual information, attention must be focused in order to prioritize what is behaviorally relevant. This can be accomplished in part by learning consistent associations. As demonstrated in the contextual cueing paradigm (Chun & Jiang, 1998), visual context can be learned and used to assist the localization of individual objects in a display. Here we investigate whether information acquired from scene photographs can facilitate visual search. Observers searched through letters presented on top of various real-world scene images. In the predictive condition, the background images were consistently paired with target locations. Observers learned these associations: targets were localized faster when the background scenes were predictive of the target positions than when they were not. To examine the specificity of this learning, we mirror-reversed the scene photographs in the final phase of the experiment. Contextual cueing was abolished for the mirror-reversed scenes. These results indicate that associations between background scene photographs and target locations can be learned and used to guide attention, and that this learning is rather specific.
Supported in part by NIH EY014193