Abstract
Predictive visual context facilitates visual search in a paradigm known as contextual cueing (Chun & Jiang, 1998). In the original paradigm, search arrays were repeated across blocks such that the spatial configuration of all the distractors in a display predicted the location of an embedded target on half of the trials. It was later shown that this benefit could occur even when only the context on the same half of the computer screen as the target was predictive (Olson & Chun, 2002). We successfully modeled these results using a connectionist architecture and then used the model to predict the results of novel manipulations. The first manipulation tested whether cueing would still occur from repetition of even more locally restricted contexts, defined as the configuration of two distractors in the same quadrant as the target. The model predicted significant contextual cueing, and a behavioral study with 12 subjects confirmed this prediction. Next, we examined whether such local contextual cueing would transfer across quadrants of the screen: the target and its local configuration were moved to a different quadrant each time they were repeated. The model predicted that no cueing would result, and 12 subjects showed no significant cueing in a behavioral experiment. Thus, both the human subjects and the model learned local contexts only when the target and its neighboring distractors remained in the same absolute location on the screen. These behavioral results and the model suggest that spatial contextual cueing of visual search can be based on position-dependent learning of local context.