Timothy J. Vickery, Rachel S. Sussman, Yuhong Jiang; Selective attention and general attentional resources in the learning of spatial context. Journal of Vision 2006;6(6):844. doi: 10.1167/6.6.844.
Aim: When viewing a complex scene, our visual attention is guided not only by salient features but also by past experience. Predictive visual context often facilitates target detection, an effect known as spatial context learning. We explored whether spatial context learning depends on selective attention or on general attentional resources. Method: Experiment 1 manipulated selective attention to backgrounds by combining low-pass filtered backgrounds with high-pass filtered search arrays, such that search items could be attended independently of the backgrounds. Throughout training, backgrounds were either consistently or inconsistently paired with the target's position. Experiments 2 and 3 manipulated general attentional resources by adding a visual working memory (VWM) load for colors or spatial locations on half of the trials in every training block, while target locations were consistently paired with distractor configurations. Results: Experiment 1 showed no benefit from consistent pairings when backgrounds were low-pass filtered and search arrays high-pass filtered. In Experiments 2 and 3, learning of trained over untrained displays was observed whether or not the secondary task was required. Discussion and conclusions: Prior research has shown that unattended background scenes can cue target locations, but those scenes contained features that could be selected incidentally during target search. When targets can be selected independently of the correlated background information, no learning occurs, indicating that selective attention is a prerequisite for statistical learning of spatial configurations. Further, this learning does not appear to depend on general attentional resources, as equivalent learning was observed with or without a VWM load.