Abstract
The capture of attention by salient singletons during visual search appears to be tied to the variability of the context in which such singletons occur. In previous work, we showed that when participants perform two sessions of search, the singleton loses its ability to capture attention in the second session only when it occurs in a small set of possible search display configurations. However, the mechanism by which reduced contextual uncertainty may reduce capture remains unclear. Here, we asked whether this loss of capture is unique to the displays participants can become familiar with, which would indicate contextual cueing effects, or generalises to new displays, which would suggest that lowered contextual uncertainty instead facilitates general learning about predictive structure shared across search configurations. To test this, we conducted a large online study (n = 200) in which participants performed a visual search task similar to that of our previous study. After two sessions in which participants were trained on a small set of search display configurations, we tested them in a third session on a mix of familiar and novel configurations, comparing the degree to which singletons captured early attention. We used letter recall in a probe-capture paradigm to index the locations participants attended within the search display. We replicated our previous finding that capture by the singleton disappeared after an initial training session. Notably, comparison of capture in the novel and familiar contexts in session 3 revealed that capture was eliminated in both conditions. Our results therefore suggest that familiar configurations did not improve visual search performance by providing contextual cues about the location of the singleton; rather, the development of generic predictions about task-relevant locations and features of the display may have reduced capture, a form of learning that does not occur in settings with higher variability.