Abstract
People are remarkably good at learning the statistics of their visual environments. For instance, during visual search people rapidly learn to attend to locations or simple features that are frequently associated with targets. Here, we tested how such statistical regularities influence attentional selection during search for real-world objects. Participants searched for a vertically oriented object among seven distractor objects tilted 45 degrees to the left or right. In a training phase, we induced attentional biases toward objects from one category (e.g., cars) by, unbeknownst to participants, making those objects more likely to be targets. In a subsequent testing phase, we examined whether the learned biases persisted when every object was equally likely to be the target. In Experiment 1 (N = 44), participants acquired an attentional bias for a single real-world object that frequently served as the target during training (p < .00001, dz = 1.10), and this bias persisted into the neutral testing phase (p < .001, dz = 0.86). In Experiment 2 (N = 32), we introduced new exemplars from the learned category during the testing phase to examine whether participants would generalize learning from one object to its entire category. Results revealed no transfer to the new objects (p = .750, dz = 0.06, BF = 0.19, favoring the null hypothesis of no effect), despite robust learning of the trained exemplar. However, once participants learned to prioritize at least two exemplars from one category (Experiment 3, N = 72), we found clear transfer during testing to novel objects from the same category (p < .00001, dz = 0.57). These results indicate that attention can be adaptively tuned to specific objects or to entire object categories. Together, these studies reveal that the breadth of attentional tuning in real-world search can be flexibly adjusted based on recent experience to support current task demands.