Abstract
In “hybrid” visual and memory search, observers look for any of multiple, previously memorized target objects among distractors. Hybrid search is akin to many real-world searches, such as looking for items on your mental shopping list in the grocery store. Thus, hybrid searches occur in spatial and temporal contexts that we encounter repeatedly. In several experiments, we investigated whether observers would incidentally learn and utilize spatial and temporal associations in hybrid search. Specifically, we examined learning of four different types of regularities: 1) target item sequences (e.g., the banana always follows the yoghurt), 2) target location sequences (e.g., a target in the lower left corner always follows a target in the upper right corner), 3) target item-location associations (e.g., the banana is always in the upper right corner), and 4) target item-location sequences (e.g., the banana in the upper right corner always follows the yoghurt in the lower left corner). Learning would be reflected in a decrease in search times. Our results show only weak incidental learning of the temporal sequences of target items or target locations alone, even after many repetitions of the sequence. By contrast, learning of target item-location associations was fast and effectively reduced search times. Furthermore, the experiments show a reliable effect of temporal sequence learning for target item-location associations. These findings suggest that spatiotemporal learning in hybrid search is hierarchical and conditional: Only if spatial and non-spatial target features are bound do temporal associations bias attention, pointing the observer to the task-relevant features expected to occur next.