Abstract
Given the complexity of our visual environments, a number of mechanisms help us prioritize goal-consistent visual information. When crossing the street, for instance, attention is constrained to relevant information, such as the crossing signal and the position of a nearby vehicle. This relevant information can then be encoded into visual working memory (VWM) and used to guide behavior. The efficient use of these inherently capacity-limited mechanisms relies on filters that exclude irrelevant information from visual selection and from VWM encoding. What happens to these filters when attention is captured by a distractor? Although the effects of attentional capture on response times are well studied, we know little about its broader consequences: we may take longer to locate the crossing signal, but we may also fail to encode the nearby vehicle. Does distraction also disrupt the filter that controls VWM encoding, such that irrelevant distractor features are unnecessarily encoded? Participants performed two consecutive visual search tasks on each trial. In the first (S1), they located a target (T) among non-targets (Ls), all presented within colored squares. On 40% of trials, a distracting white border briefly flashed around a non-target square; we hypothesized that the (task-irrelevant) color associated with this salient S1 distractor would be encoded into memory, thereby affecting later search. In the second search (S2), participants located a uniquely oriented Landolt-C stimulus. The S2 items were all white except for one colored singleton distractor; critically, its color sometimes matched a color from the S1 display, including that of the salient distractor. Response times were slowed more (consistent with memory-driven capture) when the S2 singleton matched the salient distractor's color than when it matched the color of an S1 control item. We propose a novel Filter Disruption Theory: distraction disrupts the filter that controls access to VWM, resulting in the encoding of irrelevant inputs at the time of capture.