Carly J. Leonard, Howard Egeth; Feature-based guidance improves singleton detection during the attentional blink. Journal of Vision 2009;9(8):152. doi: https://doi.org/10.1167/9.8.152.
When two targets appear in a stream of rapidly presented distractors, report of the second (T2) is often impaired when it occurs soon after the first (T1). This phenomenon, known as the attentional blink, has often been explained as a failure of visual input to be encoded into a durable representation that can withstand masking by trailing objects. Joseph, Chun, and Nakayama (1997) showed that even the efficient task of singleton detection was massively impaired when presented soon after T1 in an attentional blink paradigm. This finding and others have led to the view that resources necessary for consolidation of T2 are not available while T1 processing is still engaged (e.g., Chun & Potter, 1995). Here we show that processing of T2 stimuli during the attentional blink does not necessarily occur as predicted by such a feed-forward, two-stage model. We find that the availability of feature-based attentional guidance reduces the magnitude of the attentional blink when T2 is a singleton detection task presented at short lags after a T1 letter identification task. This result was obtained when participants searched for a known color singleton, as well as for some types of known orientation singletons. Benefits in T2 performance were due neither to reductions in T1 performance nor to a change in response bias. The presence of these benefits indicates that feature biasing can be maintained even during the detection of T1. Importantly, top-down modulation increases the probability that well-specified visual representations will survive the attentional blink, gaining access to awareness and influence over behavior.