Richard Hetley, Barbara Dosher, Zhong-Lin Lu; Attention and Uncertainty Limit Visual Search in Noisy Conditions. Journal of Vision 2010;10(7):227. doi: https://doi.org/10.1167/10.7.227.
Uncertainty models based on signal detection theory (SDT; Green & Swets, 1966), with an unlimited-capacity attention system (Palmer, 1994; Eckstein, 1998), have provided an excellent account of set size effects in visual search accuracy. However, spatial cuing experiments have found strong effects of attention: precuing improves accuracy, especially when the target is embedded in a high level of external noise (Lu & Dosher, 1998; Dosher & Lu, 2000). In this research, we attempt to resolve the apparently contradictory conclusions from these two major lines of inquiry in spatial attention. We hypothesize that the conditions in which an effect of spatially cued attention is substantial correspond to conditions in which attention effects over and above uncertainty occur in visual search. Our analysis suggests that many of the classical visual search experiments were carried out using stimulus conditions where attention effects on perception are least likely to be found. We studied visual search across a range of external noise and contrast conditions for low and high template overlap (target-distractor similarity). We found that set size effects in high external noise conditions are larger than expected from decision uncertainty alone: log-log slopes increase sharply with increasing external noise levels, especially in high-precision judgments, showing improved external noise exclusion at smaller set sizes. Additional effects occur in low noise. All these results are well accounted for by a visual model that combines the elaborated perceptual template model (ePTM; Jeon, Lu & Dosher, 2009), the attention mechanisms developed in the PTM framework (Lu & Dosher, 1998; Dosher & Lu, 2000), and SDT-based uncertainty calculations. Our empirical results and theoretical model generate a common taxonomy of visual attention in spatial cuing and visual search.
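The SDT uncertainty account of set size effects can be illustrated with a minimal Monte Carlo sketch of the standard max rule: each display location produces an independent Gaussian response, and the observer responds based on the maximum across locations. This is a generic textbook instance of the model class the abstract refers to, not the authors' actual formulation; the equal-variance Gaussian responses, the two-interval decision rule, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def max_rule_accuracy(d_prime, set_size, n_trials=200_000, rng=None):
    """2IFC accuracy under an unlimited-capacity SDT max rule.

    One interval contains a target (mean d_prime) at one of `set_size`
    locations; the other contains noise (mean 0) at every location.
    The observer picks the interval with the larger maximum response.
    All responses are unit-variance Gaussians (illustrative assumption).
    """
    rng = rng or np.random.default_rng(0)
    # Target interval: one signal location plus set_size - 1 distractors.
    sig = rng.normal(d_prime, 1.0, (1, n_trials))
    distractors = rng.normal(0.0, 1.0, (set_size - 1, n_trials))
    target_max = np.vstack([sig, distractors]).max(axis=0)
    # Noise interval: set_size noise-only locations.
    noise_max = rng.normal(0.0, 1.0, (set_size, n_trials)).max(axis=0)
    return float((target_max > noise_max).mean())

# Accuracy declines with set size even with no capacity limit,
# purely because more locations add more chances for a noise
# response to exceed the signal response.
for m in (1, 2, 4, 8):
    print(m, max_rule_accuracy(2.0, m))
```

In this framework, the set size effect is a pure decision-level (uncertainty) phenomenon; the abstract's claim is that observed effects in high external noise exceed what this baseline predicts, requiring additional attention mechanisms.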