Jason A. Droll, Binh T. Pham, Craig K. Abbey, Miguel P. Eckstein; Learning predictive cues to optimize visual search. Journal of Vision 2006;6(6):834. https://doi.org/10.1167/6.6.834.
Observers performing visual search tasks that require both detection and localization of a target often show improved performance as the experiment progresses, especially when the target is accompanied by a predictive cue. Is this increase in performance due to a general improvement in signal encoding, or to a re-weighting of visual information with respect to each cue? Twelve subjects performed 300 trials in which they searched for a bright target among dimmer distractors with contrast noise (means 110 and 60, SD 20), displayed for 2 s. Each of the six stimuli was surrounded by a colored circle. Subjects were told that some colors were more likely to contain the target, although the distribution of this likelihood was not specified. Subjects reported the presence and location of the target. Performance in the perceptual task improved by the final quarter of trials and was accompanied by a change in gaze strategy. By the final quarter of trials, first saccades more frequently targeted predictive cues (24% vs. 13%) and were followed by longer fixation durations (348 ms vs. 243 ms), even on trials in which no target was in fact present. False-alarm trials for perceptual decisions and first saccades also suggested that localization was most affected by the noise in validly cued locations. However, this learning was suboptimal when compared to an ideal Bayesian learner model exposed to the same cue statistics. We conclude that visual-saccadic and perceptual decisions during search may be influenced by learned statistics of cue validity, allowing observers to weight sensory evidence more optimally when seeking relevant information.
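The benchmark referred to above can be illustrated with a minimal sketch of a Bayesian cue-validity learner. This is not the authors' model; it simply assumes a symmetric Dirichlet prior over which cue color contains the target and updates posterior validity estimates trial by trial from observed target locations.

```python
# Illustrative sketch only (not the model from the paper): a Bayesian learner
# that estimates the validity of each colored cue under a Dirichlet prior.
from collections import Counter


class CueValidityLearner:
    def __init__(self, colors, alpha=1.0):
        self.colors = list(colors)
        self.alpha = alpha          # symmetric Dirichlet prior (assumed uniform)
        self.counts = Counter()     # target appearances per cue color
        self.n = 0                  # number of target-present trials observed

    def update(self, target_color):
        # After each target-present trial, record the cue that held the target.
        self.counts[target_color] += 1
        self.n += 1

    def validity(self, color):
        # Posterior mean probability that this cue color contains the target.
        return (self.counts[color] + self.alpha) / (
            self.n + self.alpha * len(self.colors)
        )

    def best_cue(self):
        # The cue an ideal observer would fixate first.
        return max(self.colors, key=self.validity)
```

With a uniform prior, the learner's first-saccade preference shifts toward the most predictive color as evidence accumulates, providing an upper bound against which human re-weighting can be compared.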