Abstract
The allocation of visual attention can be guided by a target template (Leonard & Egeth, 2008), and this template can be stored in visual working memory (VWM; Carlisle et al., 2011). Although resolution has become an important issue in the VWM literature (e.g., Zhang & Luck, 2008), little work has examined how the resolution of VWM is translated into the control of visually guided saccadic behavior. To examine the resolution of attentional guidance, we recorded eye movements while participants searched for a target with a small gap on either the top or bottom (the other objects had gaps on the left or right). Each trial started with a precue indicating, with 100% validity, the precise color of the upcoming target. A search display then appeared consisting of 6 far-color objects whose colors were 180° from the target color in color space, 3 target-color objects that exactly matched the precue (one of which was the target), and 3 near-color objects. We varied the distance in color space between the near color and the target color (16°, 24°, 32°, or 40°) to measure the resolution of the search template. That is, we measured the probability that an object would be fixated as a function of its distance (in color space) from the precued target color. Manual reaction time increased as the near color became more similar to the target color, an increase that was accompanied by more fixations on near-color objects. We also included a pure VWM task using similar displays, in which participants indicated which of two objects matched the previously presented color patch. Comparison of the search task with the pure VWM task suggested that the template that guides search is substantially less precise than the underlying VWM representation.
Meeting abstract presented at VSS 2012