Vision Sciences Society Annual Meeting Abstract | June 2007
Visual memory or visual features coded verbally? An effect of working memory load on guidance during visual search
Author Affiliations
  • Hyejin Yang
    Department of Psychology, Stony Brook University, New York
  • Gregory Zelinsky
    Department of Psychology, Stony Brook University, New York
Journal of Vision June 2007, Vol. 7, 686. https://doi.org/10.1167/7.9.686
Abstract

Does holding multiple targets in working memory (WM) affect search guidance, and what WM representation underlies guidance in a search task? We addressed these questions by combining a working memory task with a search task. Subjects viewed 1, 2, or 4 objects in a target preview display, either with or without articulatory suppression (AS) during the preview period. Their task was to indicate whether any one of these objects appeared in the search display, which depicted either no target and 9 random object distractors (target-absent trials) or 1 target and 8 distractors. Catch trials, in which a distractor was replaced with a non-target exemplar from a target category (e.g., a bass was shown in the preview but a trout appeared in the search display), were used to encourage the encoding of visual information. Guidance was defined by the proportion of initial saccades directed to the target and by the proportion of first object fixations that landed on the target. If guidance results from matching targets held in visual WM, articulatory suppression should have no effect. However, if target visual features are coded verbally, articulatory suppression during the target preview should eliminate search guidance. We found a pronounced effect of WM load: as the number of potential targets increased from 1 to 4, guidance dropped to near chance and RTs almost doubled. Even asking observers to search for 2 targets, rather than 1, profoundly impaired search guidance. Moreover, this effect of WM load interacted with AS; search was guided less efficiently with AS, although above-chance guidance was still observed in single-target search. We conclude that target features are coded both visually and verbally in WM, and that both representations are used to guide search to targets.

Yang, H., & Zelinsky, G. (2007). Visual memory or visual features coded verbally? An effect of working memory load on guidance during visual search [Abstract]. Journal of Vision, 7(9):686, 686a, http://journalofvision.org/7/9/686/, doi:10.1167/7.9.686.
Footnotes
 This work was supported by NIH grant R01-MH63748.