August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual versus verbal attentional templates guiding visual search
Author Affiliations & Notes
  • Anna Grubert
    Durham University
  • Daisy McGonigal
    Durham University
  • Mikel Jimenez
    Durham University
  • Footnotes
Acknowledgements  This work was supported by a research grant from the Leverhulme Trust (RPG-2020-319) awarded to AG.
Journal of Vision August 2023, Vol.23, 5387.
      Anna Grubert, Daisy McGonigal, Mikel Jimenez; Visual versus verbal attentional templates guiding visual search. Journal of Vision 2023;23(9):5387.

      © ARVO (1962-2015); The Authors (2016-present)

Visual search for known objects is controlled by visual target representations held in visual working memory. Can visual search also be guided by verbal target descriptions held in verbal working memory? And would such cross-modal guidance be as efficient as guidance by visual target representations? To answer these questions, we measured N2pc components of the event-related potential in two blocked search tasks in which participants were given either visual or verbal target descriptions. Search efficiency was manipulated between trials in terms of memory load (activation of one versus two colour templates). The search displays in the two tasks were physically identical: each contained six differently coloured, vertically or horizontally oriented bars. Each search display was preceded by a cue display indicating the one or two target colour(s) relevant in the upcoming search display. In the visual task, the cues were coloured squares; in the verbal task, they were the initial letters of the colour words (e.g., R for red). Participants' task was to find the bar that matched (one of) the cued target colour(s) and to report its orientation. N2pc components measured in the visual task were slightly delayed and attenuated in high- versus low-load trials. Load costs of that magnitude have been attributed to mutual inhibition between two simultaneously activated colour templates and have been interpreted as reflecting an efficient search mode. The same pattern of N2pcs was observed in the verbal task. However, the relative load costs in the verbal task were substantially larger than in the visual task, both in terms of N2pc amplitudes and latencies. These results suggest qualitative differences between visual search guided by visual versus verbal target representations: verbal guidance is less efficient, and possibly even serial rather than parallel.

