December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Overlapping neural representations for selection in attention and working memory
Author Affiliations
  • Ying Zhou
    New York University Abu Dhabi, Abu Dhabi, UAE
    Department of Psychology, New York University, New York, NY 10003
  • Clayton E Curtis
    Department of Psychology, New York University, New York, NY 10003
    Center for Neural Science, New York University, New York, NY 10003
  • Kartik Sreenivasan
    New York University Abu Dhabi, Abu Dhabi, UAE
    Department of Psychology, New York University, New York, NY 10003
  • Daryl Fougnie
    New York University Abu Dhabi, Abu Dhabi, UAE
    Department of Psychology, New York University, New York, NY 10003
Journal of Vision, December 2022, Vol. 22, 3776. https://doi.org/10.1167/jov.22.14.3776
Abstract

Although many studies point to qualitative similarities between working memory (WM) and attention, the degree to which these two constructs rely on common neural substrates remains unclear. In this study, we used fMRI to compare the patterns of neural activity evoked during attentional selection and selection within WM. Eleven participants selected one of three visual objects either while that information was present on the screen (i.e., attentional selection) or after it disappeared (i.e., WM selection). Our measure of selection was how well classifiers trained on the pattern of BOLD activation could decode the location of the selected object. Critically, the classifiers were trained and tested either in the same condition (e.g., trained and tested on attentional selection; within-condition decoding) or in different conditions (e.g., trained on attentional selection and tested on WM selection; across-condition decoding). Comparing within- and across-condition performance provides quantitative evidence for the amount of overlap in neural substrates between the selection process in WM and attention. We observed significantly above-chance decoding performance both within and across conditions in visual cortex and the intraparietal sulcus. The time series of decoding performance within and across conditions was consistent with the time course of the selection process, confirming that the classifiers decoded the selection process itself. Importantly, the above-chance within- and across-condition performance was largely comparable in these areas: the ratio of across-condition to within-condition performance was larger than 0.91. These results point to a strikingly high overlap in the neural representations between attentional and WM selection.
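The following is a minimal sketch of the within- versus across-condition decoding logic described above, using synthetic data in place of trial-wise BOLD patterns from a region of interest. The classifier choice (multinomial logistic regression), the cross-validation scheme, and all variable names are illustrative assumptions, not the authors' analysis pipeline.

# Within- vs. across-condition decoding of the selected object's location.
# Synthetic trial-by-voxel patterns stand in for BOLD activity from one ROI
# (e.g., visual cortex or IPS); shared "templates" build in a common location
# code so that cross-condition decoding can succeed in this toy example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_locations = 90, 200, 3
# Location-specific patterns assumed to be shared across conditions.
templates = rng.normal(size=(n_locations, n_voxels))

def simulate_condition(noise_scale=2.0):
    """Generate labeled trial patterns for one condition (attention or WM)."""
    labels = np.repeat(np.arange(n_locations), n_trials // n_locations)
    patterns = templates[labels] + rng.normal(scale=noise_scale, size=(n_trials, n_voxels))
    return patterns, labels

X_attn, y_attn = simulate_condition()  # attentional selection trials
X_wm, y_wm = simulate_condition()      # WM selection trials

clf = LogisticRegression(max_iter=1000)

# Within-condition decoding: cross-validated accuracy inside each condition.
within = np.mean([cross_val_score(clf, X_attn, y_attn, cv=5).mean(),
                  cross_val_score(clf, X_wm, y_wm, cv=5).mean()])

# Across-condition decoding: train on one condition, test on the other.
across = np.mean([clf.fit(X_attn, y_attn).score(X_wm, y_wm),
                  clf.fit(X_wm, y_wm).score(X_attn, y_attn)])

print(f"within-condition accuracy: {within:.2f}")
print(f"across-condition accuracy: {across:.2f}")
print(f"across/within ratio:       {across / within:.2f}")

A ratio near 1 means the classifier transfers across conditions with almost no loss; the reported ratio above 0.91 in visual cortex and the intraparietal sulcus is the basis for the claim of largely shared neural representations.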
