December 2022, Volume 22, Issue 14 | Open Access
Vision Sciences Society Annual Meeting Abstract
Identifying the format of neural codes for orientation WM by predictive modeling of fMRI activation patterns
Author Affiliations & Notes
  • Kelvin Vu-Cheung
    University of California, Santa Barbara
  • Thomas Sprague
    University of California, Santa Barbara
  • Footnotes
    Acknowledgements  Research was sponsored by a UC Santa Barbara Academic Senate Faculty Research Grant, an Alfred P. Sloan Research Fellowship, and the U.S. Army Research Office, and was accomplished under cooperative agreement W911NF-19-2-0026 for the Institute for Collaborative Biotechnologies.
Journal of Vision December 2022, Vol.22, 4479. doi:https://doi.org/10.1167/jov.22.14.4479
Abstract

Activation patterns measured in primary visual cortex can be used to decode stimulus values held in working memory (WM; Serences et al., 2009; Harrison & Tong, 2009), and it has been theorized that this is possible because neurons responsible for representing perceived visual features are recruited to represent those features during WM (Serences, 2016). However, when performing an orientation WM task, participants might strategically recode the remembered grating orientation using a spatial code (e.g., attending to a location on the screen and/or imagining a line), which could still yield successful orientation decoding. Here, we tested whether participants use a spatial code during an orientation WM task by building a forward model based on spatial voxel receptive field (vRF) models. Participants (n = 5) maintained the precise orientation of a grating (0.5 s stimulus duration, followed by a 1 s filtered noise mask), then reported that orientation after a 12 s delay. We identified each voxel’s spatial selectivity during a vRF mapping session. First, we employed an inverted encoding model to successfully decode orientation representations in early visual cortex during the WM task. Then, to test the spatial recoding hypothesis, on each trial we sorted voxels into ‘parallel’ and ‘orthogonal’ groups based on their spatial selectivity relative to the remembered orientation and compared their mean activation levels during the delay. If orientation information in working memory is converted into spatial information, voxels with vRF positions aligned parallel to the remembered orientation should show higher activation than voxels whose vRFs align with the orthogonal orientation. Consistent with the spatial recoding hypothesis, parallel voxels exhibited greater delay-period activation than orthogonal voxels in early visual cortex. These results align with previous reports that participants store information in the code most meaningful for upcoming behavior (Lee et al., 2013; Henderson et al., 2021).
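The parallel-versus-orthogonal comparison described above lends itself to a simple illustration. The sketch below (Python/NumPy; not part of the published abstract) shows one way voxels could be sorted by whether their vRF polar angle falls along the remembered orientation axis or the axis rotated 90 degrees from it, and how mean delay-period activation could then be compared between the two groups. The 15-degree tolerance window, the variable names, and the simulated trial are all hypothetical choices for illustration, not the authors' actual analysis parameters.

    import numpy as np

    def classify_voxels(vrf_polar_angle_deg, remembered_orientation_deg, tolerance_deg=15.0):
        """Sort voxels into 'parallel' and 'orthogonal' groups.

        vrf_polar_angle_deg : (n_voxels,) polar angle of each voxel's vRF center (0-360 deg)
        remembered_orientation_deg : grating orientation held in WM on this trial (0-180 deg)
        tolerance_deg : angular window around each axis (hypothetical choice)
        """
        # Angular distance between each vRF and the remembered orientation axis,
        # folded into 0-90 deg because orientation is a 180-deg-periodic variable.
        diff = np.abs(vrf_polar_angle_deg - remembered_orientation_deg) % 180.0
        axis_dist = np.minimum(diff, 180.0 - diff)   # 0 = vRF lies on the remembered axis

        parallel = axis_dist <= tolerance_deg                 # aligned with the remembered orientation
        orthogonal = axis_dist >= (90.0 - tolerance_deg)      # aligned with the 90-deg-rotated axis
        return parallel, orthogonal

    def delay_activation_difference(delay_betas, parallel, orthogonal):
        """Mean delay-period activation for parallel minus orthogonal voxels."""
        return delay_betas[parallel].mean() - delay_betas[orthogonal].mean()

    # Example: one simulated trial with a remembered orientation of 45 deg
    rng = np.random.default_rng(0)
    n_voxels = 500
    vrf_angles = rng.uniform(0, 360, n_voxels)     # vRF polar angles from the mapping session
    delay_betas = rng.normal(0, 1, n_voxels)       # delay-period activation estimates per voxel
    par, orth = classify_voxels(vrf_angles, remembered_orientation_deg=45)
    print(delay_activation_difference(delay_betas, par, orth))

Under the spatial recoding hypothesis, this difference would be positive on average across trials; under a purely sensory orientation code with no spatial recoding, it would hover around zero.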
