Vision Sciences Society Annual Meeting Abstract  |   September 2021
The format of visual working memory representations
Author Affiliations
  • Yuna Kwak
    New York University, Department of Psychology
  • Masih Rahmati
    New York University, Department of Psychology
  • Clayton E. Curtis
    New York University, Department of Psychology
    New York University, Center for Neural Science
Journal of Vision September 2021, Vol.21, 2772. doi:https://doi.org/10.1167/jov.21.9.2772
Abstract

Working memory (WM) enables us to maintain and manipulate information in the absence of an external stimulus. The contents of visual WM can be decoded from the spatial patterns of neural activity during memory delays across a widely distributed network of brain regions (Sreenivasan & D’Esposito, 2019). However, the nature of what is being represented by these patterns remains unclear and could even vary across the visual hierarchy. For instance, the direction of dot motion might be maintained by neurons in visual cortex with directional motion selectivity. At the same time, abstract representations of motion direction (e.g., an imagined vector) might be maintained by neurons in higher cortical areas that lack motion-direction tuning. To test this hypothesis, we examined whether features of a similar nature, such as orientation and motion direction, share a common neural representation during WM, using cross-modality decoding. We measured fMRI activity while participants maintained the orientation of a Gabor or the motion direction of a random dot kinematogram over the 12-s delay period of a delayed-estimation WM task. Consistent with previous studies, the contents of WM could be decoded from the patterns of activity in several retinotopically defined visual field maps. Critically, however, we find that in some maps a decoder trained on one type of target stimulus (e.g., Gabor orientation) can successfully decode the other type (e.g., dot motion direction), indicating that low-level visual features do not constitute the format of the WM representation. Moreover, cross-modality decoding accuracy was lower when training and testing on the stimulus presentation period, indicating that the representation shared across modalities is specific to WM.
In conclusion, the common representational format across memory for orientation and motion direction suggests that high-dimensional perceptual information is condensed into a low-dimensional representation for WM.
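The cross-modality decoding logic described above — train a classifier on delay-period activity patterns from one stimulus type, then test it on the other — can be illustrated with a minimal sketch. Everything here is hypothetical: the data are simulated voxel patterns with a shared "abstract feature" axis, the labels are a binarized feature for simplicity (the actual study decoded continuous orientation/direction from fMRI), and none of the variable names or parameters come from the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50  # hypothetical dimensions, not from the study

# Assumption for illustration: both modalities drive the same "abstract"
# pattern axis in a map, as the cross-decoding result would imply.
shared_axis = rng.standard_normal(n_voxels)

def simulate_delay_patterns(labels, signal=1.0, noise=1.0):
    """Simulated delay-period voxel patterns whose class signal lies
    along the shared axis, plus independent Gaussian noise."""
    signs = np.where(labels == 1, 1.0, -1.0)[:, None]
    return signs * signal * shared_axis + noise * rng.standard_normal(
        (len(labels), n_voxels)
    )

# Binarized feature labels (e.g., clockwise vs. counterclockwise of a reference)
labels = rng.integers(0, 2, n_trials)
X_orientation = simulate_delay_patterns(labels)  # "Gabor orientation" trials
X_motion = simulate_delay_patterns(labels)       # "dot motion" trials

# Cross-modality decoding: fit on one stimulus type, score on the other.
clf = LogisticRegression().fit(X_orientation, labels)
cross_acc = clf.score(X_motion, labels)
print(f"cross-modality decoding accuracy: {cross_acc:.2f}")
```

Because the simulated signal for both "modalities" lies on the same axis, the classifier transfers and cross-decoding accuracy lands well above chance (0.5); if each modality were instead given its own independent signal axis, transfer would collapse to chance, mirroring the contrast the abstract draws between WM-delay and stimulus-period representations.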
