Vision Sciences Society Annual Meeting Abstract  |   September 2024
Characterizing representational drift via a context-dependent visual working memory task
Author Affiliations
  • Yixin Yuan
    UC San Diego
  • Mikio Aoi
    UC San Diego
  • John Serences
    UC San Diego
Journal of Vision September 2024, Vol.24, 1386. doi:https://doi.org/10.1167/jov.24.10.1386
Abstract

Recent studies with longitudinal neural recordings in mice suggest that neural activity associated with repeated stimuli gradually deviates from its initial pattern over time, even when task performance stays constant. This phenomenon, termed ‘representational drift’, challenges the common assumption that after fully learning a task, neural responses stabilize to support robust representations of task-relevant stimuli. Here we sought to determine whether representational drift occurs in human subjects engaged in a visual working memory task, and whether any drift is context specific. We recruited 4 participants (3F, 1M) for a five-session fMRI study spread across 3-5 weeks. During each session, participants performed a two-alternative forced-choice task that required judging whether a probe object matched the target object shown before a 3-10 second delay period. Importantly, we manipulated the relevant stimulus components so that there were two task contexts: on half of the trials, participants were instructed to remember the location and color of the object, while on the other half they were instructed to remember the location and shape. Participants were behaviorally trained, and task difficulty was staircased so that accuracy stabilized at ~75% before the first scan. We then ran ridge-regularized circular regression on activation patterns in functionally defined regions of interest to predict the spatial location of the target object across trials. On average, within-session decoding of the target object’s spatial location was more accurate than cross-session decoding. These results suggest that representations of spatial locations encoded in working memory drift from session to session, and that this effect can be readily detected in the human cortex via fMRI.
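The decoding step can be illustrated with a minimal sketch. One common way to implement ridge-regularized circular regression (assumed here; the abstract does not specify the authors' exact construction) is to regress voxel patterns onto the sine and cosine of the remembered location angle with a ridge penalty and recover the predicted angle with atan2. The data below are synthetic stand-ins for ROI activation patterns, and the 0.5 drift term is an illustrative assumption, not an estimate from the study.

```python
# Sketch, assuming a sine/cosine encoding of the remembered angle.
import numpy as np
from sklearn.linear_model import Ridge

def fit_circular_ridge(X, theta, alpha=1.0):
    """Ridge regression from voxel patterns X (trials x voxels)
    to the unit-circle encoding [sin(theta), cos(theta)]."""
    Y = np.column_stack([np.sin(theta), np.cos(theta)])
    return Ridge(alpha=alpha).fit(X, Y)

def predict_angle(model, X):
    """Recover predicted angles (radians) from the fitted sin/cos outputs."""
    sin_hat, cos_hat = model.predict(X).T
    return np.arctan2(sin_hat, cos_hat)

def circular_accuracy(theta_true, theta_pred):
    """Mean cosine of the angular error: 1 = perfect, ~0 = chance."""
    return np.mean(np.cos(theta_true - theta_pred))

# Synthetic "sessions": shared tuning weights W plus a session-specific
# perturbation that mimics representational drift.
rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500
theta_s1 = rng.uniform(0, 2 * np.pi, n_trials)
theta_s2 = rng.uniform(0, 2 * np.pi, n_trials)
W = rng.normal(size=(2, n_voxels))
drift = 0.5 * rng.normal(size=(2, n_voxels))
enc = lambda th: np.column_stack([np.sin(th), np.cos(th)])
X_s1 = enc(theta_s1) @ W + rng.normal(size=(n_trials, n_voxels))
X_s2 = enc(theta_s2) @ (W + drift) + rng.normal(size=(n_trials, n_voxels))

# Train on the first half of session 1; test within session (held-out
# session-1 trials) and across sessions (session-2 trials).
model = fit_circular_ridge(X_s1[:100], theta_s1[:100])
within = circular_accuracy(theta_s1[100:], predict_angle(model, X_s1[100:]))
cross = circular_accuracy(theta_s2[100:], predict_angle(model, X_s2[100:]))
print(f"within-session: {within:.2f}  cross-session: {cross:.2f}")
```

In this construction, drift shows up as the cross-session score falling below the within-session score, mirroring the comparison reported in the abstract.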
