December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Differential mechanisms of learning-related change
Author Affiliations & Notes
  • Youssef Ali
    Queen's University
  • Jeffrey Wammes
    Queen's University
  • Footnotes
    Acknowledgements: This work was supported by an NSERC Discovery Grant (JDW).
Journal of Vision December 2022, Vol.22, 4302. doi:https://doi.org/10.1167/jov.22.14.4302
Youssef Ali, Jeffrey Wammes; Differential mechanisms of learning-related change. Journal of Vision 2022;22(14):4302. https://doi.org/10.1167/jov.22.14.4302.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The capacity of our visual memory is enormous. As a result, individual memories overlap with one another, yet we are still able to distinguish between them effectively. This may be because the degree of overlap is dynamic and changes with learning: memories differentiate when initial overlap is moderate and integrate when initial overlap is high. There is considerable neural evidence for this pattern, consistent with the non-monotonic plasticity hypothesis (NMPH). However, it has been difficult to curate a stimulus set that captures the entire range of possible overlap, and to establish a behavioural task sensitive to these representational shifts. Here, we fill this gap in our understanding of neuroplasticity by using model-based stimulus synthesis to create pairs of abstract visual stimuli that sample the entire range of possible feature overlap, and using a novel task to evaluate shifts in visual memory. During encoding, we explored the impact of task demands by embedding pairs into different learning tasks. We generated image pairs, each with one of five prescribed similarity levels determined by the correlation between feature vectors extracted from a pretrained convolutional neural network. The pairs were embedded into either an Episodic condition, where associations are explicitly learned, or a Statistical condition, where associations are implicitly learned via temporal contiguity. Shifts in overlap were measured using a four-alternative forced-choice task, in which participants were cued with an image and all response options were altered versions of its pairmate, each either more or less similar to the cue. Selecting more similar responses suggests integration, whereas selecting less similar responses indicates differentiation. When episodically encoded, visually overlapping memories followed an NMPH-consistent pattern, indicating that behaviour can index representational shifts. Interestingly, integration may be attenuated at the highest similarity level, so follow-ups are underway to further characterize the shape of the learning function.
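
As an illustration of the similarity metric described in the abstract, the sketch below scores an image pair by the Pearson correlation between feature vectors taken from a pretrained convolutional neural network. This is not the authors' code: the specific network (VGG-16), layer, preprocessing, and file names are assumptions for demonstration only.

```python
# Illustrative sketch (not the authors' code): estimate pairwise image
# similarity as the Pearson correlation between feature vectors from a
# pretrained CNN. Network choice (VGG-16), layer, and preprocessing are
# assumptions for demonstration purposes.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained CNN and drop the classification head so the forward
# pass returns a flattened feature vector rather than class logits.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier = torch.nn.Identity()
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def feature_vector(path: str) -> torch.Tensor:
    """Extract a flattened feature vector for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).flatten()

def pair_similarity(path_a: str, path_b: str) -> float:
    """Pearson correlation between the two images' feature vectors."""
    a, b = feature_vector(path_a), feature_vector(path_b)
    return torch.corrcoef(torch.stack([a, b]))[0, 1].item()

# Hypothetical usage: score one candidate pair.
# print(pair_similarity("stim_A.png", "stim_B.png"))
```

In a pipeline of this kind, many candidate pairs would be scored this way and then binned into the prescribed similarity levels (here, five) spanning the full range of feature overlap.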
