Vision Sciences Society Annual Meeting Abstract  |   August 2023
Open Access
Uncovering the dynamics of visual memory representations over time
Author Affiliations & Notes
  • Eden Zohar
    Tel Aviv University
  • Dekel Abeles
    Tel Aviv University
  • Stas Kozak
    Tel Aviv University
  • Nitzan Censor
    Tel Aviv University
  • Footnotes
    Acknowledgements  This work was supported by the Israel Science Foundation (ISF, grant 526/17), the US-Israel Binational Science Foundation (BSF, grant 2016058), and the European Research Council (ERC-2019-COG 866093).
Journal of Vision August 2023, Vol.23, 4722. doi:https://doi.org/10.1167/jov.23.9.4722
Abstract

The ability to accurately retrieve visual details of past events is a fundamental cognitive function relevant for daily life. While a visual stimulus contains an abundance of information, only some of it is later encoded into long-term memory. However, an ongoing challenge has been to objectively define and isolate which representations of past visual experiences are maintained in memory over time. To address this question, we leveraged the hierarchical structure of convolutional neural networks (CNNs) and its correspondence to human visual processing. Participants were recruited through the Amazon Mechanical Turk platform and performed the task online. They first encoded a set of images and were then tested with a two-alternative forced-choice recognition memory test at different encoding-retention intervals (immediate, 24 hours, 7 days). Importantly, to objectively isolate different levels of visual processing, distractors were selected based on their similarity to the target along specific layers of the CNN (VGG-16) hierarchy: each distractor shared high similarity with the target in early, intermediate, or late network layers. Preliminary results suggest that high-level representations (corresponding to late network layers) are better retained over time, whereas lower-level representations (corresponding to early network layers) decay faster. This experimental approach and the consequent findings provide novel insights into the dynamics of different levels of visual memory representations over time.
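To make the distractor-selection logic concrete, below is a minimal sketch (not the authors' code) of how layer-wise similarity between a target image and a candidate distractor could be computed from VGG-16 activations. The specific layer indices, the cosine-similarity metric, and the function names are illustrative assumptions; the abstract does not specify these details.

```python
# A minimal sketch, assuming PyTorch/torchvision. Layer indices and the
# cosine-similarity metric are hypothetical choices, not the authors' settings.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained VGG-16; eval mode disables training-time behavior such as dropout.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

# Hypothetical early / intermediate / late positions in vgg.features
# (a Sequential of conv/ReLU/pool modules).
LAYERS = {"early": 4, "intermediate": 16, "late": 29}

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def layer_features(img: Image.Image) -> dict:
    """Forward one image through vgg.features, keeping the chosen layers."""
    x = preprocess(img).unsqueeze(0)
    feats = {}
    with torch.no_grad():
        for i, module in enumerate(vgg.features):
            x = module(x)
            for name, idx in LAYERS.items():
                if i == idx:
                    feats[name] = x.flatten()
    return feats

def layerwise_similarity(target: Image.Image, candidate: Image.Image) -> dict:
    """Cosine similarity between two images at each chosen network depth."""
    a, b = layer_features(target), layer_features(candidate)
    return {name: F.cosine_similarity(a[name], b[name], dim=0).item()
            for name in LAYERS}

# Usage: a "late-layer" distractor would be a candidate whose similarity to the
# target is high at the late layer relative to the earlier depths, e.g.
# sims = layerwise_similarity(Image.open("target.jpg"), Image.open("cand.jpg"))
```

Under this scheme, recognition accuracy at each retention interval can be compared across the three distractor types to estimate how quickly each level of representation decays.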
