Vision Sciences Society Annual Meeting Abstract  |   August 2023
Memory augmentation with adaptive cognitive interfaces
Author Affiliations & Notes
  • Julia Pruin
    University of Chicago
  • Wilma Bainbridge
    University of Chicago
  • Monica Rosenberg
    University of Chicago
  • Megan deBettencourt
    University of Chicago
  • Footnotes
    Acknowledgements  National Science Foundation BCS-2043740; K99 MH128893
Journal of Vision August 2023, Vol.23, 5919. doi:
      Julia Pruin, Wilma Bainbridge, Monica Rosenberg, Megan deBettencourt; Memory augmentation with adaptive cognitive interfaces. Journal of Vision 2023;23(9):5919.

Memory is better for memorable images and during attentive states, and previous work shows that memorability and attentional state each explain significant variance in memory. Here we ask whether these two factors can be combined by precisely tailoring which images are shown during which attentional states. Can image memorability rescue memory during an attentional lapse? Conversely, can an attentive state rescue memory for an otherwise forgettable image? Our goal was to leverage both factors in real time to modulate human long-term memory. First, we selected scene images from the SUN database that reliably differed in memorability, indexed by corrected recognition (CR; CRhigh=83.42, CRlow=46.51, p<0.001), and ensured that the image sets were balanced for lower-level visual features (RGB and spatial frequency, ps>0.5). Then, participants performed a continuous performance task, categorizing a series of rapidly presented scene images (500 scenes, 1 s stimulus presentation) as either indoor or outdoor. Building on prior work that monitored attention dynamics in real time via behavioral fluctuations in response time (RT), we triggered specific images during high (slow-RT) or low (fast-RT) attentional states. Afterwards, participants completed a surprise recognition memory test. Participants performed both the attention and memory phases above chance (A′s>.5, ps<0.001), and we successfully detected attentive and inattentive moments in real time, individually tailored to each participant’s RTs. We then manipulated which specific images were paired with which attentional states: for half of the participants, extremely memorable images were presented during extremely attentive moments, and vice versa for the other half of participants. In sum, we created an adaptive cognitive interface that jointly controls stimulus presentation contingent on attention dynamics, creating personalized encoding experiences.
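The abstract does not specify the exact real-time triggering algorithm. As a minimal sketch, assuming the attentional state is classified by comparing each trial's RT against a sliding window of that participant's recent RTs (the window size, baseline count, and z-score threshold below are illustrative choices, not parameters from the study):

```python
from collections import deque
from statistics import mean, stdev

class AttentionTrigger:
    """Illustrative real-time attentional-state detector based on RTs.

    Assumption (not from the abstract): a trial is classed as an
    attentive moment when its RT is unusually slow relative to recent
    trials, and an inattentive moment when it is unusually fast.
    """

    def __init__(self, window=20, z_threshold=1.0):
        self.rts = deque(maxlen=window)  # sliding window of recent RTs (s)
        self.z_threshold = z_threshold

    def update(self, rt):
        """Record one trial's RT; return the detected state, if any."""
        state = None
        if len(self.rts) >= 5:  # require a minimal baseline first
            mu, sd = mean(self.rts), stdev(self.rts)
            if sd > 0:
                z = (rt - mu) / sd
                if z >= self.z_threshold:
                    state = "attentive"    # slow RT -> high attentional state
                elif z <= -self.z_threshold:
                    state = "inattentive"  # fast RT -> low attentional state
        self.rts.append(rt)
        return state
```

In a triggering loop, a non-`None` return value would cue presentation of a high- or low-memorability image on the next trial, according to the participant's assigned pairing condition.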

