Abstract
Memory is better for memorable images and during attentive states. Previous work shows that memorability and attentional state each explain significant variance in memory. Here, we investigated whether we could precisely tailor which images are shown during which attentional states. Can we leverage image memorability to rescue memory during an attentional lapse? Or, conversely, can an attentive state rescue memory for an otherwise forgettable image? Our goal was to leverage both factors in real time to modulate human long-term memory. First, we selected scene images from the SUN database that reliably differed in memorability, quantified as corrected recognition (CR; high: 83.42, low: 46.51; p < .001), and ensured that the two sets were balanced on low-level visual features (RGB and spatial frequency, ps > .5). Next, participants performed a continuous performance task in which they categorized a series of rapidly presented scene images (500 scenes, 1 s each) as indoor or outdoor. Building on prior work that monitored attention dynamics in real time via behavioral fluctuations in response time (RT), we triggered specific images during high (slow RT) or low (fast RT) attentional states. Afterwards, participants completed a surprise recognition memory test. Participants performed both the attention and memory phases above chance (A′s > .5, ps < .001), and we successfully detected attentive and inattentive moments in real time using thresholds tailored to each participant's RTs. We then manipulated which specific images were paired with which attentional states: for half of the participants, highly memorable images were presented during highly attentive moments, and this pairing was reversed for the other half. In sum, we created an adaptive cognitive interface that jointly controls stimulus presentation contingent on attention dynamics, yielding personalized encoding experiences.
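To make the stimulus-selection step concrete, here is a minimal Python sketch of how image sets differing in memorability could be split and then checked for balance on low-level features. The DataFrame column names ('cr', 'mean_rgb', 'spatial_freq'), the set size, and the top/bottom split are illustrative assumptions, not the authors' actual pipeline.

```python
import pandas as pd
from scipy import stats

def select_memorability_sets(images: pd.DataFrame, n_per_set: int = 250):
    """Split candidate images into high- vs. low-memorability sets and
    check that the sets are balanced on low-level visual features.

    Column names ('cr', 'mean_rgb', 'spatial_freq') are hypothetical.
    """
    ranked = images.sort_values("cr", ascending=False)
    high, low = ranked.head(n_per_set), ranked.tail(n_per_set)

    # Balance check: independent-samples t-tests on each low-level feature.
    # In practice, one would swap images and re-test until all ps > .5,
    # matching the balance criterion reported in the abstract.
    balance_pvals = {
        feat: stats.ttest_ind(high[feat], low[feat]).pvalue
        for feat in ("mean_rgb", "spatial_freq")
    }
    return high, low, balance_pvals
```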
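The real-time triggering logic could work along these lines: maintain a running window of recent RTs and compare it against thresholds derived from the participant's own baseline RT distribution, so that detection is individually tailored. The three-trial window and tertile cutoffs below are assumptions modeled on prior real-time triggering work, not parameters stated in the abstract.

```python
import numpy as np
from collections import deque

def make_rt_monitor(baseline_rts, window=3, fast_q=1/3, slow_q=2/3):
    """Return a callable that classifies the current attentional state
    from a running window of response times (RTs).

    Fast RTs (below the participant's fastest-tertile cutoff) mark
    inattentive moments; slow RTs mark attentive moments.
    """
    fast_cut = np.quantile(baseline_rts, fast_q)   # below this -> lapse
    slow_cut = np.quantile(baseline_rts, slow_q)   # above this -> attentive
    recent = deque(maxlen=window)

    def update(rt):
        recent.append(rt)
        if len(recent) < window:
            return "undetermined"
        mean_rt = float(np.mean(recent))
        if mean_rt < fast_cut:
            return "inattentive"   # trigger an image assigned to low attention
        if mean_rt > slow_cut:
            return "attentive"     # trigger an image assigned to high attention
        return "undetermined"

    return update
```

On each trial of the continuous performance task, the experiment script would call `update(rt)`; when it returns "attentive" or "inattentive", the next image would be drawn from the memorability bin assigned to that state under the participant's condition.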
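Finally, the above-chance performance statistics (A′s > .5) rely on the nonparametric sensitivity index A′, computed from hit and false-alarm rates. A short sketch of the standard Grier (1971) formula, assuming that is the variant used:

```python
def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Nonparametric sensitivity A' (Grier, 1971): .5 = chance, 1 = perfect."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5  # no discrimination between old and new items
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```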