Abstract
In our daily lives, we only remember a fraction of what we see. Memory failures can arise from factors including attentional lapses and poor item memorability. However, most models of human memory disregard both an individual's attentional state and an image's memorability. In this study, we consider these image- and individual-specific influences on memory simultaneously to build a model of visual memory. To this end, we analyzed data from two experiments (N=55) that used response time to index attentional state during a visual attention task (with trial-unique scene stimuli) and measured subsequent image recognition. We then collected data from participants (N=722) performing a continuous recognition task on Amazon Mechanical Turk to characterize the memorability of each of these 1100 scene images. Memorability was operationalized as the online participants' average memory performance for each image, because performance was highly consistent across individuals. We next used mixed-effects models to predict subsequent recognition memory in the two attention experiments. Specifically, we predicted recognition memory from each image's memorability score (which varied across images but was constant across individuals) or from the attentional state at encoding (which varied across both images and individuals). These models revealed that both image memorability and individual attentional state explain significant variance in subsequent image memory. Furthermore, a joint model including both memorability and attentional state predicted subsequent memory better than models based on either factor alone, demonstrating that memorability and attention explain unique variance in subsequent memory. Thus, building models based on both individual- and image-specific factors allows for directed forecasting of what we remember.
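To make the model-comparison logic concrete, below is a minimal sketch (not the authors' analysis code) of memorability-only, attention-only, and joint mixed-effects models of recognition memory, implemented with Python's statsmodels on synthetic data. The column names (remembered, memorability, attention, subject), the random-intercept structure, and the use of a linear rather than logistic link for the binary outcome are illustrative assumptions.

```python
# Sketch: compare memorability-only, attention-only, and joint predictors of
# subsequent recognition memory with subject-level random intercepts.
# Synthetic data and column names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_images = 20, 50
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_images),
    "image": np.tile(np.arange(n_images), n_subjects),
})
image_memorability = rng.uniform(0.4, 1.0, n_images)      # image-level score (constant across individuals)
df["memorability"] = image_memorability[df["image"]]
df["attention"] = rng.normal(0.0, 1.0, len(df))           # encoding attentional state (varies by image and individual)
p = 0.2 + 0.5 * df["memorability"] + 0.1 * df["attention"]
df["remembered"] = (rng.random(len(df)) < p.clip(0, 1)).astype(float)  # 0/1 recognition outcome

# Each model includes a random intercept per subject via `groups`.
m_mem = smf.mixedlm("remembered ~ memorability", df, groups=df["subject"]).fit()
m_att = smf.mixedlm("remembered ~ attention", df, groups=df["subject"]).fit()
m_joint = smf.mixedlm("remembered ~ memorability + attention", df, groups=df["subject"]).fit()

# If the joint model fits better than both single-predictor models (e.g., higher
# log-likelihood), memorability and attention each explain unique variance.
for name, m in [("memorability", m_mem), ("attention", m_att), ("joint", m_joint)]:
    print(f"{name:>12s}  logLik = {m.llf:.1f}")
```

The key design point this sketch illustrates is that memorability enters as an image-level predictor shared across participants, whereas attentional state is trial-specific, so the joint model can test whether each factor adds predictive value beyond the other.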