September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Building a comprehensive model of visual memory from images and individuals
Author Affiliations
  • Cheyenne D. Wakeland-Hart
    University of Chicago
  • Megan T. deBettencourt
    University of Chicago
  • Steven Cao
    University of Chicago
  • Wilma A. Bainbridge
    University of Chicago
  • Monica D. Rosenberg
    University of Chicago
Journal of Vision September 2021, Vol.21, 2224. doi:https://doi.org/10.1167/jov.21.9.2224
Abstract

In our daily lives, we remember only a fraction of what we see. Memory failures can arise from factors including attentional lapses and poor item memorability. However, most models of human memory disregard both an individual's attentional state and an image's memorability. In this study, we consider these image- and individual-specific influences on memory simultaneously to build a model of visual memory. To this end, we analyzed data from two experiments (N=55) that used response time to index attentional state during a visual attention task (with trial-unique scene stimuli) and measured subsequent image recognition. We then collected data from participants (N=722) performing a continuous recognition task on Amazon Mechanical Turk to characterize the memorability of each of these 1100 scene images. Memorability was operationalized as the online participants' average memory performance, as performance was highly consistent across individuals. We next used mixed-effects models to predict subsequent recognition memory in the two attention experiments. Specifically, we predicted recognition memory from each image's memorability score (which varied across images but was constant across individuals) or from the attentional state at encoding (which varied across both images and individuals). These models revealed that both image memorability and individual attentional state explain significant variance in subsequent image memory. Furthermore, a joint model including both memorability and attentional state predicted subsequent memory better than models based on either factor alone, demonstrating that memorability and attention explain unique variance in subsequent memory. Thus, building models based on both individual- and image-specific factors allows for directed forecasting of what we remember.
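The model-comparison logic described above can be illustrated with a simplified sketch. The snippet below is not the authors' analysis: it uses simulated data and a linear-probability fit in place of the mixed-effects models, and all variable names and effect sizes are hypothetical. It shows only the comparison pattern — a joint predictor set explaining more variance in subsequent memory than memorability or attentional state alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # simulated encoding trials (hypothetical)

# Hypothetical predictors: a per-image memorability score and an
# RT-derived attentional state at encoding (both simulated here).
memorability = rng.uniform(0.0, 1.0, n)
attention = rng.normal(0.0, 1.0, n)

# Simulate subsequent recognition with both factors contributing,
# so each should explain unique variance.
logit = -0.5 + 2.0 * memorability + 0.8 * attention
remembered = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

def r_squared(predictors, outcome):
    """Variance explained by an intercept-plus-predictors linear fit."""
    design = np.column_stack([np.ones(len(outcome)), predictors])
    beta, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    residuals = outcome - design @ beta
    return 1.0 - residuals.var() / outcome.var()

r2_memorability = r_squared(memorability[:, None], remembered)
r2_attention = r_squared(attention[:, None], remembered)
r2_joint = r_squared(np.column_stack([memorability, attention]), remembered)

print(f"memorability only: {r2_memorability:.3f}")
print(f"attention only:    {r2_attention:.3f}")
print(f"joint model:       {r2_joint:.3f}")
```

In this simulation, the joint model's variance explained exceeds that of either single-factor model, mirroring the abstract's conclusion; the actual study instead fit mixed-effects models with participant-level random effects.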
