September 2017, Volume 17, Issue 10 (Open Access)
Vision Sciences Society Annual Meeting Abstract | August 2017
Enhanced perceptual processing of visual context benefits later memory
Author Affiliations
  • Megan deBettencourt
    Institute for Mind and Biology, University of Chicago
    Department of Psychology, University of Chicago
  • Nicholas Turk-Browne
    Princeton Neuroscience Institute, Princeton University
    Department of Psychology, Princeton University
  • Kenneth Norman
    Princeton Neuroscience Institute, Princeton University
    Department of Psychology, Princeton University
Journal of Vision August 2017, Vol. 17(10), 95.
Fluctuations in attention affect task performance in the moment, but can also have long-lasting consequences by influencing memory formation. These effects are typically studied by manipulating whether to-be-remembered objects or words are selectively attended during encoding. However, a key determinant of memory is the temporal context in which stimuli are embedded, not just the individual stimuli themselves. Here we examine how attention to temporal context impacts subsequent memory. Participants in two fMRI experiments completed multiple runs of a memory-encoding task. In each run, they studied lists of sequentially presented words for a later test. Between words, participants were rapidly presented with a series of photographs from a single visual category, either faces or scenes. These photographs served as the temporal context, and were not themselves tested for memory. At the end of the run, participants were asked to recall as many words as possible from one of the lists. We trained a multivariate pattern classifier to decode the two possible contexts (face versus scene) from an independent localizer task with no words. Applying this classifier to the memory-encoding runs allowed us to measure the perceptual processing of the temporal context for a given list. As a manipulation check, we were able to decode the visual category of the interleaved context photographs when collapsing across lists. Critically, list-wise variance in this decoding related to list-wise variance in the number of words later recalled. Moreover, within lists, there was more classifier evidence for the category of the context surrounding, and even preceding, words that were later remembered versus forgotten. Altogether, these findings suggest that enhanced contextual processing may be one mechanism through which attention can boost memory formation.
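The analysis logic described above (train a face-versus-scene classifier on independent localizer data, then read out "context evidence" from patterns recorded during memory encoding) can be illustrated with a minimal sketch. This is not the authors' pipeline; it uses simulated activation patterns, scikit-learn's logistic regression as a stand-in multivariate pattern classifier, and made-up dimensions throughout.

```python
# Hypothetical sketch of an MVPA context-decoding analysis, using
# simulated data rather than real fMRI patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 200  # assumed pattern dimensionality

# Simulated localizer run: 100 face and 100 scene trials, each
# category defined by a distinct mean pattern plus Gaussian noise.
face_mu = rng.normal(0, 1, n_voxels)
scene_mu = rng.normal(0, 1, n_voxels)
X_loc = np.vstack([face_mu + rng.normal(0, 1, (100, n_voxels)),
                   scene_mu + rng.normal(0, 1, (100, n_voxels))])
y_loc = np.array([0] * 100 + [1] * 100)  # 0 = face, 1 = scene

# Train the classifier on the localizer data only.
clf = LogisticRegression(max_iter=1000).fit(X_loc, y_loc)

# Simulated encoding run: 30 timepoints from a list whose
# interleaved context photographs were scenes.
X_list = scene_mu + rng.normal(0, 1, (30, n_voxels))

# "Context evidence" for the list = mean classifier probability
# assigned to the true context category across timepoints.
evidence = clf.predict_proba(X_list)[:, 1].mean()
print(f"scene evidence for this list: {evidence:.2f}")
```

In the actual study, one such evidence score per list (or per word's surrounding timepoints) would then be related to behavior, e.g., correlated with the number of words recalled from that list.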

Meeting abstract presented at VSS 2017

