October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Visual statistical learning distorts feature memory
Author Affiliations & Notes
  • Brynn E. Sherman, Yale University
  • Nicholas B. Turk-Browne, Yale University
Acknowledgements: NSF GRFP; NIH R01 MH069456; CIFAR
Journal of Vision October 2020, Vol.20, 174. doi:https://doi.org/10.1167/jov.20.11.174
Citation: Brynn E. Sherman, Nicholas B. Turk-Browne; Visual statistical learning distorts feature memory. Journal of Vision 2020;20(11):174. https://doi.org/10.1167/jov.20.11.174.
Abstract

Visual experience contains a mix of predictable regularities (e.g., the layout of offices at work and the colleagues you interact with there) and idiosyncratic features layered unpredictably on top of this structure (e.g., the weather outside and the clothes people wear). How we represent these two aspects of experience has typically been investigated separately, with statistical learning tasks used to study how we infer structure from regularities, and visual short- and long-term memory tasks treating each item de novo, isolated from stimulus history and surrounding spatiotemporal context. Here we ask how these two kinds of experience interact. We investigated how the presence of temporal regularities between objects influences visual short-term memory for idiosyncratic features of these objects. Participants were first exposed to a sequence of black shapes. Half of the shapes were temporally paired, with one shape (A) always followed by another shape (B). The other half of shapes (X) were unpaired and could be preceded or followed by many possible shapes. After being familiarized with this structure, participants completed a visual short-term memory task. On each trial, they were shown a rapid series of four shapes, each in a unique and random color. Participants were then probed with one of the shapes and asked to reproduce its color from that trial. Each set of four shapes on a trial contained one A/B pair and two unpaired X items, allowing us to assess the influence of temporal predictability on color memory. Using mixture modeling, we found that color memory was more precise for X shapes than for both A and B shapes. Preliminary analyses suggested that color reports for A and B shapes may be biased away from each other in color space. Together, these data suggest that learned regularities may interfere with encoding of idiosyncratic details into visual memory.
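The mixture modeling mentioned above is commonly implemented as a two-component model of color-report errors: a von Mises (circular normal) distribution centered on the studied color for trials where the item is in memory, plus a uniform component for guesses. The sketch below fits this standard model by maximum likelihood; it is an illustration of the general technique, not the authors' actual analysis code, and the function name, starting values, and simulated parameters are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_mixture(errors):
    """Fit p(err) = p_mem * VonMises(0, kappa) + (1 - p_mem) * Uniform(-pi, pi).

    errors: response-minus-target angles in radians, wrapped to [-pi, pi].
    Returns (p_mem, kappa): probability the item was in memory, and memory
    precision (higher kappa = narrower error distribution = more precise).
    """
    def neg_log_lik(params):
        p_mem, kappa = params
        dens = p_mem * vonmises.pdf(errors, kappa) + (1.0 - p_mem) / (2.0 * np.pi)
        return -np.sum(np.log(dens))

    # Illustrative starting values and bounds (assumptions, not prescribed).
    res = minimize(neg_log_lik, x0=[0.8, 5.0],
                   bounds=[(1e-3, 1.0 - 1e-3), (1e-2, 200.0)])
    return res.x[0], res.x[1]

# Demo on simulated data: 70% precise responses (kappa = 10), 30% guesses.
rng = np.random.default_rng(0)
n = 1000
in_memory = rng.random(n) < 0.7
errs = np.where(in_memory,
                rng.vonmises(0.0, 10.0, n),
                rng.uniform(-np.pi, np.pi, n))
p_mem, kappa = fit_mixture(errs)
```

In this framework, the precision difference reported in the abstract would appear as a higher fitted kappa for X shapes than for A and B shapes.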
