September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual statistical regularities aid visual working memory of objects in a task-dependent manner
Author Affiliations & Notes
  • Gregory L Wade
    University of Delaware
  • Timothy J Vickery
    University of Delaware
Journal of Vision September 2019, Vol.19, 202c. doi:https://doi.org/10.1167/19.10.202c

      Gregory L Wade, Timothy J Vickery; Visual statistical regularities aid visual working memory of objects in a task-dependent manner. Journal of Vision 2019;19(10):202c. doi: https://doi.org/10.1167/19.10.202c.


      © ARVO (1962-2015); The Authors (2016-present)

Abstract

People adeptly learn both spatial and temporal visual statistical contingencies (visual statistical learning, or VSL). VSL supports explicit recognition judgments and enhances performance in various contexts. For example, Brady, Konkle, and Alvarez (2009) demonstrated that spatial statistical contingencies between the color features of memory items support greater visual working memory (VWM) capacity (k), suggesting that VSL supports memory compression. In the present study, we first asked whether these findings generalize from simple features to complex shape characters. Second, we asked whether pre-exposure to temporal contingencies would support such compression. In the first experiment, subjects completed a VWM task in which they viewed 8 objects, maintained them in memory, and were then probed to identify the shape that had been presented at a cued location (8-alternative forced choice). On half of the trials, the 8 objects always appeared in paired configurations (e.g., shape A always appeared next to shape B), while on the other half, 8 different objects appeared in randomized configurations. Consistent with Brady, Konkle, and Alvarez (2009), participants learned the paired configurations over time, resulting in higher capacity for paired than for randomized configurations (p < .001). In the second experiment, prior to performing the memory task, participants completed a VSL familiarization task in which one set of objects co-occurred in temporal pairs within an image stream, while the other set appeared in randomized order. In the subsequent memory task, both sets of images were presented in consistent spatial pairs. If VSL supports VWM compression, scenes composed of previously paired shapes should have supported higher VWM capacity. However, no difference was observed, suggesting that VSL's support for VWM compression may be task-dependent. Future studies will examine whether this finding reflects a failure to generalize across tasks, a failure to generalize from temporal to spatial contingencies, or both.

Acknowledgement: NSF OIA 1632849 and NSF BCS 1558535