Vision Sciences Society Annual Meeting Abstract  |   September 2011
The capacity of encoding into visual short-term memory
Author Affiliations
  • Irida Mance
    Psychology Department, Michigan State University
  • Mark Becker
    Psychology Department, Michigan State University
  • Taosheng Liu
    Psychology Department, Michigan State University
Journal of Vision September 2011, Vol.11, 1276. doi:https://doi.org/10.1167/11.11.1276
Abstract

Goal: Many everyday activities depend on our ability to construct, maintain, and compare representations in a constantly changing visual environment. This ability has been shown to rely on a form of memory known as visual short-term memory (VSTM). Although considerable research has examined the capacity limits of short-term memory stores, few studies have addressed the initial formation of VSTM representations. Here we used a sequential-simultaneous task that allowed us to investigate limits in the process of initially encoding items into VSTM. Methods: Participants were briefly shown colored objects (targets), each followed by a pattern mask. The target objects were shown either sequentially or simultaneously. A probe object followed the targets, and participants decided whether it matched one of the targets in color (delayed match-to-sample test). In Experiment 1, we tested two targets on each trial; in Experiment 2, we varied the number of targets (two, three, or four) across trials. In each experiment we measured the accuracy of participants' delayed match-to-sample performance. Results and conclusion: We consistently found equal performance for sequential and simultaneous presentations with two targets. For larger set sizes (three and four), performance was worse in the simultaneous than in the sequential condition. These results indicate that encoding into VSTM is limited to two items, suggesting that only a subset of possible sensory representations is encoded concurrently. These results also suggest that one can selectively attend to more than one item at a time.
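The logic of the set-size prediction can be made explicit with a toy capacity model. The Python sketch below is an illustration only and is not part of the original study; it assumes a fixed concurrent encoding limit of two items per masked display and assumes that sequential trials split the targets evenly across two displays (a detail not specified in the abstract). Under those assumptions, sequential and simultaneous presentation yield the same encoded fraction at set size two, while simultaneous presentation falls behind at set sizes three and four, mirroring the reported pattern.

    # Toy model (illustration only, not the authors' analysis): if at most
    # k items can be encoded concurrently from a single masked display,
    # splitting targets across two sequential displays should help only
    # when the set size exceeds k.

    def fraction_encoded(set_size, n_displays, k=2):
        # Assumed: targets are divided evenly across the displays.
        per_display = set_size / n_displays
        # At most k items are encoded from each display before the mask.
        encoded = min(per_display, k) * n_displays
        return min(encoded / set_size, 1.0)

    for n in (2, 3, 4):
        seq = fraction_encoded(n, n_displays=2)  # sequential: two displays
        sim = fraction_encoded(n, n_displays=1)  # simultaneous: one display
        print(f"set size {n}: sequential {seq:.2f}, simultaneous {sim:.2f}")

Running this prints equal values (1.00 vs. 1.00) at set size two and a simultaneous deficit (1.00 vs. 0.67 and 1.00 vs. 0.50) at set sizes three and four, which is the qualitative pattern the abstract interprets as a two-item encoding limit.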
