August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Object Representations Guide Visual Short-Term Memory
Author Affiliations
  • Breana Carter
    George Washington University
  • Joseph Nah
    George Washington University
  • Sarah Shomstein
    George Washington University
Journal of Vision September 2016, Vol.16, 1073.
Breana Carter, Joseph Nah, Sarah Shomstein; Object Representations Guide Visual Short-Term Memory. Journal of Vision 2016;16(12):1073.

© ARVO (1962-2015); The Authors (2016-present)

Successful cognitive functioning requires engagement of visual attention and visual short-term memory (VSTM). There is overwhelming evidence that visual attention and VSTM are highly interrelated and engage similar regions within the attentional control network (e.g., the inferior parietal sulcus). These similarities led us to hypothesize that both systems operate under similar constraints. For example, attentional selection is known to be strongly influenced by object representations. Here, we ask whether VSTM is similarly constrained. In Experiment 1, a sample memory array of four differently colored squares appeared briefly on a three-rectangle arrangement. Two squares appeared on the central, fixated rectangle (same-object), while the other two appeared on one of the two flanking rectangles (different-object). After a 1000 ms retention period, a test array appeared, and participants judged whether it was the same as or different from the sample; a change occurred on half of the trials. Participants were split into two groups based on VSTM capacity. Object-based modulation of VSTM performance was observed in the high-capacity group, with higher change-detection accuracy when the change occurred on the attended (same) object than when it occurred on the different object. VSTM performance in the low-capacity group was not modulated by object representations. In Experiment 2, we added a cue highlighting the central object, with the expectation of increasing attentional allocation to it. With this central cue, object-based modulation of VSTM emerged in both the low- and high-capacity groups. Our results suggest that while the attentional system is automatically constrained by object-based representations, the VSTM system is more flexible: objects constrain VSTM automatically only in high-capacity individuals, suggesting that the use of object-based representations in VSTM may reflect a perceptual strategy. Interestingly, low-capacity individuals can benefit from object-based representations if those representations are made salient.

Meeting abstract presented at VSS 2016

