Vision Sciences Society Annual Meeting Abstract | August 2014
Semantic bias in visual working memory
Author Affiliations
  • Farahnaz Ahmed Wick
    University of Massachusetts Boston
  • Lucia Saura
    University of Massachusetts Boston
  • Chia-Chien Wu
    University of Massachusetts Boston
  • Marc Pomplun
    University of Massachusetts Boston
Journal of Vision August 2014, Vol.14, 852. https://doi.org/10.1167/14.10.852
Citation: Farahnaz Ahmed Wick, Lucia Saura, Chia-Chien Wu, Marc Pomplun; Semantic bias in visual working memory. Journal of Vision 2014;14(10):852. https://doi.org/10.1167/14.10.852.

Abstract

This study investigated whether and how the semantic relationships among individual objects from a scene context are bound in visual short-term memory (VSTM). Previous studies (Hwang, Wang & Pomplun, 2011) indicate that our strategies for memorizing objects in naturalistic scenes can be predicted by the semantic relationships between the objects in that scene: we tend to make saccades to the objects most semantically related to the currently fixated object. Why do such biases exist? One possibility is that consecutive inspection of semantically similar objects facilitates object memorization. We tested this hypothesis using a rapid serial presentation paradigm in which eight object images were shown in sequence for 250 ms each; each image consisted of a single grayscale object against a white background. Subsequently, participants saw another image and indicated whether it had appeared in the series. In six experiments, we varied the object sets (either randomly chosen or taken from a specific context such as an airport, park, or bedroom), the target objects for negative responses (objects from the same or a different context, or even of the same type as an object in the set), and the order of presentation (consecutive objects of high versus low semantic relationship). Recall rates were significantly better when objects came from the same context and when they were ordered to maximize the semantic similarity of consecutive objects. Generally, these recall rates seemed to be governed by object types and semantics rather than by the specific visual features of individual objects. These results demonstrate that object representations are episodically organized in VSTM according to scene context.
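
To make the ordering manipulation concrete, the sketch below shows one plausible way to sequence a stimulus set so that consecutive objects are maximally semantically similar. It is an illustration only: the greedy chaining procedure, the function and variable names, and the similarity scores are assumptions, not the authors' method; the abstract does not specify how the sequences were constructed or how semantic relatedness was quantified (related work such as Hwang, Wang & Pomplun, 2011 derived such scores from scene annotation data).

    # Hypothetical sketch (not the authors' code): greedily order a stimulus
    # set so that each object is followed by the most semantically similar
    # remaining object. Similarity scores are assumed to be given externally.
    import itertools

    def order_by_semantic_similarity(objects, similarity):
        """objects: list of object labels.
        similarity: dict mapping frozenset({a, b}) -> similarity score."""
        # Seed the sequence with the single most similar pair.
        a, b = max(itertools.combinations(objects, 2),
                   key=lambda pair: similarity[frozenset(pair)])
        sequence = [a, b]
        remaining = set(objects) - {a, b}
        # Repeatedly append the unused object most similar to the last one.
        while remaining:
            last = sequence[-1]
            nxt = max(remaining, key=lambda o: similarity[frozenset((last, o))])
            sequence.append(nxt)
            remaining.remove(nxt)
        return sequence

    # Example with made-up scores for four "airport" objects:
    objs = ["suitcase", "passport", "boarding pass", "x-ray scanner"]
    sims = {frozenset(p): s for p, s in {
        ("suitcase", "passport"): 0.4,
        ("suitcase", "boarding pass"): 0.3,
        ("suitcase", "x-ray scanner"): 0.7,
        ("passport", "boarding pass"): 0.9,
        ("passport", "x-ray scanner"): 0.5,
        ("boarding pass", "x-ray scanner"): 0.2,
    }.items()}
    print(order_by_semantic_similarity(objs, sims))
    # -> ['passport', 'boarding pass', 'suitcase', 'x-ray scanner']

Swapping max for min in the chaining step would yield the contrasting low-similarity orderings described in the abstract.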

Meeting abstract presented at VSS 2014
