Vision Sciences Society Annual Meeting Abstract | September 2024
Beyond 'Gist': The Dynamic Interplay of Conceptual Information and Visual Detail in Long-Term Memory
Author Affiliations
  • Nurit Gronau
    The Open University of Israel
  • Roy Shoval
    The Open University of Israel
  • Rotem Avital-Cohen
    The Open University of Israel
Journal of Vision September 2024, Vol.24, 652. doi:https://doi.org/10.1167/jov.24.10.652
Abstract

Conceptual information plays an important role in visual long-term memory (LTM); however, the precise nature of such semantic-visual interactions remains unclear. Here, we tested the effects of object meaning on memory for an arbitrary visual property, specifically item location. Unlike object-location binding in visual working memory (VWM), binding in LTM unfolds over a longer timescale and may therefore involve processes above and beyond those engaged in VWM. According to 'resource-limited' accounts, highly familiar items demand fewer encoding resources, leaving spare capacity for encoding visual detail (Popov & Reder, 2020). 'Schema-based' accounts, in contrast, suggest that conceptual knowledge may prioritize gist-based representations at the expense of visual detail (e.g., Bellana et al., 2021; Koutstaal et al., 2003); that is, semantic information may hinder item-specific memory, particularly over long time lags. To test these opposing accounts, participants encoded individual objects at arbitrary screen locations and were subsequently tested on their memory for these locations using a 4-AFC recognition test encompassing both old/new items and old/new locations. As expected, overall memory was higher for meaningful (real-world) than for meaningless (scrambled) objects. Critically, given correct item identification, correct location memory rates were significantly higher for the meaningful objects. A follow-up study employed only real-world objects that had been independently rated for 'meaningfulness' and 'visual complexity'. Once again, object meaning was positively associated with location accuracy, providing a more fine-grained measure of conceptual influence on visual memory. Finally, using objects with color-meaningful (e.g., red wine) versus color-meaningless (e.g., red balloon) features, we found that, in contrast to feature-independent theories (Utochkin & Brady, 2020), location memory relied more heavily on color memory when the latter was meaningful. Collectively, our findings align with resource-limited theories, suggesting that meaningful stimuli or features support enhanced LTM for arbitrary visual details. Follow-up studies will test semantic-visual memory dynamics over longer time lags.
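
Note on the conditional measure (a sketch of how the reported rate can be read, not a formula stated by the authors): "given correct item identification, the correct location memory rate" corresponds to a conditional accuracy of the form

$$
P(\text{location correct} \mid \text{item correct}) \;=\; \frac{N_{\text{item and location correct}}}{N_{\text{item correct}}},
$$

which conditions location accuracy on successful item recognition and thereby factors out the overall item-memory advantage of meaningful over scrambled objects.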
