Vision Sciences Society Annual Meeting Abstract | September 2024
Journal of Vision, Volume 24, Issue 10
Open Access
Chunking in Visual Working Memory Changes the Guidance of Attention in a Visual Search
Author Affiliations
  • Logan Doyle
    University of Toronto
  • Susanne Ferber
    University of Minnesota
Journal of Vision September 2024, Vol. 24, 678. doi: https://doi.org/10.1167/jov.24.10.678
Citation: Logan Doyle, Susanne Ferber; Chunking in Visual Working Memory Changes the Guidance of Attention in a Visual Search. Journal of Vision 2024;24(10):678. https://doi.org/10.1167/jov.24.10.678.
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Pairing two colors regularly across displays enables them to be accessed by visual working memory (VWM) more efficiently, a process often called chunking (Brady et al., 2009). Previous investigations have suggested that this VWM advantage may be due to the contribution of explicit long-term memory (LTM) (Huang & Awh, 2018; Ngiam et al., 2019). At the same time, colors actively maintained in VWM guide the deployment of attention (van Moorselaar, 2014) separately from representations maintained in LTM (Carlisle et al., 2011). We asked whether the representational changes that underlie chunking also alter how those VWM representations guide attention. Using an incidental capture design, we presented participants with four reliable (high-probability) color pairs across 10 blocks of VWM tests or visual search trials. Attentional guidance by the maintained colors was calculated as the difference in reaction time as a function of the distance between the search target and the maintained color pair. Across three experiments, participants with full explicit awareness of the color pairings were significantly more accurate than unaware participants in the VWM task (F(1, 266) = 22.93, p < 0.001), replicating Ngiam et al. (2019). Surprisingly, aware participants were significantly less guided toward those high-probability pairs in the search task than unaware participants (F(1, 36) = 4.77, p < 0.05). This slowing is attributable to chunking, as it was limited to aware participants maintaining high-probability pairs: there was no difference between aware and unaware participants for low-probability pairs (F(1, 32) = 0.187, p = 0.67) or for search displays containing single colors (F(1, 42) = 1.207, p = 0.28). Overall, participants who leveraged chunked representations to improve on the VWM task showed slowed attentional capture by maintained colors during visual search.
