Abstract
Pairing two colors consistently across displays allows them to be accessed more efficiently by visual working memory (VWM), a process often called chunking (Brady et al., 2009). Previous investigations have revealed that this VWM advantage may reflect the contribution of explicit long-term memory (LTM; Huang & Awh, 2018; Ngiam et al., 2019). At the same time, colors actively maintained in VWM guide the deployment of attention (van Moorselaar, 2014) separately from representations maintained in LTM (Carlisle et al., 2011). We asked whether the representational changes that underlie chunking also alter how those VWM representations guide attention. Using an incidental capture design, we presented participants with four reliable (high-probability) color pairs across 10 blocks of VWM tests or visual search trials. Attentional guidance by the maintained colors was calculated as the difference in reaction time as a function of the distance between the search target and the maintained color pair. Across three experiments, we found that participants with full explicit awareness of the color pairings were significantly more accurate than unaware participants in the VWM task (F(1, 266) = 22.93, p < 0.001), replicating Ngiam et al. (2019). Surprisingly, aware participants were significantly less guided toward those high-probability pairs in the search task than unaware participants (F(1, 36) = 4.77, p < 0.05). This reduction in guidance is attributable to chunking, as it was limited to aware participants maintaining high-probability pairs: there was no difference between aware and unaware participants for low-probability pairs (F(1, 32) = 0.187, p = 0.67) or for search displays with single colors (F(1, 42) = 1.207, p = 0.28). Overall, participants who leveraged chunked representations to improve VWM performance showed reduced attentional capture by the maintained colors during the visual search task.