Abstract
Conceptual information plays an important role in visual long-term memory (LTM); however, the precise nature of such semantic-visual interactions remains unclear. Here, we tested the effects of object meaning on memory for an arbitrary visual property, namely, item location. Unlike object-location binding in visual working memory (VWM), LTM's longer timescale may involve unique processes above and beyond those engaged in VWM. According to 'resource-limited' accounts, highly familiar items demand fewer encoding resources, leaving spare capacity for encoding visual detail (Popov & Reder, 2020). 'Schema-based' accounts, in contrast, suggest that conceptual knowledge prioritizes gist-based representations at the expense of visual detail (e.g., Bellana et al., 2021; Koutstaal et al., 2003); that is, semantic information may hinder item-specific memory, particularly over long time lags. To test these opposing accounts, participants encoded individual objects presented at arbitrary screen locations and were subsequently tested on their memory for these locations using a four-alternative forced-choice (4-AFC) recognition test encompassing both old/new items and old/new locations. As expected, overall memory was higher for meaningful (real-world) than for meaningless (scrambled) objects. Critically, conditional on correct item identification, location memory was significantly higher for the meaningful objects. A follow-up study employed only real-world objects that were independently rated for 'meaningfulness' and 'visual complexity'. Once again, object meaning was positively associated with location accuracy, providing a more fine-grained measure of conceptual influence on visual memory. Finally, using objects whose color was meaningful (e.g., red wine) versus meaningless (e.g., a red balloon), we found that, in contrast to feature-independent theories (Utochkin & Brady, 2020), location memory relied more heavily on color memory when color was meaningful. Collectively, our findings align with resource-limited accounts, suggesting that meaningful stimuli or features support enhanced LTM for arbitrary visual details. Follow-up studies will examine semantic-visual memory dynamics over longer time lags.