Lei Zheng, Jan-Gabriel Dobroschke, Stefan Pollmann; Contextual cueing survives allocentric and egocentric transformations. Journal of Vision 2021;21(9):1975. doi: https://doi.org/10.1167/jov.21.9.1975.
Introduction: Visual search can be guided by incidentally learned spatial configurations. This contextual cueing survives transformations such as rotation (Zheng & Pollmann, 2019). Here we investigated whether contextual cueing can guide search after an allocentric or egocentric transformation. Short-lived trial-to-trial priming speeds visual search after both kinds of transformation (Ball et al., 2017). In contrast, the long-lasting effects of target location probability cueing occur in an egocentric reference frame (Jiang & Swallow, 2013). To the best of our knowledge, the dependence of contextual cueing on egocentric or allocentric reference frames has not been investigated, although the robustness of contextual cueing to rotation may suggest independence from egocentric representation.

Methods: In two experiments, participants (n=24 and n=25, respectively) repeatedly searched for a T among L-shapes in twelve displays surrounded by an upright or tilted frame. Subsequently, unknown to them, we set up four conditions: fully repeated displays (same as in the training phase), identical displays with a rotated frame (only the egocentric reference preserved), an unchanged frame with a rotated display (only the allocentric reference preserved), and newly generated displays. Experiment 2 was designed in the same way as Experiment 1 but controlled the targets' absolute locations to rule out target probability cueing.

Results: Preserved egocentric reference frames speeded search relative to new displays (Experiment 1: t(23)=4.26, p<.001; Experiment 2: t(24)=4.14, p<.001). Preserved allocentric reference frames likewise speeded search (as a tendency in Experiment 1: t(23)=4.14, p=.025; significantly in Experiment 2: t(24)=3.87, p<.001).

Conclusion: The data show that disrupting either the allocentric or the egocentric reference frame could not eliminate the search advantage for incidentally learned displays, as long as the other cue was preserved.
Our findings suggest that egocentric and allocentric representations exist in parallel in implicit spatial memory, lending additional support to the 'two-system' model of spatial memory (Burgess, 2006).