September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Contextual cueing survives allocentric and egocentric transformations
Author Affiliations & Notes
  • Lei Zheng
    Psychology, Otto-von-Guericke-University, Magdeburg, Germany
  • Jan-Gabriel Dobroschke
    Psychology, Otto-von-Guericke-University, Magdeburg, Germany
  • Stefan Pollmann
    Psychology, Otto-von-Guericke-University, Magdeburg, Germany
    Center of Brain and Behavioral Sciences, Magdeburg, Germany
    Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
  • Footnotes
    Acknowledgements  This work was supported in part by the China Scholarship Council (No. 201808120093).
Journal of Vision September 2021, Vol.21, 1975. doi:
      Lei Zheng, Jan-Gabriel Dobroschke, Stefan Pollmann; Contextual cueing survives allocentric and egocentric transformations. Journal of Vision 2021;21(9):1975.

Introduction: Visual search can be guided by incidentally learnt spatial configurations. This contextual cueing survives transformations such as rotation (Zheng & Pollmann, 2019). Here we investigated whether contextual cueing can guide search after an allocentric or egocentric transformation. Short-lived trial-to-trial priming speeds visual search after both kinds of transformation (Ball et al., 2017). In contrast, the long-lasting effects of target location probability cueing occurred in an egocentric reference frame (Jiang & Swallow, 2013). To the best of our knowledge, the dependence of contextual cueing on egocentric or allocentric reference frames has not been investigated, although the robustness of contextual cueing to rotation may suggest independence from egocentric representation.

Methods: In two experiments, participants (n=24 and n=25, respectively) repeatedly searched for a T among L-shapes in twelve displays surrounded by an upright or tilted frame. Subsequently, unknown to them, we set up four conditions: fully repeated displays (same as in the training phase), identical displays with a rotated frame (only the egocentric reference preserved), an unchanged frame with a rotated display (only the allocentric reference preserved), and newly generated displays. Experiment 2 was designed in the same way as Experiment 1 but controlled the targets’ absolute locations to rule out target location probability cueing.

Results: Preserved egocentric reference frames speeded search relative to new displays (Experiment 1: t(23)=4.26, p<.001; Experiment 2: t(24)=4.14, p<.001). Preserved allocentric reference frames likewise speeded search (as a tendency in Experiment 1: t(23)=4.14, p=.025; significantly in Experiment 2: t(24)=3.87, p<.001).

Conclusion: The data show that disrupting either the allocentric or the egocentric reference frame could not eliminate the search advantage for incidentally learned displays, as long as the other reference was preserved. Our findings suggest that egocentric and allocentric representations exist in parallel in implicit spatial memory, which lends additional support to the ‘two-system’ model of spatial memory (Burgess, 2006).
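The four test conditions amount to independently transforming the display configuration and the surrounding frame relative to training. A minimal sketch of that design logic, in Python — all function and parameter names are hypothetical, and the rotation angle of 90 degrees is an assumption for illustration (the abstract does not specify it):

```python
import math

def rotate_point(x, y, angle_deg):
    """Rotate a point by angle_deg around the display centre (0, 0)."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def make_test_condition(display, frame_angle, condition, rotation=90):
    """Derive one test condition from a trained display.

    display:     list of (x, y) item locations learned during training
    frame_angle: orientation (degrees) of the surrounding frame in training
    condition:   which reference frame(s) survive the transformation
    """
    if condition == "repeated":           # both references preserved
        return display, frame_angle
    if condition == "egocentric_only":    # same display, rotated frame
        return display, frame_angle + rotation
    if condition == "allocentric_only":   # rotated display, same frame
        rotated = [rotate_point(x, y, rotation) for x, y in display]
        return rotated, frame_angle
    if condition == "new":                # neither reference preserved
        raise NotImplementedError("generate a fresh random display here")
    raise ValueError(f"unknown condition: {condition}")
```

The point of the sketch is that "egocentric_only" and "allocentric_only" are complementary: each breaks exactly one of the two spatial references that are jointly intact in the "repeated" condition.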

