Yoko Higuchi, Jun Saiki; Contextual cueing effect without eye movements. Journal of Vision 2014;14(10):1075. doi: 10.1167/14.10.1075.
Visual search performance is facilitated when fixed spatial configurations are presented repeatedly, an effect known as contextual cueing (Chun & Jiang, 1998). Previous studies have shown that a major contributor to contextual cueing is a reduced number of saccades in repeated displays (Peterson & Kramer, 2001a; Zhao, Liu, Jiao, Zhou, Li, & Sun, 2012). Fewer saccades in repeated displays may indicate that observers learn how to move their eyes in those displays. In the current study, we investigated whether the contextual cueing effect can be obtained without eye movements. Participants searched for a rotated T target among L distractors and judged whether the target was rotated to the left or the right. Two set sizes (8 and 12) were used. Participants were randomly assigned to one of two groups: a with-eye-movement group or a without-eye-movement group. Participants in the without-eye-movement group were instructed to fixate the center of the display and not to move their eyes during the visual search task, whereas participants in the with-eye-movement group could move their eyes freely. The with-eye-movement group showed a significant contextual cueing effect at set size 8 and a marginal effect at set size 12. The without-eye-movement group showed a significant contextual cueing effect at both set sizes 8 and 12. Effect sizes of contextual cueing did not differ significantly between the two groups, and participants in neither group could recognize the repeated displays. These results indicate that contextual cueing is robustly obtained without eye movements, suggesting that spatial configurations can be learned implicitly regardless of how the eyes move.
Meeting abstract presented at VSS 2014