September 2018, Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Variability influences generalization in implicit learning of spatial configurations
Author Affiliations
  • Yoko Higuchi
    Graduate School of Informatics, Nagoya University
  • Yoshiyuki Ueda
    Kokoro Research Center, Kyoto University
  • Jun Saiki
    Graduate School of Human and Environmental Studies, Kyoto University
Journal of Vision September 2018, Vol.18, 264. doi:
Implicit learning of spatial configurations facilitates visual search performance, a phenomenon known as contextual cueing. Studies have demonstrated that contextual cueing occurs even for variable configurations in which items are slightly jittered across repetitions. Humans may therefore be capable of extracting spatial regularity from variable instances and implicitly applying that regularity to new instances. Generalization should occur when the learned representation and a new instance are similar, but it remains unclear how similarity is computed in implicit learning. The current study investigated whether similarity metrics incorporate the effect of variability, using the contextual cueing paradigm. Participants were asked to search for a rotated T target among L distractors and to judge whether the target was rotated to the left or right. During the learning phase, similar distractor arrangements were presented repeatedly so that participants could learn the spatial regularity. In Experiment 1, the distractor locations were slightly jittered in some configurations, while they were invariant in the others. We found that learning from the fixed configurations did not generalize to a new, similar configuration, whereas learning from the jittered configurations did. Experiment 2 extended these findings by jittering locations according to Gaussian distributions with different ranges. The results showed that learning generalized more widely when the jitter range was large than when it was small. These results demonstrate that spatial variability during learning influences subsequent generalization in contextual cueing, and suggest that similarity between the learned representation and a new instance is computed based on the variability experienced during learning.
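The jittered-configuration manipulation can be illustrated with a minimal sketch: each repetition of a base distractor layout is perturbed by independent Gaussian noise, with the standard deviation playing the role of the jitter range. All names and parameter values here are hypothetical illustrations, not taken from the study's actual stimulus code.

```python
import random

def jitter_configuration(base_locations, sigma):
    """Return a copy of a base configuration with each item's (x, y)
    location perturbed by independent Gaussian noise.

    base_locations: list of (x, y) pixel coordinates (hypothetical).
    sigma: standard deviation of the jitter in pixels; a larger sigma
           corresponds to a wider jitter range, as in Experiment 2.
    """
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for (x, y) in base_locations]

# A toy base layout of three distractor locations.
base = [(100, 200), (300, 150), (250, 400)]

# Fixed condition (Experiment 1): sigma = 0 reproduces the layout exactly.
fixed_repeat = jitter_configuration(base, sigma=0)

# Small vs. large jitter ranges (Experiment 2).
small_jitter = jitter_configuration(base, sigma=5)
large_jitter = jitter_configuration(base, sigma=30)
```

Under this sketch, the study's finding amounts to: repetitions drawn with larger `sigma` lead to learning that transfers to a wider neighborhood of novel configurations.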

Meeting abstract presented at VSS 2018

