Abstract
Implicit learning of spatial configurations facilitates visual search performance—a phenomenon known as contextual cueing. Studies have demonstrated that contextual cueing occurs even for variable configurations in which items are slightly jittered across repetitions. Humans may thus be capable of implicitly extracting spatial regularity from variable instances and applying that regularity to new instances. Generalization should occur when a learned representation and a new instance are similar, but it remains unclear how similarity is computed in implicit learning. The current study used the contextual cueing paradigm to investigate whether the similarity metric incorporates the variability experienced during learning. Participants were asked to search for a rotated T target among L distractors and to judge whether the target was rotated to the left or to the right. During the learning phase, similar distractor arrangements were presented repeatedly so that participants could learn the spatial regularity. In Experiment 1, the distractor locations were slightly jittered across repetitions in some configurations, while they were held fixed in the others. We found that learning from the fixed configurations did not generalize to a new, similar configuration, whereas learning from the jittered configurations did. Experiment 2 extended these findings by drawing the jitter from Gaussian distributions with different ranges. The results showed that learning generalized more widely when the jitter range was large than when it was small. These results demonstrate that spatial variability during learning influences subsequent generalization in contextual cueing, and suggest that the similarity between a learned representation and a new instance is computed relative to the variability experienced during learning.
Meeting abstract presented at VSS 2018