Yuhong V. Jiang, Gustavo A. Vázquez, Tal Makovski; Visual learning in multiple object tracking. Journal of Vision 2008;8(6):225. doi: 10.1167/8.6.225.
Our ability to track a subset of moving objects with attention is severely limited: only a few objects can be tracked simultaneously. This ability, however, can be improved by learning, as tracking improves on displays involving repeated motion trajectories. In this study we examined the source of learning in multiple-object tracking, its interaction with selective attention, and the role of temporal sequences in learning. Participants were asked to track 4 designated circles among a total of 8 moving circles. Several different tracking trials were generated, and each trial was presented 15 to 20 times during training. For each presentation of a tracking trial, the subset designated as targets was constant during training, but the motion started and ended at different moments to prevent participants from learning only the initial or final positions. Accuracy improved as training progressed. To test whether the improvement was attributable to enhanced familiarity with the repeated displays, in Experiment 1 we tested participants in a transfer session in which the same trajectories were used but a different subset was designated as tracking targets. Results showed that, relative to novel trials, tracking in old trials was enhanced only when the subset designated as targets was constant between training and transfer. Learning did not transfer when the same trajectories were used but the targets and nontargets switched roles or were intermixed. Experiment 2 showed, surprisingly, that the temporal order of the motion sequence was not part of the learning: learning fully transferred when the learned trajectories were played backwards. We conclude that visual learning in multiple-object tracking reflects learning of attended trajectories, and that this learning is unaffected by prospective coding of the motion's temporal order.