Barbara Dosher, Wilson Chu, Jiajuan Liu, Zhong-Lin Lu; Perceptual learning of task mixtures. Journal of Vision 2012;12(9):767. doi: https://doi.org/10.1167/12.9.767.
Traditional perceptual learning research has largely focused on learning of a particular stimulus and task. Understanding learning in more general contexts involving multiple stimuli and tasks requires us to study co-learning, or interactions between learning of multiple stimuli. Several laboratories (e.g., Yu et al., 2004) report that mixed training – "roving" of stimuli – can disrupt or reduce learning: perceptual learning in mixed stimulus conditions can be impaired unless the stimuli are tagged in an obvious way. In this experiment, we compared learning with several stimulus combinations intermixed over trials. Observers made orientation judgments (clockwise or counterclockwise) about sets of base angles (±12° about 22.5°, 67.5°, 112.5°, and 157.5°). Four intermixed base angles were each trained in one of four separate retinal locations; for intermixed pairs, each base angle condition was trained in two locations. A new integrated reweighting framework (Dosher et al., 2011) predicts that learning orientation identification for nearby base angles may interfere: insofar as the training stimuli are all intermixed and learning occurs within the same decision and reweighting structure, the demands of optimizing weights may conflict. Orientation channels that should be weighted toward CW for one reference angle may need to be weighted toward CCW for an adjacent reference angle. This predicts that roving two stimuli should lead to better learning for widely separated stimuli: the weights on orientation channels can be better optimized when the base angles are far apart, e.g., 22.5° and 112.5°. Our results support these predictions. Even when they occur in separate locations, orientation identifications for two widely separated base angles are learned far better than for two nearby base angles, implying a role of location-independent representations in perceptual learning.
These results also rule out the enhanced representation hypothesis, in which perceptual learning alters low-level representations in each location separately and therefore predicts independent learning of each base angle.
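The channel-conflict intuition can be illustrated with a toy simulation (our own sketch, not the authors' model or parameters): assume a bank of von Mises-tuned orientation channels and compute, for each base angle, a simple discriminant readout weight per channel (CW response minus CCW response). Weight vectors for base angles 45° apart overlap and conflict in sign, while weight vectors for base angles 90° apart (the maximal separation on the 180°-periodic orientation circle) are nearly orthogonal. The tuning width `kappa`, the 1° channel spacing, and the ±12° offset are illustrative assumptions.

```python
import numpy as np

def channel_responses(theta, prefs, kappa=4.0):
    # von Mises tuning on the doubled angle (orientation is 180-deg periodic);
    # cos handles the circular distance automatically
    d = np.deg2rad(2.0 * (theta - prefs))
    return np.exp(kappa * (np.cos(d) - 1.0))

def readout_weights(base, prefs, offset=12.0):
    # simple per-channel discriminant: CW response minus CCW response
    return channel_responses(base + offset, prefs) - channel_responses(base - offset, prefs)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

prefs = np.arange(0.0, 180.0, 1.0)       # preferred orientations of the channel bank
w_base = readout_weights(22.5, prefs)
w_near = readout_weights(67.5, prefs)    # 45 deg away: shared channels, opposite signs
w_far  = readout_weights(112.5, prefs)   # 90 deg away: nearly disjoint channels

sim_near = cosine(w_base, w_near)        # substantially negative: weight conflict
sim_far  = cosine(w_base, w_far)         # near zero: weights can be optimized independently
print(sim_near, sim_far)
```

In this sketch the near pair yields a clearly negative cosine similarity between the two optimal weight vectors, while the 90°-separated pair yields a similarity near zero, mirroring the prediction that roving widely separated base angles interferes less than roving nearby ones.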
Meeting abstract presented at VSS 2012