August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract
Perceptual learning of task mixtures
Author Affiliations
  • Barbara Dosher
    Department of Cognitive Science, University of California, Irvine, CA 92697
  • Wilson Chu
    Department of Cognitive Science, University of California, Irvine, CA 92697
  • Jiajuan Liu
    Department of Cognitive Science, University of California, Irvine, CA 92697
  • Zhong-Lin Lu
Department of Psychology, The Ohio State University, Columbus, OH 43210
    Department of Psychology, University of Southern California, Los Angeles, CA 90089
Journal of Vision August 2012, Vol.12, 767. doi:
Barbara Dosher, Wilson Chu, Jiajuan Liu, Zhong-Lin Lu; Perceptual learning of task mixtures. Journal of Vision 2012;12(9):767.

Traditional perceptual learning research has largely focused on learning of a particular stimulus and task. Understanding learning in more general contexts involving multiple stimuli and tasks requires studying co-learning, or interactions between the learning of multiple stimuli. Several laboratories (e.g., Yu et al., 2004) report that mixed training – "roving" of stimuli – can disrupt or reduce learning: perceptual learning in mixed stimulus conditions can be impaired unless the stimuli are tagged in an obvious way. In this experiment, we compare learning with several stimulus combinations intermixed over trials. Observers make orientation judgments (clockwise or counterclockwise) about sets of base angles (±12° about 22.5°, 67.5°, 112.5°, and 157.5°). Four intermixed base angles were each trained in one of four separate retinal locations; for intermixed pairs, each base angle condition was trained in two locations. A new integrated reweighting framework (Dosher et al., 2011) predicts that learning orientation identification at nearby base angles may produce more mutual interference. Insofar as the training stimuli are all intermixed and learning occurs within the same decision and reweighting structure, the demands of optimizing weights may conflict: orientation channels weighted toward CW for one reference angle may be weighted toward CCW for an adjacent reference angle. This predicts that roving two stimuli should lead to better learning for widely separated stimuli, because the weights on orientation channels can be better optimized when the base angles are far apart, e.g., 22.5° and 112.5°. Our results support these predictions. Even when they occur in separate locations, orientation identifications at two widely separated base angles are learned far better than at two near base angles, implying a role for location-independent representations in perceptual learning.
These results also rule out enhanced-representation hypotheses, in which perceptual learning alters low-level representations in each location separately, and which therefore predict independent learning of each base angle.
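The weight-conflict argument can be sketched numerically. The following is a minimal illustration of our own construction, not the published integrated reweighting model: orientation channels are modeled as circular Gaussian tuning curves, and the "ideal" weight on each channel for a CW/CCW judgment is taken as the difference between its responses to the two stimuli (±12° around a base angle). The channel spacing and bandwidth are assumed values chosen only for illustration.

```python
import numpy as np

def channel_responses(theta, centers, bandwidth=15.0):
    """Response of each orientation channel to a grating at angle theta.
    Orientation is circular with period 180 degrees."""
    d = (theta - centers + 90.0) % 180.0 - 90.0
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def ideal_weights(base, centers, offset=12.0):
    """Difference of channel responses to CW (+offset) vs. CCW (-offset)
    stimuli around a base angle -- a proxy for the learned decision weights."""
    return (channel_responses(base + offset, centers)
            - channel_responses(base - offset, centers))

def corr(u, v):
    """Cosine similarity between two weight vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

centers = np.arange(0.0, 180.0, 7.5)       # assumed bank of orientation channels
w_near_a = ideal_weights(22.5, centers)    # near pair: base angles 45 deg apart
w_near_b = ideal_weights(67.5, centers)
w_far_a = ideal_weights(22.5, centers)     # far pair: base angles 90 deg apart
w_far_b = ideal_weights(112.5, centers)

print("near-pair correlation:", corr(w_near_a, w_near_b))
print("far-pair correlation: ", corr(w_far_a, w_far_b))
```

Under these assumptions, the ideal weight vectors for near base angles are negatively correlated (a channel pulled toward CW for one angle is pulled toward CCW for the other), while the vectors for base angles 90° apart are nearly orthogonal, consistent with the prediction that widely separated base angles place little opposing demand on shared weights.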

Meeting abstract presented at VSS 2012

