July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract
Modeling perceptual learning of visual motion
Author Affiliations
  • Émilien Tlapale
    Department of Cognitive Science, University of California, Irvine, CA 92697
  • Barbara Dosher
    Department of Cognitive Science, University of California, Irvine, CA 92697
  • Zhong-Lin Lu
    Department of Psychology, The Ohio State University, Columbus, OH 43210
Journal of Vision July 2013, Vol. 13, 248. doi:10.1167/13.9.248
Repeated exposure to, or training on, moving stimuli leads to improved performance in tasks such as motion detection or discrimination. Although numerous studies have reported perceptual learning in visual motion, identifying the learning mechanisms and their cortical loci remains a major open issue. As a first step toward resolving it, we bring the existing, and apparently conflicting, literature together within a consistent framework. We incorporate perceptual learning, implemented as connectivity reweighting, into the dynamical model of Tlapale et al. (2010). Since this model, which includes cortical areas dedicated to motion (V1, MT, and readouts) and their intra- and inter-area connectivity, has been shown to produce relevant percepts for a wide variety of motion stimuli, it provides a natural basis for incorporating mechanisms of perceptual learning. The resulting model is then tested against data from numerous experiments reported in the literature. We show that a dynamical reweighting model can account for a range of perceptual learning results, including discrimination training (Ball and Sekuler, 1982, 1987), repeated exposure (Watanabe et al., 2001, 2002), and the influence of task difficulty on learning rate. The existing data can be explained by reweighting the feedforward connectivity carrying local motion information (from V1 to MT), consistent with the prevailing hypothesis in the literature. However, reweighting the connectivity carrying global motion information (from MT to the readout) can also produce results matching the experimental data. To resolve this locus ambiguity, we propose a motion discrimination task based on known properties of the visual system in solving the aperture problem. Finally, we generate new model predictions for novel stimuli, such as motion transparency detection, and for transfer across different kinds of stimuli.
As a whole, we present a model of visual motion perceptual learning that describes the existing experiments and provides new testable predictions to distinguish mechanisms of motion learning.
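The abstract does not specify the reweighting equations. As a loose illustration of the general idea (learning by adjusting connection weights from a fixed sensory representation to a decision readout, rather than changing the representation itself), the sketch below trains a delta-rule readout over hypothetical direction-tuned units in a two-alternative direction-discrimination task. The tuning curve, unit count, learning rule, and task parameters are all assumptions for illustration, not the model of Tlapale et al.

```python
import math
import random

random.seed(0)

# Hypothetical population of direction-tuned units (V1-like), evenly spaced.
N_UNITS = 36
PREFS = [i * 360.0 / N_UNITS for i in range(N_UNITS)]
KAPPA = 4.0  # assumed tuning sharpness

def responses(direction, noise=0.1):
    """Noisy population response with von-Mises-like tuning (assumed form)."""
    out = []
    for p in PREFS:
        d = math.radians(direction - p)
        r = math.exp(KAPPA * (math.cos(d) - 1.0))
        out.append(r + random.gauss(0.0, noise))
    return out

def train_discrimination(ref=0.0, delta=8.0, trials=400, lr=0.05):
    """Delta-rule reweighting of feedforward weights for discriminating
    ref + delta vs ref - delta degrees of motion direction."""
    w = [0.0] * N_UNITS  # readout weights, reweighted by learning
    b = 0.0
    accuracy = []
    for _ in range(trials):
        label = 1 if random.random() < 0.5 else -1  # +1: clockwise offset
        r = responses(ref + label * delta)
        decision = sum(wi * ri for wi, ri in zip(w, r)) + b
        choice = 1 if decision > 0 else -1
        accuracy.append(1.0 if choice == label else 0.0)
        err = label - math.tanh(decision)  # graded error signal
        for i in range(N_UNITS):
            w[i] += lr * err * r[i]        # reweight each connection
        b += lr * err
    return accuracy

acc = train_discrimination()
early = sum(acc[:100]) / 100
late = sum(acc[-100:]) / 100
print(f"accuracy: first 100 trials {early:.2f}, last 100 trials {late:.2f}")
```

In this toy version, performance improves across trials solely because the readout weights change; the simulated sensory responses are identical before and after learning, which is the signature of reweighting accounts of perceptual learning.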

Meeting abstract presented at VSS 2013
