September 2011
Volume 11, Issue 11
Vision Sciences Society Annual Meeting Abstract | September 2011
Perceptual learning viewed as a statistical modeling process – Is it all overfitting?
Author Affiliations
  • Dov Sagi
    The Weizmann Institute of Science, Israel
  • Hila Harris
    The Weizmann Institute of Science, Israel
Journal of Vision September 2011, Vol.11, 11. doi:10.1167/11.11.11
Abstract

Performance gains obtained through perceptual learning are, surprisingly, specific to the trained condition. Recent research shows that specificity increases with training and with task precision (Jeter et al., 2009/10), and that learning generalizes across tasks and features trained in temporal proximity (Yu and colleagues). Such results are expected if perceptual learning involves statistical modeling of the task at hand, with variations in brain anatomy (Mollon & Danilova, 1996) or neuronal response limiting the reliability of the fitted data. When training is carried out with a limited set of stimuli (e.g., a single contrast), overfitting may gradually arise, predicting failures when new conditions are presented. In the contrast domain, learning is specific to the trained contrast and much reduced when different contrasts are mixed during training (Adini et al., 2004; Yu et al., 2004), demonstrating that learning is nothing but overfitting. Overfitting may arise when learning involves the readout of sensory neurons (Lu & Dosher), reweighting their responses according to the peculiarities of the trained condition. To test the generality of this theoretical approach, we re-examined the specificity of learning to retinal location. Using the texture discrimination task (Censor & Sagi, 2009), we had observers practice a target positioned either at a fixed location (the traditional way) or at one of two locations. Against the overfitting prediction, we found equal learning in both conditions; most surprisingly, however, while the 1-location training was specific as expected, the 2-location training transferred completely to locations that had been neither trained nor previously tested. Theoretical implications will be presented.
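The overfitting analogy at the heart of the abstract can be made concrete with a toy statistical model. The sketch below (an illustration of the general concept, not the authors' model; the sine "task", polynomial degree, and stimulus ranges are all arbitrary assumptions) fits a flexible model to data drawn from a narrow stimulus range and then evaluates it on "untrained" stimuli, mimicking how training on a single contrast or retinal location can yield a solution tuned to the peculiarities of the trained condition that fails to transfer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the true stimulus-response mapping of the perceptual task.
def true_response(x):
    return np.sin(x)

# Training confined to a narrow stimulus range, analogous to practicing
# a single contrast or a single retinal location.
x_train = np.linspace(0.0, 1.0, 8)
y_train = true_response(x_train) + rng.normal(0.0, 0.05, x_train.size)

# A high-capacity model (degree-7 polynomial) plays the role of a
# flexible readout that can fit the trained condition almost exactly.
coeffs = np.polyfit(x_train, y_train, deg=7)

# Compare error on the trained stimuli vs. on untrained stimuli.
x_new = np.linspace(2.0, 3.0, 8)
err_trained = np.mean((np.polyval(coeffs, x_train) - true_response(x_train)) ** 2)
err_untrained = np.mean((np.polyval(coeffs, x_new) - true_response(x_new)) ** 2)

print(err_trained)    # small: the trained condition is fit well
print(err_untrained)  # much larger: the fit fails to transfer
```

In this caricature, mixing stimuli during training (a wider `x_train` range, or a lower-degree model) reduces the gap between trained and untrained error, paralleling the finding that interleaving contrasts during training reduces specificity.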
