While Hung and Seitz (
2014) showed empirically that transfer in perceptual learning can vary with something as simple as switching between single and multiple staircases during training, such findings are better understood within the context of a computational model, where such manipulations can be investigated systematically and the mechanisms involved can be identified. A large number of computational models have been constructed with the aim of understanding the mechanisms of perceptual learning, and these provide a good basis for conducting such an examination. Most models are based upon a reweighting mechanism, in which learning is accomplished by changing the weights of the readout from sensory representations to decision units. Weiss, Edelman, and Fahle (
1993) built a biologically plausible hyper basis function network reweighting model, which used nonmodifiable, stable basis functions as sensory representation units, and showed that basic perceptual learning can be accomplished by changing the weights connecting these representation units to the decision unit. Sotiropoulos, Seitz, and Seriès (
2011) followed up this model by constructing an enhanced reweighting model that could explain a wider range of results of transfer and interference in perceptual learning. Dosher and Lu (
1998) showed that learning of orientation discrimination in noise could be accomplished by a stimulus-enhancing mechanism that excludes environmental noise from sensory representations. This model can account for the results of several psychophysical experiments showing disruption (Petrov, Dosher, & Lu,
2005; Seitz et al.,
2005), specificity and transfer of learning across tasks (Webb, Roach, & McGraw,
2007), and has been a mainstay in the field of perceptual learning due to its explanatory power. Recently, a new instantiation of the reweighting model was proposed by Dosher, Jeter, Liu, and Lu (
2013) to account for transfer to new retinal locations. This model, called the integrated reweighting theory (IRT), is a multi-level learning system in which location transfer is mediated through location-independent representations. Stimulus feature transfer is determined by the similarity of representations at both the location-specific and location-independent levels.
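To make the shared reweighting mechanism concrete, the following is a minimal sketch, not an implementation of any of the cited models: a bank of fixed (nonmodifiable) Gaussian orientation-tuned units serves as the sensory representation, and learning changes only the readout weights to a single decision unit via a delta rule. The number of units, tuning width, noise level, learning rate, and the two-alternative discrimination task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed sensory representation: Gaussian orientation-tuned units (assumed layout).
pref = np.linspace(-30.0, 30.0, 13)   # preferred orientations in degrees
sigma = 10.0                           # tuning width, illustrative

def represent(theta):
    """Population response of the fixed basis units to orientation theta."""
    return np.exp(-0.5 * ((theta - pref) / sigma) ** 2)

# Only the readout weights to the decision unit are modifiable.
w = 0.01 * rng.standard_normal(pref.size)
lr = 0.05                              # learning rate, illustrative

def train(n_trials=2000, ref=0.0, offset=5.0, noise=0.3):
    """Two-alternative discrimination: is the stimulus CW (+1) or CCW (-1) of ref?"""
    for _ in range(n_trials):
        label = rng.choice([-1.0, 1.0])
        r = represent(ref + label * offset) + noise * rng.standard_normal(pref.size)
        decision = np.tanh(w @ r)              # decision unit output
        w[:] += lr * (label - decision) * r    # delta-rule reweighting of readout

def accuracy(n_trials=1000, ref=0.0, offset=5.0, noise=0.3):
    correct = 0
    for _ in range(n_trials):
        label = rng.choice([-1.0, 1.0])
        r = represent(ref + label * offset) + noise * rng.standard_normal(pref.size)
        correct += float(np.sign(w @ r) == label)
    return correct / n_trials

before = accuracy()
train()
after = accuracy()
print(f"accuracy before: {before:.2f}, after: {after:.2f}")
```

Because the basis units never change, all improvement in this sketch comes from the readout weights, which is the core claim of reweighting accounts; multi-level variants such as the IRT add further readout stages (e.g., location-independent representations) on top of the same principle.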