Perceptual learning is defined as any relatively permanent or consistent change in an observer's perception of a stimulus following experience of that stimulus (Gibson,
1963). Perceptual learning has been shown to occur for a variety of basic visual features, such as orientation and spatial frequency of gratings (Fiorentini & Berardi,
1980) and direction of motion (Ball & Sekuler,
1982). The performance improvements that occur over the course of training tend to follow characteristic patterns that reflect aspects of the underlying brain plasticity (Ahissar & Hochstein,
2004; Dosher & Lu,
1998; Fahle,
2004; Seitz & Dinse,
2007). For instance, a performance improvement with a stimulus of one spatial orientation may not transfer to another orientation (Fiorentini & Berardi,
1980). It has been argued that this pattern of results is consistent with changes in orientation selectivity of early visual cells (Schoups, Vogels et al.,
2001) and indicates that the locus of learning may be in these early visual areas (Fahle,
2004). However, the argument that specificity of learning demonstrates plasticity in visual cortex is not well supported (Law & Gold,
2008; Xiao, Zhang et al.,
2008), and it is clear that task performance alone is an insufficient measure for understanding the brain mechanisms involved in perceptual learning.
The relationship between psychophysical performance and underlying physiological changes is fundamentally inferential. Typically, perceptual learning is operationalized as improvement in sensitivity (such as threshold or accuracy) or reaction time on a task. These metrics have provided a great deal of insight into the mechanisms of perception and perceptual learning, but the inferential gap between the observer's perception and the final performance measurement remains problematic (Mollon & Danilova,
1996). One aspect of this problem is that sensitivity and reaction time are coarse measures of performance and reveal little detail of observers' perceptual processes. As a result, the onus is on the experimenter to design clever studies that rule out alternative explanations for changes in these performance metrics. This task is further complicated by the fact that, as we report here, the two metrics can produce opposite patterns under some conditions.
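As a concrete illustration of how much these summary measures compress, the short sketch below collapses a hypothetical session of yes/no detection data into a single sensitivity index (d′) and a mean reaction time; all counts and response times are invented for illustration and are not data from this study.

```python
# Minimal sketch: an entire session of hypothetical yes/no detection data
# collapses into two summary numbers, d-prime and mean reaction time.
from statistics import NormalDist

hits, misses = 78, 22                      # responses on signal-present trials
false_alarms, correct_rejections = 31, 69  # responses on signal-absent trials
reaction_times = [0.52, 0.61, 0.48, 0.55]  # seconds (truncated example list)

z = NormalDist().inv_cdf                   # inverse cumulative normal (z-score)
hit_rate = hits / (hits + misses)
false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
d_prime = z(hit_rate) - z(false_alarm_rate)   # signal-detection sensitivity
mean_rt = sum(reaction_times) / len(reaction_times)

print(f"d' = {d_prime:.2f}, mean RT = {mean_rt:.3f} s")
```

Neither number indicates which parts of the stimulus the observer actually used to reach those decisions, which is the gap classification images are meant to fill.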
We find an alternative and potentially richer metric in classification images (Ahumada,
2002). In a classification image analysis, observers detect or discriminate a stimulus of interest (the signal) embedded in external noise. In a typical experiment, an observer is presented with a signal-plus-noise stimulus and a noise-only stimulus in succession and asked to report which stimulus contained the target image. The noise fields from these stimuli are then grouped and analyzed according to the observer's decisions, ultimately producing a classification image that may be thought of as the mental template the observer used to classify stimuli during the task. Classification images have been produced in a wide variety of studies and reveal important aspects of perception in both low- and high-level visual tasks (Keane, Lu et al.,
2007; Lu & Liu,
2006; Mareschal, Dakin et al.,
2006; Sekuler, Gaspar et al.,
2004; Shimozaki, Chen et al.,
2007). The advantages of using classification images as a metric of perceptual learning are substantial because classification images encode much more detailed characteristics of an observer's perceptual processes than do sensitivity or reaction time measures.
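A minimal sketch of this grouping, assuming the two-interval task described above and hypothetical array names and shapes (this is not the authors' analysis code): average, over trials, the difference between the noise field in the interval the observer chose and the noise field in the interval not chosen. Because noise that happens to resemble the observer's internal template pulls the decision toward its interval, this average approximates that template.

```python
# Illustrative classification image computation for a two-interval task.
# Data here are random placeholders; with real responses the average
# recovers the structure of the observer's template.
import numpy as np

n_trials, height, width = 512, 64, 64
rng = np.random.default_rng(0)

# noise_fields[t, i] is the external noise shown in interval i on trial t;
# chosen[t] is the interval (0 or 1) the observer reported as containing
# the target.
noise_fields = rng.standard_normal((n_trials, 2, height, width))
chosen = rng.integers(0, 2, size=n_trials)

# Average of (noise in chosen interval - noise in unchosen interval).
idx = np.arange(n_trials)
classification_image = (noise_fields[idx, chosen]
                        - noise_fields[idx, 1 - chosen]).mean(axis=0)
```

Other weightings of the stimulus–response categories are in common use (e.g., Ahumada, 2002); the chosen-minus-unchosen average is simply the most compact to write down.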
However, one limitation of classification image methods is that a large number of trials is required to produce a stable classification image. In many studies, upward of 10,000 trials are employed to construct a classification image (Ahumada,
2002; Lu & Liu,
2006; Sekuler et al.,
2004). Such a large number of trials, which must be spread across multiple sessions, is problematic for capturing the effects of perceptual learning because substantial learning can occur during the acquisition of the classification images themselves. Successfully examining perceptual learning of lower-order visual features over the course of several days therefore requires that a classification image be acquired within a single experimental session of approximately 1,000 trials. This has been accomplished for perceptual learning of vernier acuity (Li, Levi et al.,
2004); however, that study involved stimuli with only 16 changing parts and thus benefited from a relatively low-dimensional stimulus space. Generating a classification image for a stimulus that is more extended in space (such as an oriented grating) presents a greater challenge, because the image must be estimated over many hundreds of pixels and the stimulus space is therefore of much higher dimension.
In this paper, we employ an efficient classification image technique that makes it possible to produce stable images from as few as 512 trials, allowing for an image-based analysis of perceptual learning over the course of several days. We examine both traditional metrics of perceptual learning (such as reaction time, threshold, and accuracy) and changes in observers' classification images.
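As a generic illustration (and not the specific technique employed in this study) of why reducing the number of free parameters lowers the trial requirement, the sketch below re-expresses a pixel-wise image in a coarser basis by simple block averaging, shrinking a hypothetical 64 × 64 estimate (4,096 parameters) to 8 × 8 (64 parameters).

```python
# Generic illustration only (not the technique used in this study):
# estimating a classification image in a coarser basis reduces the number
# of free parameters, and hence the number of trials needed for stability.
import numpy as np

def block_average(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Downsample an image by averaging non-overlapping block x block tiles."""
    h, w = image.shape
    trimmed = image[:h - h % block, :w - w % block]
    return trimmed.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# A hypothetical 64 x 64 pixel image (4,096 parameters) becomes 8 x 8 (64).
coarse_estimate = block_average(np.random.default_rng(1).standard_normal((64, 64)))
```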