Research Article  |   April 2009
Contrast-reversal abolishes perceptual learning
Journal of Vision, April 2009, Vol. 9, 20. https://doi.org/10.1167/9.4.20
Zahra Hussain, Allison B. Sekuler, Patrick J. Bennett; Contrast-reversal abolishes perceptual learning. Journal of Vision 2009;9(4):20. https://doi.org/10.1167/9.4.20.

Abstract

We tested the effects of contrast reversal on perceptual learning in a 10-AFC texture identification task. Four groups of subjects performed the task on two consecutive days. One group saw the same textures on both days, whereas three other groups saw novel, rotated (180 deg), or contrast-reversed textures on the second day. Response accuracy improved during the first day in all groups. Accuracy decreased significantly at the start of Day 2 in the groups who saw novel, rotated, or contrast-reversed textures, but not in the group who saw the same textures. Moreover, the drop in performance was the same in the groups who saw novel, rotated, and contrast-reversed textures. Control experiments showed that making subjects aware of the stimulus transformations at the start of either the first or second day did not alter the results. Hence, the effects of contrast-reversal and 180 deg rotation on the generalization of learning were the same as the effect of using novel stimuli, and knowledge of the stimulus transformation did not reduce their effects. We consider the implications of this pattern of results for the neural mechanisms recruited during the identification and learning of two-dimensional visual patterns.

Introduction
Practice improves performance in a variety of visual tasks. In basic low-level tasks, which require simple stimuli to be discriminated along a single attribute such as orientation or spatial frequency, the effects of practice typically are specific to the trained items (Ball & Sekuler, 1987; Crist, Kapadia, Westheimer, & Gilbert, 1997; Fahle & Morgan, 1996; Fiorentini & Berardi, 1981; Matthews, Liu, Geesaman, & Qian, 1999; Schoups, Vogels, & Orban, 1995), and, in those situations where it has been tested, to the particular locations of the items during training (Fahle, Edelman, & Poggio, 1995; Karni & Sagi, 1991; Schoups et al., 1995; Sowden, Rose, & Davies, 2002). These behavioral effects suggest that perceptual learning may alter the response properties of early visual areas, where cells are retinotopically organized and encode basic attributes of visual stimuli (Crist et al., 1997; Fahle, 2004; Gilbert, 1994; Karni & Bertini, 1997). Indeed, physiological and neuroimaging studies reveal changes to response properties of neurons in primary visual cortex (V1) after training on low-level tasks (Pourtois, Rauss, Vuilleumier, & Schwarz, 2008). Similarly, the behavioral effects obtained with complex patterns offer potential insights into the visual mechanisms that encode those patterns and are engaged during learning. 
Complex visual patterns can be discriminated on the basis of more than one attribute, such as the shape, location, and orientation of features. As with simple stimuli, practice improves the discrimination and identification of complex, and even completely unfamiliar, objects. For example, practice significantly improves response accuracy on a 10-AFC texture identification task (Gold, Bennett, & Sekuler, 1999; Gold, Sekuler, & Bennett, 2004), and the effects of practice are much greater for familiar textures (i.e., those seen during practice) than for novel textures with spatial properties similar to those of the trained items (Hussain, Bennett, & Sekuler, 2005). In other words, perceptual learning in a texture identification task exhibits stimulus-specificity that is similar to the specificity reported for the aforementioned low-level tasks. With simple visual tasks, the effects of learning are clearly restricted to the single attribute of the stimulus that is being discriminated, with relatively straightforward inferences regarding the neural representation. With the identification of complex patterns such as two-dimensional textures, it is less clear what is being learned. Learning of complex patterns could involve changes at the earliest neural levels, where simple stimulus attributes are coded, or later in the visual pathway, where entire objects are represented (Desimone, Albright, Gross, & Bruce, 1984; Logothetis, Pauls, & Poggio, 1995), or both. In this paper we investigate the question of what subjects learn about a given set of textures, and use the pattern of learning to consider the possible neural representations. 
Response classification studies have shown that learning increases the efficiency with which subjects extract information from localized regions within the textures (Gold et al., 2004). One possibility is that subjects learn the locations and/or shapes of the most informative blobs within each texture. By examining the extent to which learning transfers across various stimulus transformations, we can gain insight into the learned representations. If the learned representation codes for shape and location, performance after learning should be invariant to image transformations such as contrast-reversal that preserve the locations and shapes of the texture blobs (see Figure 1, Set A vs. Arev). This manipulation leaves intact the spatial distribution of information within the image, with the contrast-defined features remaining in the same location across all images. Similarly, if the learned representation codes for global shape in an orientation-invariant manner, then we should expect performance after learning to be invariant to image transformations such as 180-deg rotations (see Figure 1, Set A vs. Arot). Here, we assess the effects of such image transformations on perceptual learning of texture identification. 
Figure 1
 
Examples of the texture stimuli. Each texture was created by applying an isotropic, band-pass (2–4 cy/image) ideal spatial frequency filter to Gaussian white noise. The first row shows five of the ten textures from Set A. The second row shows the same five textures reversed in contrast. The third row shows the textures from the first row after being rotated by 180 deg.
Methods
Subjects
Fifty-six McMaster University undergraduate students participated in this experiment. All subjects had normal or corrected-to-normal Snellen visual acuity. The mean age and years of education were, respectively, 19.48 (SD = 2.59) and 15.37 (SD = 2.54). All subjects were compensated for their participation with a small stipend ($10/hour) or partial course credit, and all were naive with respect to the aims of the experiment and had no previous experience with this task. 
Apparatus and stimuli
Stimuli were generated in Matlab (v. 5.2) using the Psychophysics and Video Toolboxes (Brainard, 1997; Pelli, 1997), and displayed on a 21″ Sony Trinitron monitor (1024 × 768 pixels) at a frame rate of 85 Hz. Average luminance was 62.5 cd/m2. Display luminance was measured with a PhotoResearch PR650 photometer, and the calibration data were used to build a 1779-element lookup table (Tyler, Liu, McBride, & Kontsevich, 1992). Customized computer software constructed the stimuli on each trial by selecting the appropriate luminance values from the calibrated lookup table and storing them in the display's eight-bit lookup table. 
The textures were band-limited noise patterns created by applying an isotropic, ideal band-pass (2–4 cy/image) spatial frequency filter to white Gaussian noise ( Figure 1). Stimulus size was 256 × 256 pixels, subtending 4.8 × 4.8 deg of visual angle from the viewing distance of 114 cm. Two sets (A and B) of ten textures were created, as well as contrast-reversed (A rev, B rev) and 180 deg rotated (A rot, B rot) versions of each set. During the experiment, stimulus contrast was varied across trials using the method of constant stimuli. Seven levels of contrast were spaced equally on a logarithmic scale across a range that was sufficient to produce significant changes in performance in virtually all subjects. The textures were shown in one of three levels (low, medium and high) of static two-dimensional Gaussian noise (contrast variance = .001, .01, or .1). Hence, subjects viewed each texture at a signal-to-noise ratio that varied significantly across trials. There were 21 different stimulus conditions (seven contrast levels × three external noise levels), and these 21 conditions were randomly intermixed within a session. 
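The original stimuli were generated in Matlab with the Psychophysics Toolbox, but the filtering step is straightforward to sketch. The following Python/NumPy version is our own illustration, not the authors' code: the function name is ours, and the contrast endpoints in the last two lines are made up, since the paper does not report the actual contrast range.

```python
import numpy as np

def make_texture(size=256, f_low=2.0, f_high=4.0, rng=None):
    """Band-limited noise texture: an isotropic, ideal band-pass filter
    (f_low to f_high cycles/image) applied to Gaussian white noise."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((size, size))
    # Radial frequency (cycles/image) of each Fourier coefficient.
    f = np.fft.fftfreq(size) * size
    radius = np.hypot(*np.meshgrid(f, f, indexing="ij"))
    # Ideal (hard-edged) annulus; isotropic, so all orientations pass.
    keep = (radius >= f_low) & (radius <= f_high)
    texture = np.real(np.fft.ifft2(np.fft.fft2(noise) * keep))
    # Normalize to zero mean, unit variance; contrast is set per trial.
    return (texture - texture.mean()) / texture.std()

# Seven contrast levels spaced equally on a log scale. The endpoints here
# are illustrative only; the paper does not report the range used.
contrasts = np.logspace(np.log10(0.02), np.log10(0.64), 7)
```

Scaling such a texture by one of the seven contrasts and adding static Gaussian noise of one of the three variances would reproduce the 21 stimulus conditions described above.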
Procedure
All subjects performed two sessions of a texture identification task at the same time on consecutive days. There were four groups, each consisting of 14 subjects. The Same group saw the same set of ten textures on both days: Seven subjects saw set A on both days, and seven subjects saw set B on both days. The Novel group transferred across sets A and B from one day to the next: From Day 1 to Day 2, seven subjects transferred from Set A to Set B, and seven subjects transferred from Set B to Set A. The Contrast-Reversal group performed Day 1 with one set of textures (i.e., A, B, A rev, or B rev) and Day 2 with the same set of ten textures reversed in contrast polarity. At least three subjects were assigned to each order (e.g., A-A rev, B-B rev, A rev-A, or B rev-B), with an additional two subjects randomly assigned to one of the four orders. Finally, the Rotation group performed Day 1 with one set of textures (A, B, A rot, B rot) and Day 2 with that same set rotated by 180 deg. At least three subjects were assigned to each order (e.g., A-A rot, B-B rot, A rot-A, or B rot-B), with an additional two subjects randomly assigned to one of the four orders. Subjects in the Novel, Contrast-Reversal, and Rotation groups were not informed that the stimuli on Day 2 differed from those seen during Day 1. 
Subjects were seated in a darkened room 114 cm away from the monitor. Viewing was binocular, and viewing position and distance were stabilized with an adjustable chin-rest. The experiment started after a 60 s period during which the subject adapted to the average luminance of the display. A trial began with the presentation of a black, high-contrast fixation point (0.15 × 0.15 deg) in the center of the screen for 100 ms, followed by a texture, selected randomly from one of the 21 stimulus conditions, presented for 200 ms at the center of the screen, i.e., foveally. After the texture disappeared, the entire set of 10 textures was presented as noiseless, high-contrast thumbnail images, each subtending 1.7 × 1.7 deg of visual angle. Five thumbnails were presented on the top half of the screen, and five on the bottom half, and the location of each texture in the response window was constant across trials and days. The subject's task was to inspect the thumbnail images, and decide which one of the 10 textures had been presented during the trial by clicking on the chosen texture with a computer mouse. Auditory feedback in the form of high-pitched (correct) and low-pitched (incorrect) tones informed the subject about the accuracy of each response, and the next trial began one second after presentation of the feedback. Sessions on both days comprised 40 trials per stimulus condition for a total of 840 trials. The duration of each session was 60 minutes. 
Results
We calculated average proportion correct (collapsed across noise and contrast levels) on Days 1 and 2 for each of the four groups (Figure 2). The groups did not differ in overall accuracy during Day 1, F(3, 52) = 1.26, p = .3, but there was a significant difference across groups on Day 2, F(3, 52) = 2.99, p = .039. Between-session learning, defined as the difference between proportion correct on Days 1 and 2, also differed significantly across groups, F(3, 52) = 7.10, p = .0004. A Tukey HSD test (p < .05) on between-session learning indicated that the Same group differed from all of the other groups; none of the other differences were significant. Therefore, between-session learning was greatest in the Same group, and did not differ among the other three groups. This latter result suggests that contrast-reversal and rotation had the same effect on generalization of learning as replacing the textures with a novel set. 
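The group comparisons here are one-way ANOVAs over four groups of 14 subjects, giving df = (3, 52). As a minimal sketch of the statistic itself (the function name and toy data below are ours, not the authors' analysis code):

```python
import numpy as np

def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1       # 3 for four groups
    df_within = n_total - len(groups)  # 52 for 4 groups of 14 subjects
    return (ss_between / df_between) / (ss_within / df_within)
```

Applying this to each subject's between-session learning score (Day 2 minus Day 1 proportion correct), one per group, yields the F(3, 52) tests reported above.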
Figure 2
 
Proportion correct on Day 1 and Day 2 for each of the four groups.
The time-course of within-session learning was examined by measuring the proportion of correct responses in eight separate bins of 105 trials on Days 1 and 2 (Figure 3). An ANOVA on response accuracy scores measured on Day 1 yielded a significant main effect of Bin, F(7, 364) = 91.02, p < .00001. The main effect of Group was not significant, F(3, 52) = 1.25, p = .297, nor was the Group × Bin interaction, F(21, 364) = .98, p = .48. Within-session learning, defined as the difference between response accuracy in the first and last bins, also did not differ across groups, F(3, 52) = 1.62, p = .195. However, an ANOVA on response accuracy scores measured on Day 2 did find a significant main effect of Group, F(3, 52) = 2.98, p = .039, indicating that response accuracy was higher in the Same group than in the other groups. As was found with the Day 1 data, the main effect of Bin was significant, F(7, 364) = 61.35, p < .00001, but the Group × Bin interaction was not, F(21, 364) = 1.30, p = .167. Finally, within-session learning on Day 2 did not vary across groups, F(3, 52) = .70, p = .55. These analyses suggest that within-session learning was similar across groups on both days. 
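The binning above is a simple partition of each 840-trial session into eight consecutive blocks of 105 trials. A short sketch (the function name is ours):

```python
import numpy as np

def binned_accuracy(correct, n_bins=8):
    """Proportion correct in consecutive equal-sized bins.
    `correct` holds per-trial outcomes (0/1) in presentation order;
    840 trials with n_bins=8 gives bins of 105 trials each."""
    correct = np.asarray(correct, dtype=float)
    return np.array([b.mean() for b in np.array_split(correct, n_bins)])
```

The resulting eight values per subject and session are the data points entering the Bin and Group × Bin terms of the ANOVAs above.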
Figure 3
 
Time-course of learning on Day 1 and Day 2 for the four groups. Proportion correct is calculated at eight bins within each session, with each bin comprising 105 trials.
Figure 3 shows that there was a drop in performance from Bin 8 to Bin 9 (across days) for three of the four groups. The drop in performance was significant for the Novel, t(13) = −3.81, p = .002, Contrast-Reversal, t(13) = −2.81, p = .01, and Rotation, t(13) = −6.37, p < .0001, groups, but not for the Same group, t(13) = .71, p = .487. 
Relative to Bin 1 (i.e., initial accuracy on Day 1), performance in Bin 9 (i.e., initial accuracy on Day 2) was 8% higher for the Novel group, t(13) = 2.98, p = .01, 11% higher for the Contrast-Reversal group, t(13) = 3.64, p = .002, and 9% higher for the 180 deg rotation group, t(13) = 2.75, p = .01. For the Same group, performance at Bin 9 was 30% higher than at Bin 1, t(13) = 7.29, p < .0001. Therefore, although performance at Bin 9 was much greater for the Same group, the other groups did show some transfer-of-learning relative to completely naive performance. A one-way ANOVA on the difference scores between Bin 9 and Bin 1 indicated a significant effect of Group, F(3, 52) = 8.39, p < .001, and a Tukey HSD test indicated that the difference scores for the Same group were significantly greater than each of the other groups ( p < .01), whereas none of the other groups differed from each other. 
For each subject, response accuracy was calculated for each texture during each session. In each group, proportion correct for each item was then averaged across subjects. Because two sets of ten items were presented to each group, this procedure yielded twenty averaged values, each based on the results from seven subjects, for each session. These values are shown in Figure 4 for the Contrast-Reversal, Rotation, and Same groups. In the Same group, response accuracy for individual textures was correlated strongly across sessions, r(18) = .88, p < .0001, indicating that the relative difficulty of correctly identifying individual textures was consistent across sessions. Moreover, this plot also indicates that accuracy for every texture in the Same group improved across sessions. In the Contrast-Reversal and Rotation groups, the between-session improvement was smaller and less consistent across stimuli (i.e., there was less learning). Nevertheless, response accuracy for individual items was strongly correlated in both of these groups (Contrast-Reversal: r(18) = .77, p < .0001; Rotation: r(18) = .86, p < .0001). In other words, contrast-reversal and rotation reduced learning but did not alter the relative difficulty of correctly identifying individual textures. 
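The cross-session consistency reported here is a Pearson correlation over the twenty per-item accuracy values. A one-line sketch (function name and example values are ours):

```python
import numpy as np

def item_accuracy_correlation(acc_day1, acc_day2):
    """Pearson r between per-texture accuracies on Day 1 and Day 2.
    Each argument holds one accuracy value per texture, averaged
    across the subjects who saw that texture."""
    return np.corrcoef(acc_day1, acc_day2)[0, 1]
```

With twenty items, the correlation is tested on 18 degrees of freedom, matching the r(18) values reported above.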
Figure 4
 
Response accuracy for individual stimuli during Days 1 and 2. (a) Circles: Same textures group, (b) Triangles: Contrast-Reversal group, (c) Squares: Rotation group. Each point shows accuracy for a given texture, averaged across seven subjects. The dashed line in each panel represents equal performance in both sessions; the solid line is the best-fitting (least-squares) line. The Same group is the only group for which all points are above the dashed line: response accuracy for every texture increased across sessions. However, all groups show a significant positive correlation between Days 1 and 2.
Effect of prior instructions
We examined the possibility that explicit awareness of the stimulus transformations might overcome the drop in performance from Day 1 to Day 2 found with the Contrast-Reversal and Rotation groups. Two control groups were tested in each condition. The Post-training Control groups were shown printed examples of a texture stimulus and the contrast-reversed (or rotated) version of that stimulus prior to the start of the session on Day 2, after having trained with the textures on Day 1. They were told that the textures they had learned the previous day were now altered as in the example shown. Therefore, this group was aware that the textures they were seeing on Day 2 were not completely novel. The Pre-training Control groups were shown the same examples prior to training on Day 1. These subjects were told that the textures they would see during the first session would be altered in this way when they returned to perform the task the next day. Therefore, this group had the opportunity to adapt their initial learning strategy to compensate for the expected stimulus change. Care was taken to ensure that the control subjects understood the stimulus transformations. The stimuli and procedures were the same as described earlier. Thirty-three subjects participated in these control conditions: eight subjects in each of the two contrast-reversal control groups and the rotation Post-training Control group, and nine subjects in the rotation Pre-training Control group. 
Figure 5 shows the effect of prior instructions. We analyzed whether the instructions affected learning by separately comparing the three contrast-reversal groups (no instructions, Pre-training Control, and Post-training Control) and the three rotation groups. Between-session learning (average accuracy on Day 2 minus average accuracy on Day 1) did not differ across the three contrast-reversal groups, F(2, 27) = .46, p = .633, nor did the difference between response accuracy in Bins 8 and 9, F(2, 27) = .46, p = .633. Comparisons of performance in the three rotation groups (i.e., no instructions, Pre-training Control, and Post-training Control) yielded similar results: neither between-session learning, F(2, 28) = .98, p = .38, nor the difference between Bins 8 and 9, F(2, 28) = 1.626, p = .21, varied significantly among the groups. Hence, there was no evidence that the effects of contrast-reversal and rotation were reduced in the control groups who were informed of the stimulus transformation before Day 1 or Day 2. 
Figure 5
 
Effect of prior instructions on the drop in performance across days. Top panel: Difference between average accuracy on Day 2 and Day 1. Bottom panel: Difference between accuracy at Bin 9 and Bin 8. Data are shown for the Same group, and the three groups tested each in the Contrast-reversal and the Rotation conditions. Error bars represent 1 standard error.
Discussion
We found that perceptual learning of texture identification did not generalize across contrast-reversal or 180 deg rotation. The large between-session learning found for the Same group (19%) was not found with the Novel, Rotation, and Contrast-Reversal groups; between-session learning for these three groups (approximately 5%) did not differ. Also, there was a significant, and equivalent, drop in performance between Bins 8 and 9 for the Novel, Rotation, and Contrast-Reversal groups, unlike the Same group, whose performance did not decrease. These findings suggest that the effects of contrast-reversal and 180 deg rotation on the generalization of learning were similar to the effect of using a novel set of textures. In addition, the fact that performance of the control groups did not differ from that of the Contrast-Reversal and Rotation groups means that knowledge of the transformations applied to the textures did not alter their effects on generalization of learning, even when observers had an opportunity to alter their learning strategies in advance of training. Note that where performance dropped in Bin 9, it was still about 10% higher than that measured in Bin 1. That is, transforming the stimuli, or using a novel set of textures, did not reduce performance to the level measured initially on Day 1. We attribute the 10% savings to familiarization with the task demands and the type of stimuli used in these experiments, with which the subjects had no experience at Bin 1. The difference between the Same group and the other three groups in Bin 9 represents the stimulus-specific component of learning. 
The effects of 180 deg rotation on the identification of texture patterns similar to those used here have been shown previously (Husk, Bennett, & Sekuler, 2007; Hussain, Bennett, & Sekuler, 2006). It is known from these studies that perceptual learning of textures is orientation-specific, although slight benefits do transfer to novel textures (i.e., the task-general benefits described above). The current experiment confirms this result, and shows that, in addition to being orientation- and exemplar-specific, perceptual learning of texture patterns is sensitive to contrast polarity. This finding suggests that the learned representation includes information that goes beyond simply the shape and location of stimulus features, and that the nature of the learned representation is not altered by prior knowledge about potential image transformations. From previous work we know that, with learning, the templates used in texture identification gradually become more ideal, i.e., observers use more of the available information (Gold et al., 2004), which can also be modeled as better extraction of the relevant signal (Gold et al., 1999). We now know that the optimization of templates due to learning is specific to the exposed version of the items, even when the location of differentiating features is unchanged within the stimulus set. An ideal observer would attain equal sensitivity for a given set of textures and its contrast-reversed version because the relative discriminability of items within the set is unchanged after contrast reversal. Although human performance does confirm that the relative discriminability of the stimuli is intact after rotation and contrast reversal (Figure 4), the transformed items must nevertheless be re-learned. 
The current results are inconsistent with recent claims that the detrimental effects of contrast-reversal on identification are unique to faces (Nederhouser, Yue, Mangini, & Biederman, 2007). Based on experiments comparing perceptual matching of faces with perceptual matching of shaded three-dimensional objects that were designed to have the same surface properties as faces, Nederhouser et al. (2007) suggested that “the representation that mediates the recognition of faces, unlike those for any other class of objects, is uniquely sensitive to contrast polarity. Human recognition of non-face objects is not sensitive to changes in contrast polarity…” (p. 2141). Our results are inconsistent with this claim, and show that the effects of contrast-reversal can be obtained with two-dimensional patterns that share neither the surface properties of faces nor the within-object class structural homogeneity of faces. Thus, neither the presence of contrast-reversal effects nor inversion effects are behavioral markers of face-specific perceptual processes. 
Contrast-reversal and stimulus rotation have also been shown to disrupt the learning of texture segregation, in a task that required subjects to detect the presence of a single counterphase Gabor element embedded within a grid of 16 Gabor elements (Grieco, Casco, & Roncato, 2006). In the segregation task used by Grieco et al. (2006), the contrast polarity of the target was the only discriminating feature, making it a relatively low-level task. Grieco et al. (2006) showed that perceptual learning of the segregation task was specific to the contrast polarity and orientation of the target and background, leading the authors to infer that the neural locus of learning was odd-symmetric simple cells early in the visual pathway. Such an inference is less straightforward with respect to the present data, because the task used here was a high-level task involving complex patterns that can be identified on the basis of multiple attributes. The representation of complex patterns is thought to occur later in the visual pathway, beyond area V1, where cells are selective for the entire object in addition to single attributes of the object such as orientation (Desimone et al., 1984; Logothetis et al., 1995; Sáry, Vogels, Kovács, & Orban, 1995; Tanaka, Saito, Fukada, & Moriya, 1991). In monkeys, cells from higher areas such as inferotemporal cortex (IT) are recruited during learning of unfamiliar two-dimensional patterns, and the responses of these cells are view-dependent, but invariant to changes in scale or location (Logothetis et al., 1995). The properties of these cells accord with the object-specificity and scale-invariance of object learning shown in humans (Furmanski & Engel, 2000), and with the item-specificity of texture learning shown here and elsewhere (Hussain et al., 2005). 
Additionally, studies with behaving monkeys have shown that although cells sensitive to contrast polarity are present in V1, the proportion of such cells is much larger beyond area V1 (Ito, Fujita, Tamura, & Tanaka, 1994; Zhou, Friedman, & von der Heydt, 2000). At least one study has explicitly suggested the involvement of inferotemporal cortex (IT) in coding the contrast polarity of complex patterns (Ito et al., 1994). Therefore, although the present results are consistent with those of Grieco et al. (2006) in implicating even- and odd-symmetric simple cells in texture learning, we differ in suggesting that learning of this task could just as well be mediated by neurons later in visual processing, in areas such as IT. 
Conclusion
Stimulus-specific effects of perceptual learning, when found with simple visual stimuli in tasks that require discrimination along a single attribute such as orientation or direction of motion, are typically interpreted as evidence for the involvement of early visual areas in learning (Crist et al., 1997; Fahle, 2004; Gilbert, 1994; Karni & Bertini, 1997). Here, we show two types of specificity in learning of complex patterns, orientation specificity and contrast polarity specificity, and discuss how, aside from ostensible changes in early visual cortex, such effects could arise from learning in later visual areas, such as inferotemporal cortex. The principle of stimulus specificity in learning clearly manifests itself across a range of stimulus complexities, and the specificity of coding serves as a constraint even for stimuli as complex as two-dimensional textures. 
Acknowledgments
We thank Donna Waxman for her invaluable help in conducting these experiments. We also thank the reviewers for their comments on the manuscript. This research was supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Research Chair (CRC) program. 
Commercial relationships: none. 
Corresponding author: Zahra Hussain. 
Email: hussaiz@mcmaster.ca. 
Address: Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main Street West, Hamilton, ON, Canada L8S 4K1. 
References
Ball, K., & Sekuler, R. (1987). Direction-specific improvement in motion discrimination. Vision Research, 27, 953–965.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Crist, R. E., Kapadia, M. K., Westheimer, G., & Gilbert, C. D. (1997). Perceptual learning of spatial localization: Specificity for orientation, position, and context. Journal of Neurophysiology, 78, 2889–2894.
Desimone, R., Albright, T. D., Gross, C. G., & Bruce, C. (1984). Stimulus-selective properties of inferior temporal neurons in the macaque. Journal of Neuroscience, 4, 2051–2062.
Fahle, M. (2004). Perceptual learning: A case for early selection. Journal of Vision, 4(10):4, 879–890, http://journalofvision.org/4/10/4/, doi:10.1167/4.10.4.
Fahle, M., Edelman, S., & Poggio, T. (1995). Fast perceptual learning in hyperacuity. Vision Research, 35, 3003–3013.
Fahle, M., & Morgan, M. (1996). No transfer of perceptual learning between similar stimuli in the same retinal position. Current Biology, 6, 292–297.
Fiorentini, A., & Berardi, N. (1981). Learning in grating waveform discrimination: Specificity for orientation and spatial frequency. Vision Research, 21, 1149–1158.
Furmanski, C. S., & Engel, S. A. (2000). Perceptual learning in object recognition: Object specificity and size invariance. Vision Research, 40, 473–484.
Gilbert, C. D. (1994). Early perceptual learning. Proceedings of the National Academy of Sciences of the United States of America, 91, 1195–1197.
Gold, J., Bennett, P. J., & Sekuler, A. B. (1999). Signal but not noise changes with perceptual learning. Nature, 402, 176–178.
Gold, J. M., Sekuler, A. B., & Bennett, P. J. (2004). Characterizing perceptual learning with external noise. Cognitive Science, 28, 167–207.
Grieco, A., Casco, C., & Roncato, S. (2006). Texture segregation on the basis of contrast polarity of odd-symmetric filters. Vision Research, 46, 3526–3536.
Husk, J. S., Bennett, P. J., & Sekuler, A. B. (2007). Inverting houses and textures: Investigating the characteristics of learned inversion effects. Vision Research, 47, 3350–3359.
Hussain, Z., Bennett, P. J., & Sekuler, A. B. (2005). Perceptual learning of faces and textures is tuned to trained identities. Perception: ECVP Abstract Supplement, 34.
Hussain, Z., Bennett, P. J., & Sekuler, A. B. (2006). Journal of Vision, 6(6):153, 153a.
Ito, M., Fujita, I., Tamura, H., & Tanaka, K. (1994). Processing of contrast polarity of visual images in inferotemporal cortex of the macaque monkey. Cerebral Cortex, 5, 499–508.
Karni, A., & Bertini, G. (1997). Learning perceptual skills: Behavioral probes into adult cortical plasticity. Current Opinion in Neurobiology, 7, 530–535.
Karni, A., & Sagi, D. (1991). Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences of the United States of America, 88, 4966–4970.
Logothetis, N. K., Pauls, J., & Poggio, T. (1995). Shape representation in the inferior temporal cortex of monkeys. Current Biology, 5, 552–563.
Matthews, N., Liu, Z., Geesaman, B. J., & Qian, N. (1999). Perceptual learning on orientation and direction discrimination. Vision Research, 39, 3692–3701.
Nederhouser, M., Yue, X., Mangini, M. C., & Biederman, I. (2007). The deleterious effect of contrast reversal on recognition is unique to faces, not objects. Vision Research, 47, 2134–2142.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Pourtois, G., Rauss, K. S., Vuilleumier, P., & Schwarz, S. (2008). Effects of perceptual learning on primary visual cortex activity in humans. Vision Research, 48, 55–62.
Sáry, G., Vogels, R., Kovács, G., & Orban, G. A. (1995). Responses of monkey inferior temporal neurons to luminance-, motion-, and texture-defined gratings. Journal of Neurophysiology, 73, 1341–1354.
Schoups, A. A., Vogels, R., & Orban, G. A. (1995). Human perceptual learning in identifying the oblique orientation: Retinotopy, orientation specificity and monocularity. The Journal of Physiology, 483, 797–810.
Sowden, P. T., Rose, D., & Davies, I. R. (2002). Perceptual learning of luminance contrast detection: Specific for spatial frequency and retinal location but not orientation. Vision Research, 42, 1249–1258.
Tanaka, K., Saito, H., Fukada, Y., & Moriya, M. (1991). Coding visual images of objects in the inferotemporal cortex of the macaque monkey. Journal of Neurophysiology, 66, 170–189.
Tyler, C. W., Liu, H. C. L., McBride, B., & Kontsevich, L. (1992). Bit-stealing: How to get 1786 or more gray levels for an 8-bit color monitor. In Proceedings of SPIE: Human Vision, Visual Processing and Digital Display III (Vol. 1666, pp. 351–364).
Zhou, H., Friedman, H. S., & von der Heydt, R. (2000). Coding of border ownership in monkey visual cortex. Journal of Neuroscience, 20, 6594–6611.
Figure 1
 
Examples of the texture stimuli. Each texture was created by applying an isotropic, band-pass (2–4 cy/image) ideal spatial frequency filter to Gaussian white noise. The first row shows five of the ten textures from Set A. The second row shows the same five textures reversed in contrast. The third row shows the textures from the first row after being rotated by 180 deg.
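The stimulus construction described in the caption, and the two transformations applied on Day 2, can be sketched in Python. This is a minimal illustration based only on the caption's description (band-pass filtered Gaussian white noise, contrast reversal, 180 deg rotation); the image size, normalization, and function names are assumptions, not the authors' actual code.

```python
import numpy as np

def make_texture(size=128, low=2.0, high=4.0, seed=0):
    """Band-pass filtered Gaussian white noise; low/high in cycles/image."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    # Radial spatial frequency (cycles/image) of each Fourier coefficient.
    freqs = np.fft.fftfreq(size) * size
    radius = np.sqrt(freqs[None, :] ** 2 + freqs[:, None] ** 2)
    # Ideal (hard-edged) isotropic band-pass filter: keep 2-4 cy/image.
    keep = (radius >= low) & (radius <= high)
    filtered = np.fft.ifft2(np.fft.fft2(noise) * keep).real
    return filtered / np.abs(filtered).max()  # normalize to [-1, 1]

texture = make_texture()
reversed_contrast = -texture    # contrast reversal: negate contrast values
rotated = np.rot90(texture, 2)  # 180 deg rotation
```

Note that because the filter is isotropic, contrast reversal and rotation both preserve the amplitude spectrum of the texture; only the phase spectrum changes.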
Figure 2
 
Proportion correct on Day 1 and Day 2 for each of the four groups.
Figure 3
 
Time-course of learning on Day 1 and Day 2 for the four groups. Proportion correct is calculated at eight bins within each session, with each bin comprising 105 trials.
Figure 4
 
Response accuracy for individual stimuli during Days 1 and 2. (a) Circles: Same textures group, (b) Triangles: Contrast-Reversal group, (c) Squares: Rotation group. Each point shows accuracy for a given texture, averaged across seven subjects. The dashed line in each panel represents equal performance in both sessions; the solid line is the best-fitting (least-squares) line. The Same group is the only group for which all points are above the dashed line: response accuracy for every texture increased across sessions. However, all groups show a significant positive correlation between Days 1 and 2.
Figure 5
 
Effect of prior instructions on the drop in performance across days. Top panel: Difference between average accuracy on Day 2 and Day 1. Bottom panel: Difference between accuracy at Bin 9 and Bin 8. Data are shown for the Same group and for the three groups tested in each of the Contrast-reversal and Rotation conditions. Error bars represent 1 standard error.