December 2022 | Volume 22, Issue 13 | Open Access Article
Configuration perceptual learning and its relationship with element perceptual learning
Author Affiliations
  • Xizi Gong
    School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China
    IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
    gongxizi0730@pku.edu.cn
  • Qian Wang
    School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China
    IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
    wangqianpsy@pku.edu.cn
  • Fang Fang
    School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China
    IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
    Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, People's Republic of China
    Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
    ffang@pku.edu.cn
Journal of Vision December 2022, Vol.22, 2. doi:https://doi.org/10.1167/jov.22.13.2
Abstract

Visual perceptual learning has been studied extensively and reported to enhance the perception of almost all types of training stimuli, from low- to high-level visual stimuli. Notably, high-level stimuli are often composed of multiple low-level features. Therefore, it is natural to ask whether training of high-level stimuli affects the perception of low-level stimuli and vice versa. In the present study, we trained subjects with either a high-level configuration stimulus or a low-level element stimulus. The high-level configuration stimulus consisted of two Gabors in the left and right visual fields, respectively, and the low-level element stimulus was the Gabor in the right visual field of the configuration stimulus. We measured the perceptual learning effects using the configuration stimulus and the element stimuli in both left and right visual fields. We found that the configuration perceptual learning equally improved the perception of the configuration stimulus and both element stimuli. In contrast, the element perceptual learning was confined to the trained element stimulus. These findings demonstrate an asymmetric relationship between perceptual learning of the configuration and the element stimuli and suggest a hybrid mechanism of the configuration perceptual learning. Our findings also offer a promising paradigm to promote the efficiency of perceptual learning—that is, gaining more learning effect with less training time.

Introduction
Repeated training with a visual task can lead to a long-term behavioral performance boost, a phenomenon known as visual perceptual learning (Dosher & Lu, 2016; Watanabe & Sasaki, 2015). Visual perceptual learning can occur at both low and high levels of the visual processing hierarchy in the brain (Op de Beeck & Baker, 2010; Watanabe & Sasaki, 2015). For example, many studies have shown that training remarkably enhances visual abilities to detect or discriminate low-level visual features such as contrast (Dorais & Sagi, 1997; Yu, Zhang, Qiu, & Fang, 2016), orientation (Schoups, Vogels, & Orban, 1995), spatial frequency (Fiorentini & Berardi, 1980), spatial phase (Berardi & Fiorentini, 1987), and hyperacuity (Fahle & Edelman, 1993). Similarly, recognition and discrimination of high-level visual stimuli, such as shape (Kourtzi, Betts, Sarkheil, & Welchman, 2005), object (Furmanski & Engel, 2000; Sigman & Gilbert, 2000), and face (Bi, Chen, Weng, He, & Fang, 2010), can be substantially improved by training, as well. One prominent characteristic shared by perceptual learning of both low-level features and high-level stimuli is that the training-induced behavioral improvement is more or less specific to the trained features or stimuli (Bi et al., 2010; Furmanski & Engel, 2000; Gilbert & Li, 2012; Karni & Sagi, 1991; Poggio, Fahle, & Edelman, 1992; Shiu & Pashler, 1992), although the latter often exhibits more tolerance to low-level property changes (Bi & Fang, 2013). For example, perceptual learning of contrast detection and motion direction discrimination at one retinal location showed little or partial transfer to other locations (Ball & Sekuler, 1982). In contrast, perceptual learning of object recognition and face view discrimination has been reported to exhibit complete transfer across retinal locations and stimulus sizes (Bi et al., 2010; Furmanski & Engel, 2000). 
Over the past decades, numerous studies have explored the neural mechanisms underlying perceptual learning and revealed various types of training-induced neural changes throughout the brain (Dosher & Lu, 2016; Li, 2016; Watanabe & Sasaki, 2015). Inspired by the behavioral specificities, researchers found that training-induced modifications occurred in visual areas that are functionally specialized for trained stimuli (Maniglia & Seitz, 2018; Sagi, 2011). These modifications manifested in many different forms, including cortical response augmentation (Furmanski, Schluppeck, & Engel, 2004; Lu, Luo, Wang, Fang, & Chen, 2020; Schwartz, Maquet, & Frith, 2002; Song, Hu, Li, Li, & Liu, 2010; Yotsumoto, Watanabe, & Sasaki, 2008; Yu et al., 2016), neural selectivity enhancement (Op de Beeck, Baker, DiCarlo, & Kanwisher, 2006; Bi, Chen, Zhou, He, & Fang, 2014; Jehee, Ling, Swisher, van Bergen, & Tong, 2012; Kuai, Levi, & Kourtzi, 2013; Schoups, Vogels, Qian, & Orban, 2001), noise correlation reduction (Adab & Vogels, 2011; Bejjanki, Beck, Lu, & Pouget, 2011; Gu et al., 2011) and so on. Intriguingly, training can even dramatically alter the functional specializations of visual areas by shifting stimulus representations to different visual areas after perceptual learning (Chang, Mevorach, Kourtzi, & Welchman, 2014; Chen, Cai, Zhou, Thompson, & Fang, 2016; Chowdhury & DeAngelis, 2008). It should be noted that training-induced modifications are not restricted to the functionally specialized visual areas for the trained stimuli. Attention- and decision-making–related areas involved in perceptual learning have also been identified (Gilbert & Li, 2012; Law & Gold, 2010; Maniglia & Seitz, 2018; Watanabe & Sasaki, 2015). 
Specifically, researchers have found enhanced selectivity or attenuated neural responses in the frontoparietal areas associated with attention and decision-making (e.g., intraparietal sulcus, anterior cingulate cortex) (Kahnt, Grueschow, Speck, & Haynes, 2011; Law & Gold, 2008; Lewis, Baldassarre, Committeri, Romani, & Corbetta, 2009), as well as increased functional connectivity between decision-making areas (e.g., intraparietal sulcus) and visual areas (e.g., V3A) (Chen et al., 2016; Law & Gold, 2009; Lewis et al., 2009). Recently, it has been proposed that all of the aforementioned modifications could reflect different aspects of the underlying mechanisms and contribute collaboratively to the behavioral effects of perceptual learning (Ahmadi, McDevitt, Silver, & Mednick, 2018; Jing, Yang, Huang, & Li, 2021; Maniglia & Seitz, 2018). 
High-level visual stimuli (e.g., shapes) are composed of multiple low-level features (e.g., edges), and local elements are organized to form a global configuration (Kubilius, Baeck, Wagemans, & Op de Beeck, 2015; Sripati & Olson, 2010; Ullman, 2007). Taking a hierarchical Navon stimulus as an example, small and local letters are organized and configured to form a large and global letter (Navon, 1977). The processing of a global configuration often affects the processing of its local elements, manifested as increasing the response time to detect local elements (Bouhassoun, Poirel, Hamlin, & Doucet, 2022; Gerlach & Poirel, 2020), mitigating the tilt aftereffect of local elements (He, Kersten, & Fang, 2012) or decreasing the activity evoked by local elements in early visual areas (Fang, Kersten, & Murray, 2008; Stoll, Finlayson, & Schwarzkopf, 2020). In our study, a stimulus composed of multiple elements is referred to as a configuration stimulus (see Figure 1A). 
Figure 1.
 
Stimuli and experimental protocols in Experiments 1 and 2. (A) Schematic descriptions of trials in the angle discrimination task (red bar), the orientation discrimination task in the right visual field (blue bar), and the orientation discrimination task in the left visual field (gray bar). (B) Experimental protocols for the configuration training group and the element training group in Experiment 1 (7-day training) and Experiment 2 (2-day training). At Pre and Post, the three tasks were counterbalanced across subjects.
Given these findings, perceptual learning of high-level configuration stimuli and their elements might be closely intertwined. However, to date, almost all perceptual learning studies have utilized training and test stimuli in the same characteristic dimension (e.g., location, orientation, motion direction, face identity). Notably, there are several studies suggesting the importance of learning local elements for perceptual learning of high-level stimuli. In object perceptual learning, for example, improved recognition and enhanced selectivity have been reported for untrained objects that shared elements with trained ones (Baker, Behrmann, & Olson, 2002; Gölcü & Gilbert, 2009). It remains unclear whether training with high-level configuration stimuli would affect the perception of its elements and vice versa. 
To address this issue, here we designed two tasks, an angle discrimination task and an orientation discrimination task, in which high-level configuration and low-level element stimuli were used, respectively. In the angle discrimination task, the configuration stimuli consisted of two Gabors presented in the left and right visual fields, respectively. Subjects were required to integrate the two Gabors and discriminate the angle formed by the two Gabors. This task engaged a cortical stage at least beyond V4, including the lateral occipital complex or temporal–occipital cortex, where the neuronal receptive fields cover both the left and right visual hemifields (Amano, Wandell, & Dumoulin, 2009; Dumoulin & Wandell, 2008). In the orientation discrimination task, subjects were presented with only one Gabor in the left or right visual field (i.e., an element of the configuration stimulus) and were instructed to make an orientation discrimination judgment on the Gabor. In our experiments, we randomly assigned subjects into two groups trained with either the angle discrimination task or the orientation discrimination task. For the sake of simplicity, we refer to the two training groups as the configuration training group and the element training group. In Experiment 1, we trained subjects for 7 days to investigate how perceptual learning of the high-level configuration stimulus would affect the perception of the low-level element stimulus and vice versa. In Experiment 2, we explored the time courses of the configuration training and the element training by introducing a 2-day training protocol. Finally, in Experiment 3, we further investigated whether the element training in both visual fields could lead to a behavioral improvement in the angle discrimination task. 
Methods
Participants
Sixty healthy subjects participated in the current study (Experiment 1: n = 24, eight males, 18–28 years old; Experiment 2: n = 24, four males, 18–27 years old; Experiment 3: n = 12, six males, 19–29 years old). All subjects were right-handed, reported normal or corrected-to-normal vision, and had no known neurological or visual disorders. They were naïve to the purposes of the study and had no prior experience with any perceptual learning experiment. All subjects gave written informed consent in accordance with the procedures and protocols approved by the human subject review committee of Peking University. This study adhered to the tenets of the Declaration of Helsinki. 
Apparatus
All experiments were conducted in a quiet and dimly lit environment. Visual stimuli were generated using MATLAB 7.0 (MathWorks, Natick, MA) with Psychtoolbox-3 extensions (Brainard, 1997; Pelli, 1997). All stimuli were presented on a 21-inch Trinitron monitor (1024 × 768 spatial resolution, 60-Hz refresh rate; Sony, Tokyo, Japan) with a gray background (mean luminance, 49.98 or 40.65 cd/m2). The output luminance of the monitor was linearized using a look-up table in conjunction with photometric readings from a ColorCAL colorimeter (Cambridge Research Systems, Kent, UK). Subjects viewed the stimuli from a fixed distance of 68 cm. Their head position was stabilized with a chin rest and a forehead bar. 
Stimuli and tasks
Gabor patches with a randomized phase (radius, 1.5°; spatial frequency, 3 c/°; contrast, 1.0; eccentricity, 6°; sigma, 0.4943) (Figure 1A) were used in all three experiments. We designed two behavioral tasks: an angle discrimination task and an orientation discrimination task. When subjects performed the tasks, they were instructed to maintain their gaze on a central fixation dot. 
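For concreteness, the stimulus above can be sketched in Python/NumPy. The spatial parameters (radius, spatial frequency, contrast, sigma, randomized phase) follow the values reported here; the pixels-per-degree conversion (`ppd`), the example orientation, and the normalized luminance format are illustrative assumptions, since the article does not report the stimulus in this form.

```python
import numpy as np

def make_gabor(radius_deg=1.5, sf_cpd=3.0, contrast=1.0,
               sigma_deg=0.4943, orientation_deg=30.0,
               phase_rad=None, ppd=40, rng=None):
    """Render a Gabor patch on a mid-gray background.

    Spatial parameters follow the paper; `ppd` (pixels per degree) and
    the [0, 1] luminance format are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    if phase_rad is None:                    # randomized phase, as in the study
        phase_rad = rng.uniform(0, 2 * np.pi)
    n = int(2 * radius_deg * ppd)            # patch width in pixels
    y, x = np.meshgrid(np.linspace(-radius_deg, radius_deg, n),
                       np.linspace(-radius_deg, radius_deg, n), indexing='ij')
    theta = np.deg2rad(orientation_deg)
    # coordinate along the carrier's modulation direction
    xr = x * np.cos(theta) + y * np.sin(theta)
    carrier = np.cos(2 * np.pi * sf_cpd * xr + phase_rad)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_deg**2))
    return 0.5 + 0.5 * contrast * carrier * envelope   # luminance in [0, 1]
```

The returned array can then be scaled to the monitor's linearized luminance range for presentation.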
For the angle discrimination task, two configuration stimuli were each presented for 200 ms and were separated by a 700-ms blank interval. A configuration stimulus consisted of two Gabor patches that were simultaneously presented in the left and right visual fields, respectively (Figure 1A). Subjects were instructed to integrate the orientations of the two Gabor patches and pay attention to the relative angle between them. The orientations of the two Gabor patches were fixed in one configuration stimulus (left Gabor, −60°, θL; right Gabor, +30°, θR; the + and − signs indicate a clockwise and counterclockwise rotation, respectively, relative to the vertical axis). In the other configuration stimulus, the Gabor patches rotated around the fixed orientations (left Gabor, θL + ΔθL; right Gabor, θR + ΔθR; ΔθL and ΔθR could be ±1°, ±3°, ±4°, ±6°, or ±8°). The temporal order of the two configuration stimuli was randomized. Subjects needed to compare the relative angle in the first configuration stimulus with that in the second one and make a two-alternative forced-choice (2-AFC) judgment to indicate which stimulus contained a larger angle by pressing one of two keys. The angle difference (ΔθR – ΔθL) between the two configuration stimuli was drawn from a predetermined set of 2°, 3°, 4°, 5°, 7°, 9°, and 12°. 
For the orientation discrimination task, subjects were presented with only the left or right half of the configuration stimuli (i.e., one Gabor patch in the left or right visual field) (Figure 1A). In a trial, two Gabor patches with orientations of θ° (left Gabor, −60°; right Gabor, +30°) and θ + Δθ° (Δθ = ±1°, ±3°, ±4°, ±6°, or ±8°) were each presented for 200 ms and were separated by a 700-ms blank interval (Figure 1A). Their temporal order was also randomized. Subjects were instructed to make a 2-AFC judgment about the rotation direction (clockwise or counterclockwise) of the second Gabor patch relative to the first one by pressing one of two keys. 
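The two-interval trial logic of the orientation task can be sketched as follows. The base orientations and the set of |Δθ| magnitudes are taken from the paper; the number of repetitions per signed level and the dictionary-based trial format are illustrative assumptions (the paper reports 82 trials per block without specifying the per-level composition).

```python
import random

BASE_ORI = {'left': -60.0, 'right': +30.0}   # fixed orientations from the paper
DELTAS = [1, 3, 4, 6, 8]                     # |delta| levels, applied with both signs

def make_orientation_trials(field='right', reps=4, rng=None):
    """Build a randomized trial list for the 2-AFC orientation task.

    Each trial shows a reference Gabor (theta) and a rotated Gabor
    (theta + delta) in random temporal order; the subject reports the
    rotation direction of the second interval relative to the first.
    `reps` per signed level is an illustrative choice.
    """
    rng = rng or random.Random()
    trials = []
    for mag in DELTAS:
        for sign in (+1, -1):
            for _ in range(reps):
                delta = sign * mag
                ref_first = rng.random() < 0.5    # randomize temporal order
                if ref_first:                     # interval 2 is the rotated Gabor
                    correct = 'cw' if delta > 0 else 'ccw'
                else:                             # interval 2 is the reference
                    correct = 'ccw' if delta > 0 else 'cw'
                trials.append({'field': field, 'theta': BASE_ORI[field],
                               'delta': delta, 'ref_first': ref_first,
                               'correct': correct})
    rng.shuffle(trials)
    return trials
```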
Training and test procedures
All three experiments consisted of three phases: pre-training test (Pre), discrimination training (Training), and post-training test (Post) (Figure 1B). These experiments had the same test procedure but different training procedures. During the test phases of the experiments, we measured subjects’ thresholds for the angle discrimination task, the orientation discrimination task in the right visual field, and the orientation discrimination task in the left visual field using the method of constant stimuli (Figure 1B). The measurement consisted of a ten-block test for the angle discrimination task and two six-block tests for the orientation discrimination task in the right visual field and in the left visual field, respectively. Each block contained 82 trials. The three tests were counterbalanced across subjects. Before the test phases, subjects practiced 20 trials per task, and feedback was provided to make sure that they fully understood the tasks. 
In Experiment 1, subjects were randomly assigned to either the configuration training group or the element training group (n = 12 in each group). All subjects were trained with feedback for seven consecutive daily sessions. In the configuration training group, subjects were trained with the angle discrimination task. In the element training group, subjects were trained with the orientation discrimination task in the right visual field. Each daily training session consisted of 10 blocks of 82 trials (∼50 minutes); therefore, subjects practiced a total of 5740 trials in the training phase. The amount of training was the same for the two training groups. 
In Experiment 2, subjects were recruited into either the configuration training group or the element training group (n = 12 in each group). Experiment 2 had the same protocol as that in Experiment 1, except that subjects in Experiment 2 only underwent two daily training sessions (1640 trials in total) (Figure 1B). 
In Experiment 3, 12 subjects underwent 2-day interleaved element training. Each daily training session consisted of 20 blocks of 82 trials. In a session, subjects completed training blocks with the orientation discrimination task in the left and right visual fields alternately (see Figure 4A). The sequences of training blocks were counterbalanced across subjects. In total, subjects practiced the orientation discrimination task in the right visual field for 1640 trials and in the left visual field for 1640 trials. Therefore, subjects experienced the same number of trials in each visual field as that in the configuration training group in Experiment 2. In Experiment 3, subjects’ eye movements were monitored using an EyeLink 1000 Plus eye tracker (SR Research, Ottawa, ON, Canada). Eye movement data showed that subjects could maintain stable fixation across tasks, and most of their fixation positions were within 1° from the fixation point. 
Statistical analyses
For the Pre and Post phases, all discrimination thresholds were estimated using the method of constant stimuli at 75% correct. Subjects’ improvement in a task was calculated as (pre-training threshold − post-training threshold)/pre-training threshold × 100%. During the training phase, data from all blocks in each daily training session were pooled together to estimate the threshold. Then the thresholds were plotted as a function of the training day. 
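A minimal sketch of these two computations is given below. The improvement formula is taken directly from the text; the linear interpolation is a simplified stand-in for the full psychometric-function fit (e.g., logistic or Weibull) that a method-of-constant-stimuli analysis would normally use, and it assumes accuracy increases monotonically with the stimulus difference.

```python
import numpy as np

def threshold_75(levels, pcorrect):
    """Estimate the discrimination threshold at 75% correct.

    Linear interpolation of (level, proportion-correct) data; a
    simplified stand-in for a psychometric-function fit. Assumes
    proportion correct rises monotonically with the level.
    """
    levels = np.asarray(levels, float)
    pcorrect = np.asarray(pcorrect, float)
    order = np.argsort(levels)
    return float(np.interp(0.75, pcorrect[order], levels[order]))

def improvement(pre, post):
    """Percent improvement, as defined in the paper:
    (pre-training threshold - post-training threshold) / pre * 100."""
    return (pre - post) / pre * 100.0
```

For example, a threshold that drops from 8.0° at Pre to 4.0° at Post corresponds to a 50% improvement.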
Discrimination thresholds and improvements were further analyzed using mixed-design analyses of variance (ANOVAs) in SPSS Statistics 20.0 (IBM, Chicago, IL). In t-tests, Bonferroni correction was used to control the family-wise error rate for multiple comparisons (Bonferroni-corrected level = 0.05/3). For ANOVAs, \({\rm{\eta }}_p^2\) was computed as a measure of effect size. For t-tests, Cohen's d was computed as a measure of effect size. For nonsignificant results of t-tests, Bayesian analyses were further performed to quantify the relative strength of two competing hypotheses (e.g., a null hypothesis and an alternative hypothesis) (van Doorn et al., 2021). In particular, a non-overlapping hypothesis Bayes factor (BFNOH) (Linde, Tendeiro, Selker, Wagenmakers, & Ravenzwaaij, 2021; Morey & Rouder, 2011) was calculated to evaluate the equivalence between two conditions with relatively small sample sizes. A BFNOH greater than 1 could be interpreted as evidence for equivalence. JASP 0.16.3 was used to perform the Bayesian analyses. 
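For reference, the paired-samples effect size and the corrected significance level can be computed as below. The paired-differences formulation of Cohen's d (mean difference divided by the standard deviation of the differences) is one common convention; the paper does not state which variant was used.

```python
import numpy as np

def cohens_d_paired(pre, post):
    """Cohen's d for a paired comparison: mean of the pre-post
    differences divided by the standard deviation of the differences
    (one common convention; other variants exist)."""
    diff = np.asarray(pre, float) - np.asarray(post, float)
    return float(diff.mean() / diff.std(ddof=1))

# Bonferroni-corrected significance level for the three tasks, as in the paper
BONFERRONI_ALPHA = 0.05 / 3
```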
Results
Experiment 1
In Experiment 1, we designed a configuration training protocol and an element training protocol. We trained subjects for 7 days to investigate how perceptual learning of the high-level configuration stimulus would affect the perception of the low-level element stimulus and vice versa. We first examined the perceptual learning effects on the angle and the orientation discrimination performance in the configuration training group after 7-day training of the angle discrimination task. During training, subjects’ angle discrimination thresholds decreased gradually, and most of the improvement occurred within the first 4 days (Figure 2A). After training, the group-averaged angle discrimination threshold at Post (mean ± SEM, 3.65° ± 0.16°) was significantly lower than that at Pre (7.67° ± 0.75°), t(11) = 5.705, padj < 0.001, Cohen’s d = 1.647 (Figure 2B). Meanwhile, the subjects’ discrimination thresholds in the orientation discrimination task also significantly decreased in the right visual field (Pre, 4.15° ± 0.49°; Post, 1.86° ± 0.16°), t(11) = 5.942, padj < 0.001, Cohen’s d = 1.715, and in the left visual field (Pre, 3.52° ± 0.32°; Post, 1.85° ± 0.11°), t(11) = 6.228, padj < 0.001, Cohen’s d = 1.798. A repeated-measures ANOVA showed no significant difference among the performance improvements in the three discrimination tasks (angle discrimination task, 48.70%; orientation discrimination task in the right visual field, 52.54%; orientation discrimination task in the left visual field, 44.37%), F(2, 22) = 1.271, p > 0.05, \({\rm{\eta }}_p^2\) = 0.104 (Figure 2C). The Bayesian analyses also supported that the performance improvement in the angle discrimination task was equivalent to the improvements in the orientation discrimination task in the right visual field (BFNOH = 3.054) and in the left visual field (BFNOH = 3.077). These results demonstrate that the 7-day configuration training could equally improve the performance in all three discrimination tasks. 
Figure 2.
 
Results of Experiment 1. (A) Learning curves for the configuration training group and the element training group. Discrimination thresholds are plotted as a function of training day. (B) Discrimination thresholds for the angle discrimination task, the orientation discrimination task in the right visual field, and the orientation discrimination task in the left visual field measured at Pre and Post. (C) Improvements in angle and orientation discrimination performance for the two training groups at Post, relative to Pre (***p < 0.001, **p < 0.01, *p < 0.05). Error bars denote 1 SEM across subjects.
We then examined the perceptual learning effects on the angle and the orientation discrimination performance in the element training group after 7-day training of the orientation discrimination task in the right visual field. During training, subjects’ orientation discrimination thresholds decreased gradually, and most of the improvement occurred within the first 4 days (Figure 2A). After training, the group-averaged orientation discrimination threshold at Post (1.74° ± 0.15°) was significantly lower than that at Pre (4.03° ± 0.54°), t(11) = 5.182, padj < 0.001, Cohen’s d = 1.496 (Figure 2B). The discrimination thresholds of the orientation discrimination task in the left visual field (Pre, 3.68° ± 0.33°; Post, 2.82° ± 0.29°), t(11) = 4.365, padj < 0.01, Cohen’s d = 1.260, and the angle discrimination task also significantly decreased (Pre, 7.43° ± 0.67°; Post, 5.40° ± 0.40°), t(11) = 3.888, padj < 0.01, Cohen’s d = 1.123. However, a repeated-measures ANOVA found a significant main effect of task on the improvements in the three discrimination tasks (angle discrimination task, 24.84%; orientation discrimination task in the right visual field, 53.72%; orientation discrimination task in the left visual field, 23.36%), F(2, 22) = 18.799, p < 0.001, \({\rm{\eta }}_p^2\) = 0.631 (Figure 2C). Post hoc t-tests showed that the improvement in the orientation discrimination task in the right visual field was significantly higher than that in the angle discrimination task, t(11) = 6.893, padj < 0.001, Cohen’s d = 1.990, and that in the orientation discrimination task in the left visual field, t(11) = 5.608, padj < 0.001, Cohen’s d = 1.619 (Figure 2C). These results demonstrate that, after 7-day element training, the trained task exhibited more performance improvement than the other two untrained tasks, which is in stark contrast to the complete transfer of the learning effect in the configuration training group. 
To directly evaluate the differences between the configuration and the element training groups, we applied mixed-design ANOVAs with task (the orientation discrimination tasks in the right visual field and in the left visual field, and the angle discrimination task) as a within-subject factor and group (configuration training group and element training group) as a between-subject factor. At Pre, no significant group effect was found with the discrimination thresholds: F(1, 22) = 0.012, p > 0.05 (Figure 2B). However, at Post, an ANOVA applied to the improvements showed that the main effect of task, F(2, 44) = 15.020, p < 0.001, \({\rm{\eta }}_p^2\) = 0.406; the main effect of group, F(1, 22) = 12.545, p = 0.002, \({\rm{\eta }}_p^2\) = 0.363; and the interaction between task and group, F(2, 44) = 6.547, p = 0.003, \({\rm{\eta }}_p^2\) = 0.229, were all significant. Post hoc t-tests revealed that the configuration training group showed significantly larger improvements than the element training group in both the orientation discrimination task in the left visual field, t(22) = 3.308, padj = 0.010, Cohen's d = 1.351, and the angle discrimination task, t(22) = 3.668, padj = 0.004, Cohen's d = 1.498, but not in the orientation discrimination task in the right visual field, t(22) = 0.235, padj > 0.05, BFNOH = 2.836 (Figure 2C). 
Experiment 2
In Experiment 1, we found that configuration training led to significant configuration and element learning effects, and the two learning effects were comparable. However, the time courses of the configuration learning and the element learning remain unclear. Specifically, for the configuration training group, most of the angle discrimination performance improvement took place during the first 4 training days. Therefore, it is possible that subjects in the configuration training group might improve their angle discrimination skill (i.e., configuration learning) during the early training phase, followed by the enhancement of their orientation discrimination skill (i.e., element learning) during the late training phase. To investigate this issue, in Experiment 2 we trained subjects for only 2 days to probe the learning effects in the early training phase. 
For the configuration training group, after the 2-day training, subjects’ angle discrimination thresholds significantly decreased (Pre, 7.13° ± 0.52°; Post, 4.92° ± 0.26°), t(11) = 5.127, padj < 0.001, Cohen’s d = 1.480. The orientation discrimination thresholds also significantly decreased in the right visual field (Pre, 3.81° ± 0.29°; Post, 2.14° ± 0.11°), t(11) = 7.007, padj < 0.001, Cohen’s d = 2.023, and in the left visual field (Pre, 3.51° ± 0.36°; Post, 2.01° ± 0.11°), t(11) = 4.673, padj = 0.001, Cohen’s d = 1.349. 
For the element training group, subjects’ orientation discrimination thresholds in the right visual field significantly decreased after 2-day training (Pre, 3.68° ± 0.29°; Post, 2.34° ± 0.23°), t(11) = 10.146, padj < 0.001, Cohen’s d = 2.929. The thresholds also significantly decreased for the orientation discrimination task in the left visual field (Pre, 3.48° ± 0.49°; Post, 2.51° ± 0.28°), t(11) = 2.790, padj = 0.032, Cohen’s d = 0.774, and for the angle discrimination task (Pre, 7.13° ± 0.63°; Post, 5.29° ± 0.51°), t(11) = 4.083, padj = 0.003, Cohen’s d = 1.179. 
To further explore the time course of both configuration learning and element learning, we applied mixed-design ANOVAs with task (the orientation discrimination tasks in the right visual field and in the left visual field and the angle discrimination task) as a within-subject factor and training day (2-day training and 7-day training) as a between-subject factor (Figure 3A). For the configuration training group, a significant main effect of training day was observed, F(1, 22) = 10.990, p = 0.003, \({\rm{\eta }}_p^2\) = 0.333; however, the main effect of task, F(2, 44) = 2.741, p > 0.05, and the interaction between task and training day, F(2, 44) = 2.061, p > 0.05, were not significant. Post hoc t-tests revealed that the improvement in the angle discrimination task after the 7-day training was significantly higher than that after the 2-day training, t(22) = 3.376, padj = 0.009, Cohen’s d = 1.379. Surprisingly, no significant improvement difference between the 2-day and 7-day training was found in the two orientation discrimination tasks (both t < 2.260, padj > 0.05). Note that Bayesian analyses yielded support for equivalent improvements in the orientation discrimination task in the left visual field (BFNOH = 2.123) but not in the right visual field (BFNOH = 0.459). These results suggest that the element learning might take place at an early phase and that the configuration learning continues even after the element learning has saturated. 
Figure 3.
 
Results of Experiment 2 (2-day training); the results of Experiment 1 (7-day training) are presented here for comparison purposes. (A) Improvements in angle and orientation discrimination performance for the configuration training groups in Experiment 1 and Experiment 2 at Post, relative to Pre. (B) Improvements in angle and orientation discrimination performance for the element training groups in Experiment 1 and Experiment 2 at Post, relative to Pre (**p < 0.01). Error bars denote 1 SEM across subjects.
For the element training groups, a significant main effect of task, F(2, 44) = 18.085, p < 0.001, \({\rm{\eta }}_p^2\) = 0.451, was observed; however, the main effect of training day (7-day and 2-day training), F(1, 22) = 1.620, p > 0.05, and the interaction between task and training day, F(2, 44) = 2.552, p > 0.05, were not significant. Post hoc t-tests showed that the improvement in the orientation discrimination task in the right visual field after 7-day training was significantly higher than that after 2-day training, t(22) = 3.401, padj = 0.009, Cohen’s d = 1.388, but not in either the orientation discrimination task in the left visual field or the angle discrimination task (both t < 0.138, padj > 0.05). Bayesian analyses yielded support for equivalent improvements in the orientation discrimination task in the left visual field (BFNOH = 2.878) and in the angle discrimination task (BFNOH = 2.901). These results demonstrate that the element learning was relatively confined to the trained task. 
Experiment 3
In Experiment 2, we found that even at an early phase, the configuration training remarkably improved performance in the orientation discrimination task in both the left and right visual fields. This raises a new possibility: if subjects’ orientation discrimination improves, their performance in the angle discrimination task might improve as a byproduct, without practicing that task. In other words, integration of the two elements may not be necessary for the observed improvement in the angle discrimination task. To test this possibility, in Experiment 3 we trained subjects to perform the orientation discrimination task in the left and right visual fields in alternating blocks, which should recruit only the element learning process. Further, the number of stimuli presented in each visual field matched that for the configuration training group in Experiment 2. Hence, if the configuration learning were simply due to element learning, the interleaved element training should lead to an improvement in the angle discrimination task similar to that for the configuration training group in Experiment 2 (Figure 4A). 
Figure 4.
 
Experimental protocol and results of Experiment 3. Results of Experiment 2 (2-day training) are presented here for comparison purposes. (A) Experimental protocol. On each training day, subjects were trained with the orientation discrimination task in the left and right visual fields alternately. At Pre and Post, three tasks were counterbalanced across subjects. (B) Improvements in angle and orientation discrimination performance for the configuration training group in Experiment 2 and the interleaved element training group in Experiment 3 at Post, relative to Pre (*p < 0.05). Error bars denote 1 SEM across subjects.
After 2-day interleaved element training, subjects’ orientation discrimination thresholds significantly decreased in both visual fields: right visual field (Pre, 4.12° ± 0.33°; Post, 2.84° ± 0.18°; improvement, 27.63%), t(11) = 4.141, padj = 0.003, Cohen’s d = 1.195; left visual field (Pre, 3.97° ± 0.44°; Post, 2.52° ± 0.12°; improvement, 28.36%), t(11) = 3.150, padj = 0.014, Cohen’s d = 0.909. Intriguingly, no significant improvement was found in the angle discrimination task (Pre, 7.11° ± 0.42°; Post, 6.22° ± 0.49°; improvement, 12.08%), t(11) = 2.335, padj > 0.05, BFNOH = 1.673. 
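Because a lower discrimination threshold means better performance, percent improvement here is naturally expressed as the threshold reduction relative to the Pre threshold. The following sketch illustrates one plausible convention (computing improvement per subject and then averaging, with SEM across subjects); the exact aggregation formula is not spelled out in this section, so treat this as a hypothetical reconstruction:

```python
from statistics import mean, stdev
from math import sqrt

def pct_improvement(pre, post):
    """Per-subject percent threshold improvement.

    Thresholds are in degrees; a drop from Pre to Post is an improvement:
    improvement_i = (pre_i - post_i) / pre_i * 100.
    """
    return [(p - q) / p * 100.0 for p, q in zip(pre, post)]

def mean_sem(values):
    """Group mean and standard error of the mean (SEM) across subjects."""
    n = len(values)
    return mean(values), stdev(values) / sqrt(n)
```

For example, a subject whose angle threshold drops from 7.11° to 6.22° improves by about 12.5%; averaging such per-subject values across the group gives the mean ± SEM improvements plotted in the figures.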
Compared with the configuration training group in Experiment 2, no significant difference in improvement was found for the orientation discrimination task in either visual field (both t < 2.280, padj > 0.05) (Figure 4B). Note that Bayesian analyses yielded support for equivalent improvements in the orientation discrimination task in the left visual field (BFNOH = 1.755) but not in the right visual field (BFNOH = 0.445). However, the improvement in the angle discrimination task after the interleaved element training was significantly smaller than that after the configuration training in Experiment 2, t(22) = 2.697, padj = 0.039, Cohen's d = 1.101. Together, these results demonstrate that the interleaved element training was not functionally equivalent to the configuration training, suggesting an essential role of integration in the configuration learning. 
Discussion
In the current study, we explored the relationship between perceptual learning of the configuration and element stimuli. In Experiment 1, we found that the configuration training equally improved the perception of the configuration and element stimuli. Moreover, the improvement for the element stimuli after the configuration training was equivalent to that after the element training. In contrast, relative to the configuration training, the element training improved the perception of the untrained element stimulus and the configuration stimulus to a much lesser extent, revealing an asymmetric transfer pattern between perceptual learning of the configuration and element stimuli. Regarding the complete configuration-to-element transfer, one possible explanation is that subjects learn to discriminate the element stimuli after their performance has reached a plateau for the configuration stimulus. In other words, element learning might follow configuration learning (Kattner, Cochrane, Cox, Gorman, & Green, 2017; Shibata et al., 2017; Yotsumoto, Watanabe, Chang, & Sasaki, 2013). To examine this possibility, in Experiment 2 we utilized the same experimental procedure as that of Experiment 1, except that subjects were trained for only 2 days, in which case configuration learning would not saturate according to the result of Experiment 1. Surprisingly, such short training also led to an asymmetric transfer pattern. As for the weak element-to-configuration transfer in Experiments 1 and 2, a possible explanation is that, in the element training, subjects were trained only in the right visual field. Therefore, in Experiment 3, subjects underwent element training in both visual fields. However, even with such training, little improvement in the perception of the configuration stimulus was found. 
Together, these results demonstrate an asymmetric relationship between perceptual learning of the configuration and element stimuli and a remarkable transfer ability of the configuration perceptual learning. 
In the field of perceptual learning, previous studies suggest that perceptual learning of a visual stimulus is built on the basis of learning its local elements (Gölcü & Gilbert, 2009; Nishina, Kawato, & Watanabe, 2009). For example, Gölcü and Gilbert (2009) found that learning can transfer between objects with shared elements, even when one of the objects had never been trained. In line with these studies, our study confirmed the importance of local element learning in perceptual learning. However, a major difference between previous studies and ours is that we measured the transfer of learning effects between the high-level configuration stimulus and its local elements: rather than being embedded in a configuration, the element stimuli were tested in isolation. Here, we demonstrate that training with a configuration stimulus can improve the discrimination of its elements. Moreover, we show that element learning, despite its importance, cannot fully account for configuration learning: although subjects showed improvements in the perception of the element stimuli in both visual fields, their perception of the configuration stimulus barely improved. 
Our findings hint at the processes underlying configuration perceptual learning. When the configuration stimulus is presented, two processes occur in the visual system. One is identifying the orientation of each element, usually in early visual cortex. The other is integrating the two spatially separated orientations into a unified angle percept. The integration of element information has been associated with high-level visual areas (e.g., lateral occipital complex) (Kourtzi, Tolias, Altmann, Augath, & Logothetis, 2003; Stoll et al., 2020), as well as with their feedback connections to low-level visual areas (e.g., V1) (Fang et al., 2008; Liang, Gong, Chen, Yan, Li, & Gilbert, 2017; Stoll et al., 2020). Therefore, the perceptual enhancement induced by the configuration training could be driven by learning the element information, by learning to integrate the element information, or by both. Under the first hypothesis, if subjects learn only the element information (Gölcü & Gilbert, 2009; Nishina et al., 2009), we would predict a complete bidirectional transfer between the configuration and element stimuli. Clearly, the results of Experiments 1 and 2 do not support this hypothesis. The second hypothesis is that, in configuration perceptual learning, subjects learn only to integrate the element information. This hypothesis would predict no transfer in either direction, which is also at odds with the asymmetric transfers we observed. In the perceptual learning literature, a unitary mechanism seems unlikely to account for all the empirical results (Dosher, Jeter, Liu, & Lu, 2013; Li, 2016; Maniglia & Seitz, 2018). Indeed, some recent perceptual learning studies have identified hybrid mechanisms even in the learning of a simple visual task (Ahmadi et al., 2018; Jing et al., 2021; Xi et al., 2020). 
Here, our results support a hybrid hypothesis (i.e., the third hypothesis): configuration perceptual learning incorporates both element learning and integration learning (i.e., learning to integrate elements). The element learning supports the complete configuration-to-element transfer, whereas the integration learning constrains the element-to-configuration transfer. Interestingly, we found asymmetric transfers even when the configuration learning had not saturated (Experiment 2). This finding provides new insight into the time courses of the two mechanisms. In particular, the element learning may develop in tandem with the integration learning in the early phase of the configuration training. 
Compared with other types of visual perceptual learning, one advantage of the configuration perceptual learning in our study is its efficiency (i.e., gaining more learning effect with less training time). Maximizing the efficiency of training paradigms has been a long-standing challenge in perceptual learning, and progress on it would facilitate practical and clinical applications (Huang et al., 2022; Lu, Lin, & Dosher, 2016). Previous studies have found that the efficiency of perceptual learning can be improved by adding pre-stimulus cues (Donovan & Carrasco, 2018; Donovan, Shen, Tortarolo, Barbot, & Carrasco, 2020), optimizing the daily training amount (Amar-Halpert, Laor-Maayany, Nemni, Rosenblatt, & Censor, 2017; Song, Chen, & Fang, 2021), pairing training with transcranial magnetic/electric stimulation (Contemori, Trotter, Cottereau, & Maniglia, 2019; He, Yang, Gong, Bi, & Fang, 2022; Herpich, Melnick, Agosta, Huxlin, Tadin, & Battelli, 2019; Karim, Schler, Hegner, Friedel, & Godde, 2006), or administering pharmacological agents (Rokem & Silver, 2010). In our study, the configuration training improved performance equally in all three tasks: the angle discrimination task and the orientation discrimination tasks in the left and right visual fields. That is, the configuration training took the same amount of training time as the element training but produced a much larger learning effect. This suggests that training with configuration stimuli can increase the efficiency of perceptual learning, an observation that motivates further research into whether more complex stimuli (e.g., training stimuli with more elements) and tasks can induce even broader learning effects. 
In sum, our findings reveal the asymmetric relationship between perceptual learning of the configuration and element stimuli and provide a promising paradigm to promote the efficiency of perceptual learning. Future studies should be carried out to explore the neural mechanisms underlying the configuration perceptual learning. 
Acknowledgments
Supported by grants from the National Science and Technology Innovation 2030 Major Program (2022ZD0204802, 2022ZD0204804) and the National Natural Science Foundation of China (31930053) and by the Beijing Academy of Artificial Intelligence. 
Commercial relationships: none. 
Corresponding author: Fang Fang. 
Email: ffang@pku.edu.cn. 
Address: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China. 
References
Adab, H., & Vogels, R. (2011). Practicing coarse orientation discrimination improves orientation signals in macaque cortical area V4. Current Biology, 21(19), 1661–1666, https://doi.org/10.1016/j.cub.2011.08.037. [CrossRef]
Ahmadi, M., McDevitt, E. A., Silver, M. A., & Mednick, S. C. (2018). Perceptual learning induces changes in early and late visual evoked potentials. Vision Research, 152, 101–109, https://doi.org/10.1016/j.visres.2017.08.008. [CrossRef] [PubMed]
Amano, K., Wandell, B. A., & Dumoulin, S. O. (2009). Visual field maps, population receptive field sizes, and visual field coverage in the human MT+ complex. Journal of Neurophysiology, 102(5), 2704–2718, https://doi.org/10.1152/jn.00102.2009. [CrossRef] [PubMed]
Amar-Halpert, R., Laor-Maayany, R., Nemni, S., Rosenblatt, J. D., & Censor, N. (2017). Memory reactivation improves visual perception. Nature Neuroscience, 20(10), 1325–1328, https://doi.org/10.1038/nn.4629. [CrossRef] [PubMed]
Baker, C. I., Behrmann, M., & Olson, C. R. (2002). Impact of learning on representation of parts and wholes in monkey inferotemporal cortex. Nature Neuroscience, 5(11), 1210–1216, https://doi.org/10.1038/nn960. [CrossRef] [PubMed]
Ball, K., & Sekuler, R. (1982). A specific and enduring improvement in visual motion discrimination. Science, 218(4573), 697–698, https://doi.org/10.1126/science.7134968. [CrossRef] [PubMed]
Bejjanki, V. R., Beck, J. M., Lu, Z.-L., & Pouget, A. (2011). Perceptual learning as improved probabilistic inference in early sensory areas. Nature Neuroscience, 14(5), 642–648, https://doi.org/10.1038/nn.2796. [CrossRef] [PubMed]
Berardi, N., & Fiorentini, A. (1987). Interhemispheric transfer of visual information in humans: spatial characteristics. The Journal of Physiology, 384(1), 633–647, https://doi.org/10.1113/jphysiol.1987.sp016474. [CrossRef] [PubMed]
Bi, T., Chen, J., Zhou, T., He, Y., & Fang, F. (2014). Function and structure of human left fusiform cortex are closely associated with perceptual learning of faces. Current Biology, 24(2), 222–227, https://doi.org/10.1016/j.cub.2013.12.028. [CrossRef]
Bi, T., Chen, N., Weng, Q., He, D., & Fang, F. (2010). Learning to discriminate face views. Journal of Neurophysiology, 104(6), 3305–3311, https://doi.org/10.1152/jn.00286.2010. [CrossRef] [PubMed]
Bi, T., & Fang, F. (2013). Neural plasticity in high-level visual cortex underlying object perceptual learning. Frontiers in Biology, 8(4), 434–443, https://doi.org/10.1007/s11515-013-1262-2.
Bouhassoun, S., Poirel, N., Hamlin, N., & Doucet, G. E. (2022). The forest, the trees, and the leaves across adulthood: Age-related changes on a visual search task containing three-level hierarchical stimuli. Attention, Perception, & Psychophysics, 84(3), 1004–1015, https://doi.org/10.3758/s13414-021-02438-3. [PubMed]
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436, https://doi.org/10.1163/156856897x00357. [PubMed]
Chang, D., Mevorach, C., Kourtzi, Z., & Welchman, A. E. (2014). Training transfers the limits on perception from parietal to ventral cortex. Current Biology, 24(20), 2445–2450, https://doi.org/10.1016/j.cub.2014.08.058.
Chen, N., Cai, P., Zhou, T., Thompson, B., & Fang, F. (2016). Perceptual learning modifies the functional specializations of visual cortical areas. Proceedings of the National Academy of Sciences, USA, 113(20), 5724–5729, https://doi.org/10.1073/pnas.1524160113.
Chowdhury, S. A., & DeAngelis, G. C. (2008). Fine discrimination training alters the causal contribution of macaque area MT to depth perception. Neuron, 60(2), 367–377, https://doi.org/10.1016/j.neuron.2008.08.023. [PubMed]
Contemori, G., Trotter, Y., Cottereau, B. R., & Maniglia, M. (2019). tRNS boosts perceptual learning in peripheral vision. Neuropsychologia, 125, 129–136, https://doi.org/10.1016/j.neuropsychologia.2019.02.001. [PubMed]
Donovan, I., & Carrasco, M. (2018). Endogenous spatial attention during perceptual learning facilitates location transfer. Journal of Vision, 18(11):7, 1–16, https://doi.org/10.1167/18.11.7.
Donovan, I., Shen, A., Tortarolo, C., Barbot, A., & Carrasco, M. (2020). Exogenous attention facilitates perceptual learning in visual acuity to untrained stimulus locations and features. Journal of Vision, 20(4):18, 1–19, https://doi.org/10.1167/jov.20.4.18.
Dorais, A., & Sagi, D. (1997). Contrast masking effects change with practice. Vision Research, 37(13), 1725–1733, https://doi.org/10.1016/s0042-6989(96)00329-x. [PubMed]
Dosher, B., Jeter, P., Liu, J., & Lu, Z.-L. (2013). An integrated reweighting theory of perceptual learning. Proceedings of the National Academy of Sciences, USA, 110(33), 13678–13683, https://doi.org/10.1073/pnas.1312552110.
Dosher, B., & Lu, Z.-L. (2016). Visual perceptual learning and models. Annual Review of Vision Science, 3, 343–363, https://doi.org/10.1146/annurev-vision-102016-061249.
Dumoulin, S. O., & Wandell, B. A. (2008). Population receptive field estimates in human visual cortex. NeuroImage, 39(2), 647–660, https://doi.org/10.1016/j.neuroimage.2007.09.034. [PubMed]
Fahle, M., & Edelman, S. (1993). Long-term learning in vernier acuity: Effects of stimulus orientation, range and of feedback. Vision Research, 33(3), 397–412, https://doi.org/10.1016/0042-6989(93)90094-d. [PubMed]
Fang, F., Kersten, D., & Murray, S. O. (2008). Perceptual grouping and inverse fMRI activity patterns in human visual cortex. Journal of Vision, 8(7):2, 1–9, https://doi.org/10.1167/8.7.2. [PubMed]
Fiorentini, A., & Berardi, N. (1980). Perceptual learning specific for orientation and spatial frequency. Nature, 287(5777), 43–44, https://doi.org/10.1038/287043a. [PubMed]
Furmanski, C. S., & Engel, S. A. (2000). Perceptual learning in object recognition: object specificity and size invariance. Vision Research, 40(5), 473–484, https://doi.org/10.1016/s0042-6989(99)00134-0. [PubMed]
Furmanski, C. S., Schluppeck, D., & Engel, S. A. (2004). Learning strengthens the response of primary visual cortex to simple patterns. Current Biology, 14(7), 573–578, https://doi.org/10.1016/j.cub.2004.03.032.
Gerlach, C., & Poirel, N. (2020). Who's got the global advantage? Visual field differences in processing of global and local shape. Cognition, 195, 104131, https://doi.org/10.1016/j.cognition.2019.104131. [PubMed]
Gilbert, C. D., & Li, W. (2012). Adult visual cortical plasticity. Neuron, 75(2), 250–264, https://doi.org/10.1016/j.neuron.2012.06.030. [PubMed]
Gölcü, D., & Gilbert, C. D. (2009). Perceptual learning of object shape. Journal of Neuroscience, 29(43), 13621–13629, https://doi.org/10.1523/jneurosci.2612-09.2009.
Gu, Y., Liu, S., Fetsch, C. R., Yang, Y., Fok, S., Sunkara, A., … Angelaki, D. E. (2011). Perceptual learning reduces interneuronal correlations in macaque visual cortex. Neuron, 71(4), 750–761, https://doi.org/10.1016/j.neuron.2011.06.015. [PubMed]
He, D., Kersten, D., & Fang, F. (2012). Opposite modulation of high- and low-level visual aftereffects by perceptual grouping. Current Biology, 22(11), 1040–1045, https://doi.org/10.1016/j.cub.2012.04.026.
He, Q., Yang, X.-Y., Gong, B., Bi, K., & Fang, F. (2022). Boosting visual perceptual learning by transcranial alternating current stimulation over the visual cortex at alpha frequency. Brain Stimulation, 15(3), 546–553, https://doi.org/10.1016/j.brs.2022.02.018. [PubMed]
Herpich, F., Melnick, F., Agosta, S., Huxlin, K. R., Tadin, D., & Battelli, L. (2019). Boosting learning efficacy with non-invasive brain stimulation in intact and brain-damaged humans. Journal of Neuroscience, 39(28), 5551–5561, https://doi.org/10.1523/jneurosci.3248-18.2019.
Huang, X., Xia, H., Zhang, Q., Blakemore, C., Nan, Y., Wang, W., … Pu, M. (2022). New treatment for amblyopia based on rules of synaptic plasticity: A randomized clinical trial. Science China Life Sciences, 65(3), 451–465, https://doi.org/10.1007/s11427-021-2030-6. [PubMed]
Jehee, J. F., Ling, S., Swisher, J. D., van Bergen, R. S., & Tong, F. (2012). Perceptual learning selectively refines orientation representations in early visual cortex. Journal of Neuroscience, 32(47), 16747–16753, https://doi.org/10.1523/JNEUROSCI.6112-11.2012.
Jing, R., Yang, C., Huang, X., & Li, W. (2021). Perceptual learning as a result of concerted changes in prefrontal and visual cortex. Current Biology, 31(20), 4521–4533, https://doi.org/10.1016/j.cub.2021.08.007.
Kahnt, T., Grueschow, M., Speck, O., & Haynes, J.-D. (2011). Perceptual learning and decision-making in human medial frontal cortex. Neuron, 70(3), 549–559, https://doi.org/10.1016/j.neuron.2011.02.054. [PubMed]
Karim, A. A., Schler, A., Hegner, Y. L., Friedel, E., & Godde, B. (2006). Facilitating effect of 15-Hz repetitive transcranial magnetic stimulation on tactile perceptual learning. Journal of Cognitive Neuroscience, 18(9), 1577–1585, https://doi.org/10.1162/jocn.2006.18.9.1577. [PubMed]
Karni, A., & Sagi, D. (1991). Where practice makes perfect in texture discrimination: evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences, USA, 88(11), 4966–4970, https://doi.org/10.1073/pnas.88.11.4966.
Kattner, F., Cochrane, A., Cox, C. R., Gorman, T. E., & Green, C. S. (2017). Perceptual learning generalization from sequential perceptual training as a change in learning rate. Current Biology, 27(6), 840–846, https://doi.org/10.1016/j.cub.2017.01.046.
Kourtzi, Z., Betts, L. R., Sarkheil, P., & Welchman, A. E. (2005). Distributed neural plasticity for shape learning in the human visual cortex. PLoS Biology, 3(7), e204, https://doi.org/10.1371/journal.pbio.0030204. [PubMed]
Kourtzi, Z., Tolias, A. S., Altmann, C. F., Augath, M., & Logothetis, N. K. (2003). Integration of local features into global shapes: monkey and human fMRI studies. Neuron, 37(2), 333–346, https://doi.org/10.1016/s0896-6273(02)01174-1. [PubMed]
Kuai, S.-G., Levi, D., & Kourtzi, Z. (2013). Learning optimizes decision templates in the human visual cortex. Current Biology, 23(18), 1799–1804, https://doi.org/10.1016/j.cub.2013.07.052.
Kubilius, J., Baeck, A., Wagemans, J., & Op de Beeck, H. P. (2015). Brain-decoding fMRI reveals how wholes relate to the sum of parts. Cortex, 72, 5–14, https://doi.org/10.1016/j.cortex.2015.01.020. [PubMed]
Law, C. T., & Gold, J. I. (2008). Neural correlates of perceptual learning in a sensory-motor, but not a sensory, cortical area. Nature Neuroscience, 11(4), 505–513, https://doi.org/10.1038/nn2070. [PubMed]
Law, C. T., & Gold, J. I. (2009). Reinforcement learning can account for associative and perceptual learning on a visual-decision task. Nature Neuroscience, 12(5), 655–663, https://doi.org/10.1038/nn.2304. [PubMed]
Law, C. T., & Gold, J. I. (2010). Shared mechanisms of perceptual learning and decision making. Topics in Cognitive Science, 2(2), 226–238, https://doi.org/10.1111/j.1756-8765.2009.01044.x. [PubMed]
Lewis, C. M., Baldassarre, A., Committeri, G., Romani, G. L., & Corbetta, M. (2009). Learning sculpts the spontaneous activity of the resting human brain. Proceedings of the National Academy of Sciences, USA, 106(41), 17558–17563, https://doi.org/10.1073/pnas.0902455106.
Li, W. (2016). Perceptual learning: Use-dependent cortical plasticity. Annual Review of Vision Science, 2, 109–130, https://doi.org/10.1146/annurev-vision-111815-114351. [PubMed]
Liang, H., Gong, X., Chen, M., Yan, Y., Li, W., & Gilbert, C. D. (2017). Interactions between feedback and lateral connections in the primary visual cortex. Proceedings of the National Academy of Sciences, USA, 114(32), 8637–8642, https://doi.org/10.1073/pnas.1706183114.
Linde, M., Tendeiro, J. N., Selker, R., Wagenmakers, E.-J., & van Ravenzwaaij, D. (2021). Decisions about equivalence: A comparison of TOST, HDI-ROPE, and the Bayes factor [published online ahead of print November 4, 2021]. Psychological Methods. https://doi.org/10.1037/met0000402.
Lu, J., Luo, L., Wang, Q., Fang, F., & Chen, N. (2020). Cue-triggered activity replay in human early visual cortex. Science China Life Sciences, 64(1), 144–151, https://doi.org/10.1007/s11427-020-1726-5. [PubMed]
Lu, Z.-L., Lin, Z., & Dosher, B. A. (2016). Translating perceptual learning from the laboratory to applications. Trends in Cognitive Sciences, 20(8), 561–563, https://doi.org/10.1016/j.tics.2016.05.007. [PubMed]
Maniglia, M., & Seitz, A. R. (2018). Towards a whole brain model of perceptual learning. Current Opinion in Behavioral Sciences, 20, 47–55, https://doi.org/10.1016/j.cobeha.2017.10.004. [PubMed]
Morey, R. D., & Rouder, J. N. (2011). Bayes factor approaches for testing interval null hypotheses. Psychological Methods, 16(4), 406–419, https://doi.org/10.1037/a0024377. [PubMed]
Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9(3), 353–383, https://doi.org/10.1016/0010-0285(77)90012-3.
Nishina, S., Kawato, M., & Watanabe, T. (2009). Perceptual learning of global pattern motion occurs on the basis of local motion. Journal of Vision, 9(9):15, 1–6, https://doi.org/10.1167/9.9.15. [PubMed]
Op de Beeck, H. P., & Baker, C. I. (2010). The neural basis of visual object learning. Trends in Cognitive Sciences, 14(1), 22–30, https://doi.org/10.1016/j.tics.2009.11.002. [PubMed]
Op de Beeck, H. P., Baker, C. I., DiCarlo, J. J., & Kanwisher, N. G. (2006). Discrimination training alters object representations in human extrastriate cortex. Journal of Neuroscience, 26(50), 13025–13036, https://doi.org/10.1523/jneurosci.2481-06.2006.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vision, 10(4), 437–442, https://doi.org/10.1163/156856897x00366. [PubMed]
Poggio, T., Fahle, M., & Edelman, S. (1992). Fast perceptual learning in visual hyperacuity. Science, 256(5059), 1018–1021, https://doi.org/10.1126/science.1589770. [PubMed]
Rokem, A., & Silver, M. A. (2010). Cholinergic enhancement augments magnitude and specificity of visual perceptual learning in healthy humans. Current Biology, 20(19), 1723–1728, https://doi.org/10.1016/j.cub.2010.08.027.
Sagi, D. (2011). Perceptual learning in Vision Research. Vision Research, 51(13), 1552–1566, https://doi.org/10.1016/j.visres.2010.10.019. [PubMed]
Schoups, A., Vogels, R., & Orban, G. A. (1995). Human perceptual learning in identifying the oblique orientation: retinotopy, orientation specificity and monocularity. The Journal of Physiology, 483(3), 797–810, https://doi.org/10.1113/jphysiol.1995.sp020623. [PubMed]
Schoups, A., Vogels, R., Qian, N., & Orban, G. (2001). Practising orientation identification improves orientation coding in V1 neurons. Nature, 412(6846), 549–553, https://doi.org/10.1038/35087601. [PubMed]
Schwartz, S., Maquet, P., & Frith, C. (2002). Neural correlates of perceptual learning: A functional MRI study of visual texture discrimination. Proceedings of the National Academy of Sciences, USA, 99(26), 17137–17142, https://doi.org/10.1073/pnas.242414599.
Shibata, K., Sasaki, Y., Bang, J., Walsh, E. G., Machizawa, M. G., Tamaki, M., … Watanabe, T. (2017). Overlearning hyperstabilizes a skill by rapidly making neurochemical processing inhibitory-dominant. Nature Neuroscience, 20(3), 470–475, https://doi.org/10.1038/nn.4490. [PubMed]
Shiu, L. P., & Pashler, H. (1992). Improvement in line orientation discrimination is retinally local but dependent on cognitive set. Perception & Psychophysics, 52(5), 582–588, https://doi.org/10.3758/bf03206720. [PubMed]
Sigman, M., & Gilbert, C. D. (2000). Learning to find a shape. Nature Neuroscience, 3(3), 264–269, https://doi.org/10.1038/72979. [PubMed]
Song, Y., Chen, N., & Fang, F. (2021). Effects of daily training amount on visual motion perceptual learning. Journal of Vision, 21(4):6, 1–9, https://doi.org/10.1167/jov.21.4.6. [PubMed]
Song, Y., Hu, S., Li, X., Li, W., & Liu, J. (2010). The role of top-down task context in learning to perceive objects. Journal of Neuroscience, 30(29), 9869–9876, https://doi.org/10.1523/jneurosci.0140-10.2010.
Sripati, A. P., & Olson, C. R. (2010). Responses to compound objects in monkey inferotemporal cortex: The whole is equal to the sum of the discrete parts. Journal of Neuroscience, 30(23), 7948–7960, https://doi.org/10.1523/jneurosci.0016-10.2010.
Stoll, S., Finlayson, N. J., & Schwarzkopf, D. S. (2020). Topographic signatures of global object perception in human visual cortex. NeuroImage, 220, 116926, https://doi.org/10.1016/j.neuroimage.2020.116926. [PubMed]
Ullman, S. (2007). Object recognition and segmentation by a fragment-based hierarchy. Trends in Cognitive Sciences, 11(2), 58–64, https://doi.org/10.1016/j.tics.2006.11.009. [PubMed]
van Doorn, J., van den Bergh, D., Böhm, U., Dablander, F., Derks, K., Draws, T., & Wagenmakers, E.-J. (2021). The JASP guidelines for conducting and reporting a Bayesian analysis. Psychonomic Bulletin & Review, 28(3), 813–826, https://doi.org/10.3758/s13423-020-01798-5. [PubMed]
Watanabe, T., & Sasaki, Y. (2015). Perceptual learning: Toward a comprehensive theory. Annual Review of Psychology, 66, 197–221, https://doi.org/10.1146/annurev-psych-010814-015214. [PubMed]
Xi, J., Zhang, P., Jia, W.-L., Chen, N., Yang, J., Wang, G.-T., … Huang, C.-B. (2020). Multi-stage cortical plasticity induced by visual contrast learning. Frontiers in Neuroscience, 14, 555701, https://doi.org/10.3389/fnins.2020.555701. [PubMed]
Yotsumoto, Y., Watanabe, T., Chang, L. H., & Sasaki, Y. (2013). Consolidated learning can be susceptible to gradually-developing interference in prolonged motor learning. Frontiers in Computational Neuroscience, 7, 69, https://doi.org/10.3389/fncom.2013.00069. [PubMed]
Yotsumoto, Y., Watanabe, T., & Sasaki, Y. (2008). Different dynamics of performance and brain activation in the time course of perceptual learning. Neuron, 57(6), 827–833, https://doi.org/10.1016/j.neuron.2008.02.034. [PubMed]
Yu, Q., Zhang, P., Qiu, J., & Fang, F. (2016). Perceptual learning of contrast detection in the human lateral geniculate nucleus. Current Biology, 26(23), 3176–3182, https://doi.org/10.1016/j.cub.2016.09.034.
Figure 1.
 
Stimuli and experimental protocols in Experiments 1 and 2. (A) Schematic descriptions of trials in the angle discrimination task (red bar), the orientation discrimination task in the right visual field (blue bar), and the orientation discrimination task in the left visual field (gray bar). (B) Experimental protocols for the configuration training group and the element training group in Experiment 1 (7-day training) and Experiment 2 (2-day training). At Pre and Post, the three tasks were counterbalanced across subjects.
Figure 2.
 
Results of Experiment 1. (A) Learning curves for the configuration training group and the element training group. Discrimination thresholds are plotted as a function of training day. (B) Discrimination thresholds for the angle discrimination task, the orientation discrimination task in the right visual field, and the orientation discrimination task in the left visual field measured at Pre and Post. (C) Improvements in angle and orientation discrimination performance for the two training groups at Post, relative to Pre (***p < 0.001, **p < 0.01, *p < 0.05). Error bars denote 1 SEM across subjects.
Figure 3. Results of Experiment 2 (2-day training); the results of Experiment 1 (7-day training) are presented here for comparison purposes. (A) Improvements in angle and orientation discrimination performance for the configuration training groups in Experiment 1 and Experiment 2 at Post, relative to Pre. (B) Improvements in angle and orientation discrimination performance for the element training groups in Experiment 1 and Experiment 2 at Post, relative to Pre (**p < 0.01). Error bars denote 1 SEM across subjects.
Figure 4. Experimental protocol and results of Experiment 3. Results of Experiment 2 (2-day training) are presented here for comparison purposes. (A) Experimental protocol. On each training day, subjects were trained with the orientation discrimination task in the left and right visual fields alternately. At Pre and Post, the three tasks were counterbalanced across subjects. (B) Improvements in angle and orientation discrimination performance for the configuration training group in Experiment 2 and the interleaved element training group in Experiment 3 at Post, relative to Pre (*p < 0.05). Error bars denote 1 SEM across subjects.