Article | Open Access | October 2018, Volume 18, Issue 11
Endogenous spatial attention during perceptual learning facilitates location transfer
Author Affiliations
  • Ian Donovan
    Department of Psychology, New York University, New York, NY, USA
  • Marisa Carrasco
    Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
    marisa.carrasco@nyu.edu
Journal of Vision October 2018, Vol.18, 7. doi:https://doi.org/10.1167/18.11.7
Citation: Ian Donovan, Marisa Carrasco; Endogenous spatial attention during perceptual learning facilitates location transfer. Journal of Vision 2018;18(11):7. https://doi.org/10.1167/18.11.7.

Abstract

Covert attention and perceptual learning enhance perceptual performance. The relation between these two mechanisms is largely unknown. Previously, we showed that manipulating involuntary, exogenous spatial attention during training improved performance at trained and untrained locations, thus overcoming the typical location specificity. Notably, attention-induced transfer only occurred for high stimulus contrasts, at the upper asymptote of the psychometric function (i.e., via response gain). Here, we investigated whether and how voluntary, endogenous attention, the top-down and goal-based type of covert visual attention, influences perceptual learning. Twenty-six participants trained in an orientation discrimination task at two locations: half of participants received valid endogenous spatial precues (attention group), while the other half received neutral precues (neutral group). Before and after training, all participants were tested with neutral precues at two trained and two untrained locations. Within each session, stimulus contrast varied on a trial basis from very low (2%) to very high (64%). Performance was fit by a Weibull psychometric function separately for each day and location. Performance improved for both groups at the trained location, and unlike training with exogenous attention, at the threshold level (i.e., via contrast gain). The neutral group exhibited location specificity: Thresholds decreased at the trained locations, but not at the untrained locations. In contrast, participants in the attention group showed significant location transfer: Thresholds decreased to the same extent at both trained and untrained locations. These results indicate that, similar to exogenous spatial attention, endogenous spatial attention induces location transfer, but influences contrast gain instead of response gain.

Introduction
The total available sensory information at any given moment is far too much for the visual system to process at once. Selecting and efficiently evaluating the most important sensory signals is critical to function effectively. In the short term, visual selective attention allows us to select relevant visual information, while in the long term, perceptual learning (PL) refines how the system processes stimuli in future encounters. Attention and learning both improve performance, but it is largely unknown how these processes and their underlying mechanisms are related. 
Visual perceptual learning (VPL) is defined as improvement in a visual perceptual task due to practice or experience (for reviews, see Pouget & Bavelier, 2007; Sagi, 2011; Seitz, 2017; Watanabe & Sasaki, 2015). VPL is distinct from procedural learning, which involves learning the required motor response for performing a given task. Rather, it represents enhanced perceptual sensitivity or discriminability regarding the trained stimulus and task demands (for reviews, see Seitz, 2017; Watanabe & Sasaki, 2015), and thus serves as evidence of neuroplasticity in the adult, and even the elderly (DeLoss, Watanabe, & Andersen, 2015) brain. Such performance benefits can be long lasting (Karni & Sagi, 1993; Watanabe et al., 2002), even with very short training periods (Yashar & Carrasco, 2016; Yashar, Chen, & Carrasco, 2015). 
VPL is consistently reported to be highly specific: Performance improvements are usually constrained to trained stimuli and task. Relative to the trained conditions, performance suffers with new or untrained task parameters, such as the stimulus retinal location (e.g., Ball & Sekuler, 1982; Berardi & Fiorentini, 1987; Crist, Kapadia, Westheimer, & Gilbert, 1997; Fahle, Edelman, & Poggio, 1995; Schoups, Vogels, & Orban, 1995; Shiu & Pashler, 1992; Yashar et al., 2015), stimulus feature—orientation, contrast, motion direction (Ahissar & Hochstein, 1997; Berardi & Fiorentini, 1987; Fiorentini & Berardi, 1980, 1981; Watanabe, Nañez, & Sasaki, 2001)—and even the eye used to perform the task (Karni & Sagi, 1991; for reviews, see Sagi, 2011; Watanabe & Sasaki, 2015). Specificity is central to many theories and models of VPL. Location and feature specificity of VPL are often attributed to changes in primary visual cortex (V1; Ghose, Yang, & Maunsell, 2002; Gu et al., 2011; Watanabe et al., 2002; Yotsumoto, Watanabe, & Sasaki, 2008; G.-L. Zhang, Cong, Song, & Yu, 2013), as V1 neurons respond to precise retinal locations and primitive visual features. But studies have also implicated regions beyond early visual areas, including changes in connectivity between visual and decision-making areas (e.g., lateral intraparietal cortex, LIP), as well as changes within decision-making regions themselves (Chowdhury & DeAngelis, 2008; Jeter, Dosher, Liu, & Lu, 2010; Law & Gold, 2008). Notably, neurons in these higher level areas have larger receptive fields compared to early visual cortex and are less selective for spatial locations and specific visual features. Proposed models of PL have accounted for location specificity and transfer through reweighting sensory signals at the decision stage (Dosher, Jeter, Liu, & Lu, 2013; Jeter et al., 2010; Petrov, Dosher, & Lu, 2005). According to these models, specificity arises when reweighting occurs for early visual representations of the trained locations, whereas transfer arises when reweighting occurs for higher level representations that are more location-independent (for a review, see Dosher & Lu, 2017). Understanding why and how certain VPL tasks and training parameters promote specificity over generalization, and vice versa, is complex; it is likely that a broad set of cortical regions and networks underlie these processes (Maniglia & Seitz, 2018). 
Specificity represents a robust limit on VPL's ability to improve vision, as each stimulus location and feature parameter must, presumably, be trained in order to have general improvements across the visual field. A major clinical challenge is to devise more efficient training regimens that allow generalization of improvements during rehabilitation. Suitable PL training has been shown to improve visual performance in individuals with peripheral damage (Nahum, Nelken, & Ahissar, 2009), visual acuity in amblyopic (Levi, 2005; Levi & Li, 2009; Polat, Ma-Naim, Belkin, & Sagi, 2004; Polat, Ma-Naim, & Spierer, 2009; Xi, Jia, Feng, Lu, & Huang, 2014; J.-Y. Zhang, Cong, Klein, Levi, & Yu, 2014) and presbyopic (Polat et al., 2012; Sterkin et al., 2017) patients, contrast sensitivity in cortically blind patients (M. R. Cavanaugh, Barbot, Carrasco, & Huxlin, 2017; M. R. Cavanaugh et al., 2015; Sahraie et al., 2006), and visual motion discrimination in patients with V1 damage (M. R. Cavanaugh et al., 2017; Das, Tadin, & Huxlin, 2014; Huxlin et al., 2009). However, even with such interventions, the prognoses for these visual disorders remain poor. A greater understanding of the factors important to and mechanisms responsible for VPL generalization in the adult brain is crucial for creating effective visual rehabilitation protocols. Of particular interest is VPL location specificity and the potential for transfer to untrained locations, given that many vision disorders are characterized by functioning vision at some retinal locations and severe deficits at other locations. 
In contrast to the many studies showing that specificity is an inherent hallmark of VPL, some studies have shown that, with certain training procedures, PL generalizes to untrained locations, features, and tasks (Harris, Gliksberg, & Sagi, 2012; Hung & Seitz, 2014; Liu & Weinshall, 2000; Sasaki, Nañez, & Watanabe, 2010; Sowden, Rose, & Davies, 2002; Szpiro, Spering, & Carrasco, 2014; Wang, Zhang, Klein, Levi, & Yu, 2014; Xiao et al., 2008; T. Zhang, Xiao, Klein, Levi, & Yu, 2010). Many studies have specifically focused on exploring the conditions leading to location specificity versus transfer. Some factors reported to influence specificity include the length of training (Jeter et al., 2010), task precision (Jeter, Dosher, Petrov, & Lu, 2009), sensory adaptation (Harris et al., 2012), sensory uncertainty of stimulus features in visual search (Yashar & Denison, 2017), exposure to stimuli at untrained locations prior to training (T. Zhang et al., 2010), and variability in task difficulty (Hung & Seitz, 2014; but see Discussion of Wang et al., 2014). 
One of the most prominent training regimens reported to elicit transfer from trained to untrained retinal locations, known as “double training,” requires participants to perform a task with stimuli presented at the untrained retinal locations throughout training (Hung & Seitz, 2014; Wang, Zhang, Klein, Levi, & Yu, 2012; Xiao et al., 2008; Xie & Yu, 2017) or at some time before the post test (G.-L. Zhang et al., 2013; T. Zhang et al., 2010). A rule-based learning model has been proposed to account for these findings (Wang et al., 2016; Xie & Yu, 2017; G.-L. Zhang et al., 2013; J.-Y. Zhang et al., 2014; T. Zhang et al., 2010). This model suggests that PL primarily involves a high-level process in which observers learn “rules” for performing the task efficiently, and that specificity is a consequence of an inability to link signals from early visual cortex that represent untrained stimuli to the learned rule scheme. Critically, the model predicts transfer only if exposure to untrained stimuli locations or features occurs during or before training, because the rule-scheme must be learned first. More recent studies have revealed that Vernier learning can be “piggybacked,” that is, transferred to an untrained location, when training on Vernier acuity is paired with orientation or motion-direction training at the same trained location (Hung & Seitz, 2014; Wang et al., 2014). Double training studies provide additional evidence for the benefits of interleaving multiple tasks during training. For example, in both auditory (Wright, Sabin, Zhang, Marrone, & Fitzgerald, 2010) and visual (Szpiro, Wright, & Carrasco, 2014) tasks, when a given amount of training on one task is insufficient to promote learning on its own, training on another task using the same stimulus enables PL on both tasks. 
Attention has been postulated to play a critical role in VPL. Selective attention, the process by which a small subset of sensory information is selected and prioritized for processing, is critical for perception, learning, and memory. The role of selective attention in PL has been discussed for more than a decade (for reviews, see Ahissar & Hochstein, 2004; Goldstone, 1998; Ito, Westheimer, & Gilbert, 1998; W. Li, Piech, & Gilbert, 2004; Lu, Liu, & Dosher, 2009; Roelfsema & van Ooyen, 2005; Seitz & Watanabe, 2005; Tsushima & Watanabe, 2009; Watanabe & Sasaki, 2015). Attention's role in VPL has often been inferred in behavioral tasks in which it has been equated with task difficulty (Bartolucci & Smith, 2011; Huang & Watanabe, 2012), used interchangeably with conscious perception (Tsushima & Watanabe, 2009), or used to describe the fact that observers perform a task with a specific stimulus (Chirimuuta, Burr, & Morrone, 2007; Meuwese, Post, Scholte, & Lamme, 2013; Paffen, Verstraten, & Vidnyánszky, 2008; Seitz, Kim, & Watanabe, 2009; Watanabe et al., 2001; Watanabe & Sasaki, 2015). Likewise, attention's role in VPL has been indirectly inferred from changes in neural activity in attention-related brain areas (Mukai et al., 2007; Tsushima, Sasaki, & Watanabe, 2006). Thus, the link between attention and VPL is mostly speculative and remains poorly understood (Donovan, Szpiro, & Carrasco, 2015; Dosher, Han, & Lu, 2010; Szpiro & Carrasco, 2015). 
Visual attention can be covertly deployed (i.e., without accompanying eye movements) in a voluntary, conceptually driven manner (endogenous attention) or in an involuntary, stimulus-driven fashion (exogenous attention). Both types of attention improve performance on a variety of tasks mediated by early visual processes (for reviews, see Carrasco, 2011, 2014). Because attention serves as one of the most important mechanisms for gating what information is processed and how efficiently, a greater understanding of VPL requires an understanding of how attention modulates it. Nonetheless, very few studies have directly manipulated attention to examine its effect: It has been reported that the effects of object-based attention decrease with training (Dosher et al., 2010), and that feature-based attention facilitates recovery of motion perception in people with cortical blindness (M. R. Cavanaugh et al., 2017). Particularly relevant for the present study, there are three studies in which spatial attention has been manipulated (Donovan et al., 2015; Mukai, Bahadur, Kesavabhotla, & Ungerleider, 2011; Szpiro & Carrasco, 2015). 
Mukai et al. (2011) manipulated covert spatial attention in two separate groups of participants: One group trained with exogenous (involuntary, stimulus-driven) attentional cues, and the other trained with endogenous (voluntary, goal-driven) attentional cues. Training with either type of cue resulted in better performance when the target appeared at the cued location, but only those trained with exogenous cues exhibited lower thresholds after training. The authors interpreted these results to suggest that exogenous and endogenous attention may influence VPL via distinct mechanisms. However, because every participant trained with cues of all three validities (valid, invalid, and neutral), the attention effect could not be isolated: The results cannot distinguish among the influences of each cue type, as all three were used throughout training in each observer. 
Szpiro and Carrasco (2015), the first study to isolate the effects of exogenous attention during acquisition, revealed that attention can enable learning: Observers who trained with exogenous attention cues learned, but those who trained with neutral cues, under otherwise identical conditions, did not. In a different study, we found that training with exogenous attention facilitates transfer of improved performance in an orientation discrimination task to untrained locations, whereas training with neutral cues results in greater location specificity. Specifically, exogenous spatial attention induced transfer via response gain (i.e., at the upper asymptote of the psychometric function; Donovan et al., 2015). 
Even with scant empirical evidence, several papers have relied on hypotheses regarding the role of attention in VPL (Ahissar & Hochstein, 2004; Dolan et al., 1997; Gilbert, Sigman, & Crist, 2001; Sasaki et al., 2010; Sasaki, Náñez, & Watanabe, 2012; Watanabe & Sasaki, 2015; Xiao et al., 2008; Yotsumoto & Watanabe, 2008). For example, attention has been considered a gate for PL (Ahissar & Hochstein, 2004; Roelfsema & van Ooyen, 2005; Roelfsema, van Ooyen, & Watanabe, 2010; Sasaki et al., 2010), and to have important implications for the emergence of transfer versus specificity (Fahle, 2009; Mukai et al., 2007; Sasaki et al., 2012; Wang et al., 2014; Watanabe & Sasaki, 2015; Yotsumoto & Watanabe, 2008; G.-L. Zhang et al., 2013; T. Zhang et al., 2010). Usually, mentions of attention in these contexts refer to either endogenous spatial attention or endogenous feature-based attention, and stress possible changes, due to training, in the voluntary allocation of attention to a certain spatial location or feature value. Thus far, only the effect of exogenous spatial attention on PL in neurotypical individuals has been isolated (Donovan et al., 2015; Szpiro & Carrasco, 2015), and it is unknown whether and how training with endogenous attention improves learning and alters specificity. Given that endogenous and exogenous attention differ in their perceptual effects (e.g., Barbot & Carrasco, 2017; Barbot, Landy, & Carrasco, 2012; Giordano, McElree, & Carrasco, 2009; Ling & Carrasco, 2006; for reviews, see Carrasco, 2011; Carrasco & Barbot, 2015) and neural substrates (Busse, Katzner, & Treue, 2008; Corbetta & Shulman, 2002; Dugué, Merriam, Heeger, & Carrasco, 2017), the two may differentially influence VPL. 
Here, to isolate the influence of endogenous spatial attention, we adapted the protocol of our investigation of exogenous spatial attention's influence on location specificity (Donovan et al., 2015). Participants were tested on an orientation discrimination task with neutral cues before and after training. During training, half the participants were presented with valid endogenous cues (attention group), which directed participants to pay attention to the location of an upcoming target, while the other half of participants received neutral cues (neutral group). We found that (a) for both neutral and attention groups, learning at the trained locations arose via decreased contrast thresholds; and that (b) for the attention group only, learning transferred to untrained locations via contrast gain comparable to that found for the training locations. This pattern of results was corroborated by both repeated-measures analyses of variance (ANOVAs) and group-level Bayesian model comparison. These results indicate that endogenous spatial attention facilitates the transfer of orientation learning to untrained locations, and does so via contrast gain, a mechanism distinct from that of exogenous spatial attention on VPL, response gain. 
Methods
Participants
Twenty-six participants (20 females; M = 21.62 years old) participated in an orientation discrimination task for five consecutive sessions, one session per day. All participants had normal or corrected-to-normal vision, were naive to the purposes of the study, and had not participated in an orientation discrimination task prior to this study. 
Apparatus
Stimuli were generated using PsychToolbox (Brainard, 1997; Pelli, 1997) in MATLAB (MathWorks, Natick, MA) and were displayed on a 21-in. CRT monitor (1,280 × 960 at 85 Hz). Eye position was monitored using an infrared eye tracker (Eyelink 1000 CL, SR Research, Kanata, Ontario, Canada). 
Stimuli and procedure
Stimuli were presented on a gray background. Figure 1 shows the trial sequence. Each trial started with the presentation of a white fixation cross (0.4° × 0.4°, degrees of visual angle [dva]) for 600 ms. A precue was then presented for 500 ms. The precue was either (a) neutral: two 0.75°-long black lines, starting 0.65° from fixation and pointing toward the two possible target locations along one diagonal (i.e., the top right and bottom left quadrants or vice versa); or (b) valid: one 0.75°-long black line, starting 0.65° from fixation and pointing toward the target location for that trial. Following a 400-ms interstimulus interval (ISI), one Gabor patch (4 cycles per degree sinusoidal grating in a Gaussian envelope; subtending 2°) was presented for 60 ms at one of four intercardinal (equidistant from the horizontal and vertical meridians) isoeccentric locations, 5° from fixation (center-to-center). Following a 300-ms ISI, to eliminate location uncertainty, a response cue (black line 0.75° in length) was presented 0.65° from fixation for 300 ms, pointing toward the location where the target had just been presented. After the response cue disappeared, a brief tone indicated that the participant should report the target orientation, either clockwise or counterclockwise relative to vertical, within 900 ms. Auditory feedback was provided after each trial, informing participants of the accuracy of each response, and text feedback at the end of each block informed participants of their percent correct on that block. Target contrast varied from 2% to 64% across eight levels (2%, 4%, 8%, 12%, 16%, 24%, 32%, and 64%), each occurring on an equal number of trials per block in random order. Participants were required to fixate at the center of the cross before the trial began, and stimulus presentation was contingent on maintaining fixation. If gaze deviated more than 1.5 dva from fixation at any point between the beginning of the trial and the beginning of the response period, the trial ended immediately, the fixation cross turned red for 300 ms, and a trial with identical parameters (stimulus location, contrast, and tilt) was added to the end of the block, ensuring the successful completion of all trials within the block without an eye movement. 
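For concreteness, the trial timeline and stimulus parameters above can be collected in one place. The following is a descriptive sketch only; the constant names are ours, not the authors' experiment code, which was written in MATLAB with PsychToolbox:

import numpy as np

# Trial timeline and stimulus parameters as described in the text.
TRIAL_TIMELINE_MS = [
    ("fixation", 600),         # white fixation cross, 0.4 x 0.4 dva
    ("precue", 500),           # neutral: two lines; valid: one line to the target
    ("isi_1", 400),
    ("target", 60),            # 4-cpd Gabor, 2 dva, at 5 dva eccentricity
    ("isi_2", 300),
    ("response_cue", 300),     # eliminates location uncertainty
    ("response_window", 900),  # report clockwise vs. counterclockwise
]
CONTRAST_LEVELS = np.array([0.02, 0.04, 0.08, 0.12, 0.16, 0.24, 0.32, 0.64])
FIXATION_BREAK_RADIUS_DVA = 1.5  # gaze beyond this aborts and re-queues the trial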
Figure 1
 
Trial sequence. Participants fixated on a central cross. A precue was presented for 500 ms, and was either two lines pointing to the two possible target locations (neutral) or the location of the upcoming target (valid). After a 400-ms ISI, the target, a Gabor patch tilted either clockwise or counterclockwise, was presented for 60 ms. After a 300-ms delay, a response cue appeared for 300 ms, indicating the location at which the target appeared. Following response-cue offset, participants were given 900 ms to report the orientation of the target: clockwise or counterclockwise.
Figure 2 illustrates the training schedule. On the first session, the pretest, each participant completed 60–100 trials of practice to familiarize them with the procedure and reduce procedural learning during the experimental blocks to accurately measure baseline performance on the task. Following practice, participants completed two staircases (psi method) of 100 trials each to determine the orientation difference from vertical that would yield 75% accuracy at 64% contrast. The pretest session contained four blocks of 256 trials each (eight trials at each location at each contrast, clockwise and counterclockwise stimuli counterbalanced), with short breaks after every quarter of a block (64 trials) and between each block. Within a single block, the target appeared at one of two locations located along the same diagonal (i.e., top left and bottom right in one block, top right and bottom left in the other). The tested diagonal alternated between blocks. All trials had a neutral precue. The posttest was identical to the pretest except that it was administered after the training sessions. 
Figure 2
 
Training and testing schedule. Participants were tested at four locations, two locations per block, before and after 3 days of training at two diagonal locations. Half of the participants were trained with all valid cues (attention group), and half were trained with neutral cues (neutral group). All participants received only neutral cues on the pre- and posttests.
The middle three sessions were training sessions. Half of the participants were in the neutral training group, in which the precue was neutral on all trials during training sessions; the other half were in the attention training group, in which the training precues were valid central cues. During training, the target always appeared at one of two locations along the same diagonal (i.e., top left and bottom right, or top right and bottom left) in every block; these were the trained locations, and the remaining two locations were the untrained locations. 
Results
We investigated whether and how performance changed between the pre- and the posttests at the trained locations compared to the untrained locations. To do so, we measured the difference in accuracy and reaction time (RT) and compared performance (a) between the pre- and posttest; and (b) between the trained and untrained locations. Performance at the two trained locations was analyzed collectively within each session, as was performance at the two untrained locations. Overall accuracy (collapsed across contrast levels) was quantified as the proportion correct at each location within a session. To assess whether there was a speed-accuracy trade-off, we also calculated geometric means of RT (correct trials only) for each location and session. 
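For reference, a minimal sketch of the RT summary statistic: the geometric mean of correct-trial RTs, computed as the exponential of the mean log RT (the example RT values are ours):

import numpy as np

def geometric_mean_rt(rts_ms):
    # Geometric mean of correct-trial RTs: exp of the mean log RT.
    rts_ms = np.asarray(rts_ms, dtype=float)
    return np.exp(np.mean(np.log(rts_ms)))

print(geometric_mean_rt([450, 520, 480, 610]))  # ~511.6 (ms)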
For overall accuracy, a three-way mixed ANOVA (location: trained vs. untrained; session: pretest vs. posttest; group: neutral vs. attention) was conducted. The three-way interaction was not significant, indicating that changes in overall accuracy between the pre- and posttests were similar for both groups. There was a main effect of session, F(1, 24) = 57.029, p < 0.001, indicating that performance improved with training, and an interaction between session and location, F(1, 24) = 22.823, p < 0.001, indicating that the performance improvement was more pronounced at the trained locations. In Figure 3, accuracy is plotted as a function of session, with each point representing one block. Within each session, accuracy was similar across blocks, with no systematic change. During the posttest, in both groups, the difference in accuracy between trained and untrained locations appears only in the first two blocks (i.e., the first block for each location type). An ANOVA for d′ also showed no significant three-way interaction, only a main effect of session, F(1, 24) = 61.419, p < 0.001. For correct RT, there was also a main effect of session, F(1, 24) = 11.385, p < 0.01, indicating that RT decreased with training. Together, these results indicate that there was no speed-accuracy tradeoff. 
Figure 3
 
Accuracy (percent correct) across sessions. Each data point is one block. Light colors correspond to the neutral group, and dark colors to the attention group. Blue corresponds to trained locations, and red to untrained locations. Error bars indicate standard error of the mean. In the pre- and posttest, the location types alternated between blocks. For illustration purposes, trained locations are plotted before untrained locations; the block order was randomized between subjects and was the same on the pre- and posttest within subjects.
We also assessed changes across the psychometric function, as assessing only aggregate performance may obscure improvements at a subset of contrast values. To this end, performance was evaluated as percent correct at each stimulus contrast. The data were fit by a Weibull function:  
\begin{equation}y\left( x \right) = 0.5 + \left( {1 - 0.5 - \lambda } \right) \times \left( {1 - {e^{ - {{\left( {x/\alpha } \right)}^\beta }}}} \right)\end{equation}
using a maximum likelihood criterion, where y represents performance as a function of contrast x, λ is the lapse rate (1 minus the asymptotic performance at high contrast values), α is the contrast at which performance reaches 63.21% of the distance between chance and asymptotic performance, and β determines the slope of the psychometric function. Figure 4 shows the aggregate performance and fitted functions (averaged bootstrapped curve fits) for both groups, with bootstrapped confidence intervals for asymptote (1 − λ) and threshold (α).  
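A minimal sketch of this fitting procedure, assuming a binomial likelihood and SciPy's optimizer; the parameter bounds, starting values, and example counts are ours, not the authors':

import numpy as np
from scipy.optimize import minimize

def weibull(x, alpha, beta, lam):
    # Proportion correct at contrast x: chance (0.5) up to asymptote (1 - lam).
    return 0.5 + (1 - 0.5 - lam) * (1 - np.exp(-(x / alpha) ** beta))

def neg_log_likelihood(params, contrasts, n_correct, n_trials):
    alpha, beta, lam = params
    p = np.clip(weibull(contrasts, alpha, beta, lam), 1e-6, 1 - 1e-6)
    # Binomial log-likelihood of the observed correct-response counts.
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

contrasts = np.array([0.02, 0.04, 0.08, 0.12, 0.16, 0.24, 0.32, 0.64])
n_trials = np.full(8, 64)                               # trials per contrast (illustrative)
n_correct = np.array([33, 36, 41, 47, 52, 55, 58, 60])  # made-up counts

fit = minimize(neg_log_likelihood, x0=[0.10, 2.0, 0.05],
               args=(contrasts, n_correct, n_trials),
               bounds=[(0.01, 0.64), (0.5, 10.0), (0.0, 0.5)])
alpha_hat, beta_hat, lam_hat = fit.x  # threshold, slope, lapse rate

Bootstrapped confidence intervals, as in Figure 4, would follow by resampling trials with replacement and refitting.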
Figure 4
 
Psychometric functions for bootstrapped data across subjects within the neutral and attention groups. Light-colored curves are pretest performance, while dark-colored curves are posttest performance. Blue curves are trained locations, while red curves are untrained locations. Shaded regions represent 95% bootstrapped confidence intervals for α and 1 − λ.
We assessed the difference in learning at the trained and untrained locations between the neutral and attention groups before and after training using a three-way ANOVA conducted separately for three aspects of these fitted curves: (a) α, a measure of threshold; (b) asymptotic performance, 1 − λ (arcsine square-root transformed; Burnett, Close, d'Avossa, & Sapir, 2016; Donovan et al., 2015; Sokal & Rohlf, 1981; White, Lunau, & Carrasco, 2014), the accuracy at which the psychometric function saturates at higher contrast values; and (c) β, the slope of the psychometric function. For asymptote (transformed) and β, there was no three-way interaction (asymptote, F[1, 24] = 1.223, p > 0.10; β, F[1, 24] = 1.125, p > 0.10), indicating no difference between groups in the change between pre- and posttests for the trained and untrained locations. 
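For reference, the arcsine square-root transform applied to the asymptote values is the standard variance-stabilizing transform for proportions:

\begin{equation}p^{\prime} = \arcsin \left( \sqrt p \right)\end{equation}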
For α, however, there was a significant three-way interaction of session, location, and cue type, F(1, 24) = 5.137, p < 0.05, and a main effect of session, F(1, 24) = 14.152, p < 0.01. Figure 5 shows the mean α for each location type and group in the pre- and posttest. A two-way ANOVA for the neutral group (Session × Location Interaction: F[1, 12] = 7.492, p < 0.02; main effect of session: F[1, 12] = 5.788, p < 0.05) revealed that this difference was due to the fact that α decreased (i.e., improved) for the trained location, t(12) = 3.199, p < 0.01, but not for the untrained location, t(12) = 0.123, p > 0.10. Conversely, in the attention group, there was no Session × Location Interaction, F(1, 12) < 1, but there was a main effect of session, F(1, 12) = 8.603, p < 0.03, indicating that α decreased to a similar degree for the trained and untrained locations. 
Figure 5
 
Threshold values (α) for each group, location type, and session. Error bars indicate standard error of the mean. In the neutral group, threshold improvement was specific to trained locations. In the attention group, threshold improvement transferred to untrained locations.
Importantly, α in the pretest was not significantly different between the trained and untrained locations in either group (neutral: t[12] = 1.634, p > 0.1; attention: t[12] = 1.711, p > 0.1), indicating that the results were due to training and not to an initial difference in threshold between locations. To further allay the possible concern that the differential results could have emerged from differences in pretest performance, we removed the participant in the neutral group with the lowest pretest threshold at the untrained locations and the participant in the attention group with the highest. An ANOVA with the remaining 24 participants yielded the same results (three-way interaction of session, location, and cue type: F[1, 22] = 4.653, p < 0.05; main effect of session: F[1, 22] = 13.240, p < 0.01). 
To conclude, there was location specificity at threshold (α) for the neutral group, but full location transfer for the attention group. Figure 6 shows the change in α, relative to the pretest, for the neutral and attention groups across all sessions. 
Figure 6
 
Threshold values (α) for each group, location type, and session, relative to the pretest. Light colors correspond to the neutral group, while dark colors correspond to the attention group. Blue corresponds to trained locations, while red corresponds to untrained locations. Error bars indicate standard error of the mean. In the neutral group, threshold improvement was specific to trained locations. In the attention group, threshold improvement transferred to untrained locations.
Model comparison
Although the results of our ANOVA are clear regarding the changes in α, we further verified that the best explanation for our data is location specificity of threshold improvements in the neutral group and location transfer of threshold improvements in the attention group. To that end, we conducted a model comparison for each observer, fitting psychometric curves to pre- and posttest performance at the trained and untrained locations. Trained and untrained locations were modeled separately, given that performance at different retinal locations may vary, and because we aimed to assess the change in performance at each location before and after training. Each model differed in which of the free parameters (λ, α, and β) could vary between the pre- and posttest fits. For example, in one model, only α could vary, so that the fits for the pre- and posttest at the trained location had to share the same λ and β, but could have different values for α. We tested eight models, comprising all combinations of free parameters, fit for the trained and untrained locations in each individual, including the null model, which did not allow any parameters to vary between the pre- and posttests. 
For each model we calculated goodness of fit using the corrected Akaike information criterion (AICc; Burnham & Anderson, 2004; J. E. Cavanaugh, 1997). This measure indicates which model best fits the data on an individual level, while penalizing for a greater number of free parameters. To assess the effects at the group level, and specifically to identify which model best fits the attention versus neutral groups for the trained versus untrained locations, we conducted an additional Bayesian analysis that uses the AICc value for each model across each individual at each location. Specifically, we used a hierarchical Bayesian method that estimates the probability of each model at the group level by treating the goodness-of-fit metric for each model as a random variable (Stephan, Penny, Daunizeau, Moran, & Friston, 2009). The result is a quantification of the relative probability (exceedance probability) of each model at the group level, for each location condition. Each model is assigned an exceedance probability between 0 and 1, and the probabilities sum to 1 across models. The model with the highest exceedance probability is considered the most likely to explain the data. This approach is adept at dealing with intersubject variability, and also benefits from utilizing all of the data across the psychometric function for each subject, in each test session, at each location, rather than assessing one parameter from a function fit for each condition independently, as in the frequentist approach. 
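A minimal sketch of the per-observer scoring step, assuming the Weibull likelihood above; the eight models correspond to the subsets of {α, β, λ} allowed to differ between the pre- and posttest fits. The placeholder likelihood values and variable names are ours:

import itertools

def aicc(neg_log_lik, k, n):
    # Corrected AIC (J. E. Cavanaugh, 1997): AIC plus a small-sample penalty.
    return 2 * k + 2 * neg_log_lik + (2 * k * (k + 1)) / (n - k - 1)

PARAMS = ("alpha", "beta", "lambda")

# The eight models: every subset of parameters allowed to differ between the
# pre- and posttest fits, from the null model (empty set) to the full model.
models = [c for r in range(4) for c in itertools.combinations(PARAMS, r)]

# Placeholder maximized negative log-likelihoods, one per model; in practice
# these come from jointly fitting pre- and posttest data under each model's
# shared/free parameter constraints (illustrative values only).
nll = {m: 100.0 - 2.5 * len(m) for m in models}

n_obs = 16  # data points per joint fit: 8 contrast levels x 2 sessions
scores = {m: aicc(nll[m], k=3 + len(m), n=n_obs) for m in models}
best = min(scores, key=scores.get)
print("best model lets these parameters vary:", best or ("none",))

The per-observer AICc values would then feed the group-level random-effects analysis of Stephan et al. (2009), which yields the exceedance probabilities reported below.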
Results of this analysis are shown in Figure 7. All exceedance probabilities were calculated within each group separately for each location, and thus are meaningful only relative to others within the same group and location condition. For the neutral group, at the trained locations the model allowing only α (threshold) to vary had the greatest probability, 42.86%. At the untrained locations, the most probable model allowed only λ (1 − asymptote) to vary between pre- and posttests, with an exceedance probability of 90.45%. This is consistent with our previous findings (Donovan et al., 2015), in which the neutral group showed improved asymptote at trained and untrained locations, but greater increases at trained locations. 
Figure 7
 
Exceedance probabilities for each model tested via group-level Bayesian model comparison. Higher values indicate greater probability of the data given each model. Models are distinguished by which parameters of the Weibull function can vary between pre- and posttest fits at each location for each subject.
For the attention group, the highest probability model allows α to vary between pre- and posttest, but β and λ must remain constant. This is true for both trained and untrained locations, with exceedance probabilities of 29.06% and 58.31%, respectively. This result confirms that in the attention group, improvements at the trained and untrained location both arise from a decrease in threshold, as changing only α between pre- and posttests best explains the data. 
These model comparisons provide converging evidence that training with endogenous cues elicits improvements in threshold at both the trained and untrained locations, while training with neutral cues improves threshold only at the trained location. Evidence for a change in asymptote (1 − λ) at the untrained location in the neutral group is consistent with our previous findings that with a neutral training protocol, some learning occurs at the highest contrast values at all locations. Notably, there was only a main effect of session, F(1, 24) = 30.254, p < 0.001, and there was not an interaction of session, location, and group for λ in our ANOVA. Further, there was no Location × Session interaction within the neutral group itself, F(1, 12) = 0.110, p > 0.1. 
Discussion
We find that endogenous attention facilitates transfer of orientation discrimination learning to untrained retinal locations. Training with endogenous attention precues yields improvements at intermediate contrast values, decreasing thresholds at trained and untrained locations. This effect was verified by both repeated-measures ANOVAs and group-level Bayesian model comparison. Because the groups differed only in the way attention was allocated during training—distributed in the neutral group or selective in the attention group—while the stimulus and task were constant, we can rule out the possible role of other factors that contribute to location transfer: length of training (Jeter et al., 2010), task precision (Jeter et al., 2009), sensory adaptation (Harris et al., 2012), sensory uncertainty of stimulus features (Yashar & Denison, 2017), exposure to stimuli at untrained locations prior to training (T. Zhang et al., 2010), and variability in task difficulty (Hung & Seitz, 2014). 
Endogenous spatial attention leading to transfer of threshold improvements is reminiscent of contrast gain, which is often an effect of endogenous spatial attention in the short term (Herrmann, Montaser-Kouhsari, Carrasco, & Heeger, 2010; X. Li, Lu, Tjan, Dosher, & Chu, 2008; Ling & Carrasco, 2006; Martinez-Trujillo & Treue, 2002; Morrone, Denti, & Spinelli, 2004; Pestilli, Ling, & Carrasco, 2009; Reynolds & Chelazzi, 2004). Notably, our previous study demonstrated that exogenous spatial attention facilitates location transfer via increased asymptotic performance (Donovan et al., 2015), which resembles response gain, a typical effect of exogenous attention in the short term (Herrmann et al., 2010; Pestilli & Carrasco, 2005; Pestilli et al., 2009; Pestilli, Viera, & Carrasco, 2007). This relation suggests that the type of attention manipulated during training, and its accompanying short-term improvements in performance, will over time lead to the same class of performance changes (i.e., response gain vs. contrast gain) after training is completed. What is most interesting, in our view, is that the change in behavior due to attention at the target location carries over, after training, to untrained locations, even though the effects of attention are local to the target location. Thus, the underlying neural circuits responsible for the sensory representations in early visual areas at the untrained locations were not directly altered during training. At this point, our results on location transfer with exogenous and endogenous spatial attention are merely descriptive, and we lack a computational theory to explain how each specific kind of location transfer arises from the two types of training. 
Notably, for endogenous and exogenous spatial attention, the relation between the size of the stimulus and the relative size of the attention field can induce either contrast gain or response gain (Herrmann et al., 2010), as instantiated in the Reynolds and Heeger (2009) normalization model of attention. Given that our stimuli were presented in a sparse display with no distracting stimuli and no placeholders, our manipulation of endogenous spatial attention resembles the scenarios that induce contrast gain in the short term: an attention field of the same size as, or larger than, the target stimulus. One possibility is that endogenous attention training with a stimulus display that encourages a small attention field and contains a relatively large stimulus would lead to response gain in the short term, and may thus result in location transfer via improved asymptote at the trained and untrained locations. This is an open question we will investigate in the future. In the meantime, our previous (Donovan et al., 2015) and current findings suggest that to improve contrast sensitivity across the contrast response function and generalize such benefits to different locations, we may want to train observers with both endogenous and exogenous attention. 
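For reference, the core computation of the normalization model of attention can be written, in simplified notation, as the attention-weighted stimulus drive divided by a normalization pool, where E is the stimulus drive, A the attention field, and σ a constant. This is our paraphrase of Reynolds and Heeger (2009), not an equation from the present study:

\begin{equation}R\left( {x,\theta } \right) = \frac{{A\left( {x,\theta } \right)E\left( {x,\theta } \right)}}{{\sigma + \sum\nolimits_{x^{\prime},\theta^{\prime}} A\left( {x^{\prime},\theta^{\prime}} \right)E\left( {x^{\prime},\theta^{\prime}} \right)}}\end{equation}

When the attention field is large relative to the stimulus, attention scales the numerator and the pool together, producing contrast gain; when it is small relative to the stimulus, it amplifies the numerator more than the pool, producing response gain.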
Participants were required to fixate a central point to ensure that stimuli always appeared at the intended retinal locations, and to avoid concomitant changes in perception due to eye movements. In particular, presaccadic enhancement and remapping have been shown to alter perceptual performance just before a saccade, both at the location at which the fovea will land and at the location to which a currently attended location will be remapped after the eye movement (Deubel, 2008; H.-H. Li, Barbot, & Carrasco, 2016; Montagnini & Castet, 2007; Ohl, Kuper, & Rolfs, 2017; Rolfs & Carrasco, 2012). There is evidence that the effects of presaccadic attention shifts are distinct from endogenous spatial attention: The presaccadic effect on contrast sensitivity is much faster than that of covert attention (Rolfs & Carrasco, 2012), and the presaccadic effect on spatial resolution reveals an automatic enhancement of high spatial frequencies (H.-H. Li et al., 2016), not present with endogenous covert attention (Barbot & Carrasco, 2017). Consistent with these differences, training with a presaccadic shift protocol does not enhance performance at untrained locations, even those that received the presaccadic enhancement due to remapping (Rolfs, Murray-Smith, & Carrasco, 2018). In contrast, our current findings show that training with valid endogenous attention improves thresholds at trained and untrained locations, thus providing further evidence for the distinction between presaccadic attention shifts and endogenous covert spatial attention. 
This study builds upon other investigations by our lab and collaborators that have isolated the effects of various types of attention on PL. Besides location transfer with exogenous attention (Donovan et al., 2015), one study showed that training with valid exogenous cues enables learning to occur when identical training with neutral cues is not sufficient to improve performance (Szpiro & Carrasco, 2015). Additionally, feature-based attention has been shown to markedly improve the efficacy of visual training in patients with damage to early visual cortex, effectively aiding the spatial generalization of performance improvements within a blind field or scotoma (M. R. Cavanaugh et al., 2017). This line of work has revealed that the manipulation of attention is a powerful tool to improve and generalize PL. 
As discussed in the Introduction, attention has been associated with PL in many contexts. One possibility is that the ability to efficiently allocate attention is improved through training. For example, performance in a dual task has been shown to no longer differ from performance in a single task after learning (Chirimuuta et al., 2007), and training on conventional video games (especially first-person shooters) is associated with benefits in a wide range of visual tasks that engage attention (for a review, see Bediou et al., 2018). It may be tempting to infer that our current findings are evidence that participants learn to better allocate their attention after training, specifically in the attention group. Notably, however, the neutral group was trained with neutral precues (i.e., the same condition in which they were tested), which required them to distribute attention to two locations on each trial. The attention group, trained with valid endogenous cues, had a relative advantage during training, as they could allocate attention to only one location, but perhaps a relative disadvantage during the posttest, as they were not tested on this type of attentional allocation. Despite this, the attention group exhibited a comparable decrease in thresholds at the trained and untrained locations, whereas the neutral group exhibited location specificity. Note that the improvements of observers in the attention group cannot be due to learning to better allocate attention per se, as they were tested in the neutral condition, which differed from their training in that they had to distribute attention to two locations instead of one. We interpret our findings to reflect that the selective allocation of attention during training improves perceptual discrimination across locations. 
The findings revealed by the present study are highly relevant to our understanding of VPL, as we isolate, for the first time, the influence of endogenous spatial attention on PL. Instead of a mere improvement in the efficacy of perceptual training (i.e., greater improvements in accuracy), we show that endogenous spatial attention specifically transfers improvements in threshold from trained to untrained locations. Given that many theories and models of VPL have speculated or inferred the role of attention, specifically using language that implies the involvement of endogenous spatial attention, the framework around these theories should take into account the precise mechanisms suggested by these results, as well as the few other studies that have isolated the role of other forms of attention. 
Acknowledgments
We would like to thank Rodrigo Delatorre and Ying “Joey” Zhou for their assistance in data collection, as well as the members of the Carrasco Lab for their helpful comments. The research was funded by the following sources: National Institutes of Health (NIH) 5T32-EY007136 (Vision Training Grant), NIH R01-EY016200 (to MC), and NIH R01-EY027401 (to MC). 
Commercial relationships: none. 
Corresponding author: Marisa Carrasco. 
Address: Department of Psychology and Center for Neural Science, New York University, New York, NY, USA. 
References
Ahissar, M., & Hochstein, S. (1997, May 22). Task difficulty and the specificity of perceptual learning. Nature, 387 (6631), 401–406, https://doi.org/10.1038/387401a0.
Ahissar, M., & Hochstein, S. (2004). The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Sciences, 8 (10), 457–464, https://doi.org/10.1016/j.tics.2004.08.011.
Ball, K., & Sekuler, R. (1982, November 12). A specific and enduring improvement in visual motion discrimination. Science, 218 (4573), 697–698.
Barbot, A., & Carrasco, M. (2017). Attention modifies spatial resolution according to task demands. Psychological Science, https://doi.org/10.1177/0956797616679634.
Barbot, A., Landy, M. S., & Carrasco, M. (2012). Differential effects of exogenous and endogenous attention on second-order texture contrast sensitivity. Journal of Vision, 12 (8): 6, 1–15, https://doi.org/10.1167/12.8.6.
Bartolucci, M., & Smith, A. T. (2011). Attentional modulation in visual cortex is modified during perceptual learning. Neuropsychologia, 49 (14), 3898–3907, https://doi.org/10.1016/j.neuropsychologia.2011.10.007.
Bediou, B., Adams, D. M., Mayer, R. E., Tipton, E., Green, C. S., & Bavelier, D. (2018). Meta-analysis of action video game impact on perceptual, attentional, and cognitive skills. Psychological Bulletin, 144 (1), 77–110, https://doi.org/10.1037/bul0000130.
Berardi, N., & Fiorentini, A. (1987). Interhemispheric transfer of visual information in humans: Spatial characteristics. The Journal of Physiology, 384, 633–647.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10 (4), 433–436.
Burnett, K. E., Close, A. C., d'Avossa, G., & Sapir, A. (2016). Spatial attention can be biased towards an expected dimension. Quarterly Journal of Experimental Psychology, 69 (11), 2218–2232, https://doi.org/10.1080/17470218.2015.1111916.
Burnham, K. P., & Anderson, D. R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33 (2), 261–304, https://doi.org/10.1177/0049124104268644.
Busse, L., Katzner, S., & Treue, S. (2008). Temporal dynamics of neuronal modulation during exogenous and endogenous shifts of visual attention in macaque area MT. Proceedings of the National Academy of Sciences, 105 (42), 16380.
Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research, 51 (13), 1484–1525, https://doi.org/10.1016/j.visres.2011.04.012.
Carrasco, M. (2014). Spatial attention: Perceptual modulation. In Kastner S. & Nobre A. C. (Eds.), The Oxford handbook of attention (pp. 183–230). New York, NY: Oxford University Press.
Carrasco, M., & Barbot, A. (2015). How attention affects spatial resolution. Cold Spring Harbor Symposium on Quantitative Biology, 79, 149–160, https://doi.org/10.1101/sqb.2014.79.024687.
Cavanaugh, J. E. (1997). Unifying the derivations for the Akaike and corrected Akaike information criteria. Statistics & Probability Letters, 33 (2), 201–208, https://doi.org/10.1016/S0167-7152(96)00128-9.
Cavanaugh, M. R., Barbot, A., Carrasco, M., & Huxlin, K. R. (2017). Feature-based attention potentiates recovery of fine direction discrimination in cortically blind patients. Neuropsychologia, https://doi.org/10.1016/j.neuropsychologia.2017.12.010.
Cavanaugh, M. R., Zhang, R., Melnick, M. D., Das, A., Roberts, M., Tadin, D.,… Huxlin, K. R. (2015). Visual recovery in cortical blindness is limited by high internal noise. Journal of Vision, 15 (10): 9, 1–18, https://doi.org/10.1167/15.10.9.
Chirimuuta, M., Burr, D., & Morrone, M. C. (2007). The role of perceptual learning on modality-specific visual attentional effects. Vision Research, 47 (1), 60–70, https://doi.org/10.1016/j.visres.2006.09.002.
Chowdhury, S. A., & DeAngelis, G. C. (2008). Fine discrimination training alters the causal contribution of macaque area MT to depth perception. Neuron, 60 (2), 367–377, https://doi.org/10.1016/j.neuron.2008.08.023.
Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3, 201, https://doi.org/10.1038/nrn755.
Crist, R. E., Kapadia, M. K., Westheimer, G., & Gilbert, C. D. (1997). Perceptual-learning of spatial localization: Specificity for orientation, position, and context. Journal of Neurophysiology, 78, 2889–2894.
Das, A., Tadin, D., & Huxlin, K. R. (2014). Beyond blindsight: Properties of visual relearning in cortically blind fields. The Journal of Neuroscience, 34 (35), 11652–11664, https://doi.org/10.1523/jneurosci.1076-14.2014.
DeLoss, D. J., Watanabe, T., & Andersen, G. J. (2015). Improving vision among older adults: Behavioral training to improve sight. Psychological Science, 26 (4), 456–466, https://doi.org/10.1177/0956797614567510.
Deubel, H. (2008). The time course of presaccadic attention shifts. Psychological Research, 72 (6), 630.
Dolan, R., Fink, G., Rolls, E., Booth, M., Holmes, A., Frackowiak, R., & Friston, K. (1997, October 9). How the brain learns to see objects and faces in an impoverished context. Nature, 389 (6651), 596.
Donovan, I., Szpiro, S., & Carrasco, M. (2015). Exogenous attention facilitates location transfer of perceptual learning. Journal of Vision, 15 (10): 11, 1–16, https://doi.org/10.1167/15.10.11. [PubMed] [Article]
Dosher, B. A., Han, S., & Lu, Z. L. (2010). Perceptual learning and attention: Reduction of object attention limitations with practice. Vision Research, 50 (4), 402–415, https://doi.org/10.1016/j.visres.2009.09.010.
Dosher, B. A., Jeter, P., Liu, J., & Lu, Z. L. (2013). An integrated reweighting theory of perceptual learning. Proceedings of the National Academy of Sciences, USA, 110 (33), 13678–13683, https://doi.org/10.1073/pnas.1312552110.
Dosher, B. A., & Lu, Z.-L. (2017). Visual perceptual learning and models. Annual Review of Vision Science, 3 (1), 343–363, https://doi.org/10.1146/annurev-vision-102016-061249.
Dugué, L., Merriam, E. P., Heeger, D. J., & Carrasco, M. (2017). Specific visual sub-regions of TPJ mediate reorienting of spatial attention. Cerebral Cortex, 28 (7), 2375–2390.
Fahle, M. (2009). Perceptual learning and sensomotor flexibility: Cortical plasticity under attentional control? Philosophical Transactions of the Royal Society of London: Series B, Biological Sciences, 364 (1515), 313–319, https://doi.org/10.1098/rstb.2008.0267.
Fahle, M., Edelman, S., & Poggio, T. (1995). Fast perceptual learning in hyperacuity. Vision Research, 35 (21), 3003–3013, https://doi.org/10.1016/0042-6989(95)00044-Z.
Fiorentini, A., & Berardi, N. (1980, September 4). Perceptual learning specific for orientation and spatial frequency. Nature, 287 (5777), 43–44.
Fiorentini, A., & Berardi, N. (1981). Learning in grating waveform discrimination: Specificity for orientation and spatial frequency. Vision Research, 21 (7), 1149–1158, https://doi.org/10.1016/0042-6989(81)90017-1.
Ghose, G. M., Yang, T., & Maunsell, J. H. (2002). Physiological correlates of perceptual learning in monkey V1 and V2. Journal of Neurophysiology, 87 (4), 1867–1888, https://doi.org/10.1152/jn.00690.2001.
Gilbert, C. D., Sigman, M., & Crist, R. E. (2001). The neural basis of perceptual learning. Neuron, 31 (5), 681–697, https://doi.org/10.1016/S0896-6273(01)00424-X.
Giordano, A. M., McElree, B., & Carrasco, M. (2009). On the automaticity and flexibility of covert attention: A speed-accuracy trade-off analysis. Journal of Vision, 9 (3): 30, 1–10, https://doi.org/10.1167/9.3.30. [PubMed] [Article]
Goldstone, R. L. (1998). Perceptual learning. Annual Review of Psychology, 49, 585–612, https://doi.org/10.1146/annurev.psych.49.1.585.
Gu, Y., Liu, S., Fetsch, C. R., Yang, Y., Fok, S., Sunkara, A.,… Angelaki, D. E. (2011). Perceptual learning reduces interneuronal correlations in macaque visual cortex. Neuron, 71 (4), 750–761, https://doi.org/10.1016/j.neuron.2011.06.015.
Harris, H., Gliksberg, M., & Sagi, D. (2012). Generalized perceptual learning in the absence of sensory adaptation. Current Biology, 22 (19), 1813–1817, https://doi.org/10.1016/j.cub.2012.07.059.
Herrmann, K., Montaser-Kouhsari, L., Carrasco, M., & Heeger, D. J. (2010). When size matters: Attention affects performance by contrast or response gain. Nature Neuroscience, 13 (12), 1554–1559, https://doi.org/10.1038/nn.2669.
Huang, T.-R., & Watanabe, T. (2012). Task attention facilitates learning of task-irrelevant stimuli. PLoS One, 7 (4), e35946, https://doi.org/10.1371/journal.pone.0035946.
Hung, S. C., & Seitz, A. R. (2014). Prolonged training at threshold promotes robust retinotopic specificity in perceptual learning. The Journal of Neuroscience, 34 (25), 8423–8431, https://doi.org/10.1523/jneurosci.0745-14.2014.
Huxlin, K. R., Martin, T., Kelly, K., Riley, M., Friedman, D. I., Burgin, W. S., & Hayhoe, M. (2009). Perceptual relearning of complex visual motion after V1 damage in humans. The Journal of Neuroscience, 29 (13), 3981–3991, https://doi.org/10.1523/jneurosci.4882-08.2009.
Ito, M., Westheimer, G., & Gilbert, C. D. (1998). Attention and perceptual learning modulate contextual influences on visual perception. Neuron, 20 (6), 1191–1197, https://doi.org/10.1016/S0896-6273(00)80499-7.
Jeter, P. E., Dosher, B. A., Liu, S. H., & Lu, Z. L. (2010). Specificity of perceptual learning increases with increased training. Vision Research, 50 (19), 1928–1940, https://doi.org/10.1016/j.visres.2010.06.016.
Jeter, P. E., Dosher, B. A., Petrov, A., & Lu, Z. L. (2009). Task precision at transfer determines specificity of perceptual learning. Journal of Vision, 9 (3): 1, 1–13, https://doi.org/10.1167/9.3.1. [PubMed] [Article]
Karni, A., & Sagi, D. (1991). Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences, USA, 88 (11), 4966–4970.
Karni, A., & Sagi, D. (1993, September 16). The time course of learning a visual skill. Nature, 365 (6443), 250–252, https://doi.org/10.1038/365250a0.
Law, C.-T., & Gold, J. I. (2008). Neural correlates of perceptual learning in a sensory-motor but not a sensory cortical area. Nature Neuroscience, 11 (4), 505–513, https://doi.org/10.1038/nn2070.
Levi, D. M. (2005). Perceptual learning in adults with amblyopia: A reevaluation of critical periods in human vision. Developmental Psychobiology, 46 (3), 222–232, https://doi.org/10.1002/dev.20050.
Levi, D. M., & Li, R. W. (2009). Perceptual learning as a potential treatment for amblyopia: A mini-review. Vision Research, 49 (21), 2535–2549, https://doi.org/10.1016/j.visres.2009.02.010.
Li, H.-H., Barbot, A., & Carrasco, M. (2016). Saccade preparation reshapes sensory tuning. Current Biology, 26 (12), 1564–1570, https://doi.org/10.1016/j.cub.2016.04.028.
Li, W., Piech, V., & Gilbert, C. D. (2004). Perceptual learning and top-down influences in primary visual cortex. Nature Neuroscience, 7 (6), 651–657, https://doi.org/10.1038/nn1255.
Li, X., Lu, Z.-L., Tjan, B. S., Dosher, B. A., & Chu, W. (2008). Blood oxygenation level-dependent contrast response functions identify mechanisms of covert attention in early visual areas. Proceedings of the National Academy of Sciences, USA, 105 (16), 6202–6207, https://doi.org/10.1073/pnas.0801390105.
Ling, S., & Carrasco, M. (2006). Sustained and transient covert attention enhance the signal via different contrast response functions. Vision Research, 46 (8–9), 1210–1220.
Liu, Z., & Weinshall, D. (2000). Mechanisms of generalization in perceptual learning. Vision Research, 40 (1), 97–109, https://doi.org/10.1016/S0042-6989(99)00140-6.
Lu, Z. L., Liu, J., & Dosher, B. A. (2009). Modeling mechanisms of perceptual learning with augmented Hebbian re-weighting. Vision Research, https://doi.org/10.1016/j.visres.2009.08.027.
Maniglia, M., & Seitz, A. R. (2018). Towards a whole brain model of perceptual learning. Current Opinion in Behavioral Sciences, 20, 47–55, https://doi.org/10.1016/j.cobeha.2017.10.004.
Martinez-Trujillo, J., & Treue, S. (2002). Attentional modulation strength in cortical area MT depends on stimulus contrast. Neuron, 35 (2), 365–370.
Meuwese, J. D., Post, R. A., Scholte, H. S., & Lamme, V. A. (2013). Does perceptual learning require consciousness or attention? Journal of Cognitive Neuroscience, 25 (10), 1579–1596.
Montagnini, A., & Castet, E. (2007). Spatiotemporal dynamics of visual attention during saccade preparation: Independence and coupling between attention and movement planning. Journal of Vision, 7 (14): 8, 1–16, https://doi.org/10.1167/7.14.8. [PubMed] [Article]
Morrone, M. C., Denti, V., & Spinelli, D. (2004). Different attentional resources modulate the gain mechanisms for color and luminance contrast. Vision Research, 44 (12), 1389–1401, https://doi.org/10.1016/j.visres.2003.10.014.
Mukai, I., Bahadur, K., Kesavabhotla, K., & Ungerleider, L. G. (2011). Exogenous and endogenous attention during perceptual learning differentially affect post-training target thresholds. Journal of Vision, 11 (1): 25, 1–15, https://doi.org/10.1167/11.1.25. [PubMed] [Article]
Mukai, I., Kim, D., Fukunaga, M., Japee, S., Marrett, S., & Ungerleider, L. G. (2007). Activations in visual and attention-related areas predict and correlate with the degree of perceptual learning. The Journal of Neuroscience, 27 (42), 11401–11411, https://doi.org/10.1523/jneurosci.3002-07.2007.
Nahum, M., Nelken, I., & Ahissar, M. (2009). Stimulus uncertainty and perceptual learning: Similar principles govern auditory and visual learning. Vision Research, https://doi.org/10.1016/j.visres.2009.09.004.
Ohl, S., Kuper, C., & Rolfs, M. (2017). Selective enhancement of orientation tuning before saccades. Journal of Vision, 17 (13): 2, 1–11, https://doi.org/10.1167/17.13.2. [PubMed] [Article]
Paffen, C. L., Verstraten, F. A., & Vidnyánszky, Z. (2008). Attention-based perceptual learning increases binocular rivalry suppression of irrelevant visual features. Journal of Vision, 8 (4): 25, 1–11, https://doi.org/10.1167/8.4.25. [PubMed] [Article]
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10 (4), 437–442.
Pestilli, F., & Carrasco, M. (2005). Attention enhances contrast sensitivity at cued and impairs it at uncued locations. Vision Research, 45 (14), 1867–1875, https://doi.org/10.1016/j.visres.2005.01.019.
Pestilli, F., Ling, S., & Carrasco, M. (2009). A population-coding model of attention's influence on contrast response: Estimating neural effects from psychophysical data. Vision Research, 49 (10), 1144–1153, https://doi.org/10.1016/j.visres.2008.09.018.
Pestilli, F., Viera, G., & Carrasco, M. (2007). How do attention and adaptation affect contrast sensitivity? Journal of Vision, 7 (7): 9, 1–12, https://doi.org/10.1167/7.7.9. [PubMed] [Article]
Petrov, A. A., Dosher, B. A., & Lu, Z. L. (2005). The dynamics of perceptual learning: An incremental reweighting model. Psychological Review, 112 (4), 715–743, https://doi.org/10.1037/0033-295X.112.4.715.
Polat, U., Ma-Naim, T., Belkin, M., & Sagi, D. (2004). Improving vision in adult amblyopia by perceptual learning. Proceedings of the National Academy of Sciences, USA, 101 (17), 6692–6697, https://doi.org/10.1073/pnas.0401200101.
Polat, U., Ma-Naim, T., & Spierer, A. (2009). Treatment of children with amblyopia by perceptual learning. Vision Research, 49 (21), 2599–2603, https://doi.org/10.1016/j.visres.2009.07.008.
Polat, U., Schor, C., Tong, J.-L., Zomet, A., Lev, M., Yehezkel, O.,… Levi, D. M. (2012). Training the brain to overcome the effect of aging on the human eye. Scientific Reports, 2, 278, https://doi.org/10.1038/srep00278.
Pouget, A., & Bavelier, D. (2007). Paying attention to neurons with discriminating taste. Neuron, 53 (4), 473–475, https://doi.org/10.1016/j.neuron.2007.02.004.
Reynolds, J. H., & Chelazzi, L. (2004). Attentional modulation of visual processing. Annual Review of Neuroscience, 27 (1), 611–647, https://doi.org/10.1146/annurev.neuro.26.041002.131039.
Reynolds, J. H., & Heeger, D. J. (2009). The normalization model of attention. Neuron, 61 (2), 168–185, https://doi.org/10.1016/j.neuron.2009.01.002.
Roelfsema, P. R., & van Ooyen, A. (2005). Attention-gated reinforcement learning of internal representations for classification. Neural Computation, 17 (10), 2176–2214, https://doi.org/10.1162/0899766054615699.
Roelfsema, P. R., van Ooyen, A., & Watanabe, T. (2010). Perceptual learning rules based on reinforcers and attention. Trends in Cognitive Sciences, 14 (2), 64–71, https://doi.org/10.1016/j.tics.2009.11.005.
Rolfs, M., & Carrasco, M. (2012). Rapid simultaneous enhancement of visual sensitivity and perceived contrast during saccade preparation. The Journal of Neuroscience, 32 (40), 13744–13752a, https://doi.org/10.1523/jneurosci.2676-12.2012.
Rolfs, M., Murray-Smith, N., & Carrasco, M. (2018). Perceptual learning while preparing saccades. Vision Research, https://doi.org/10.1016/j.visres.2017.11.009.
Sagi, D. (2011). Perceptual learning in vision research. Vision Research, 51 (13), 1552–1566, https://doi.org/10.1016/j.visres.2010.10.019.
Sahraie, A., Trevethan, C. T., MacLeod, M. J., Murray, A. D., Olson, J. A., & Weiskrantz, L. (2006). Increased sensitivity after repeated stimulation of residual spatial channels in blindsight. Proceedings of the National Academy of Sciences, USA, 103 (40), 14971–14976, https://doi.org/10.1073/pnas.0607073103.
Sasaki, Y., Nañez, J. E., & Watanabe, T. (2010). Advances in visual perceptual learning and plasticity. Nature Reviews Neuroscience, 11 (1), 53–60, https://doi.org/10.1038/nrn2737.
Sasaki, Y., Náñez, J. E., & Watanabe, T. (2012). Recent progress in perceptual learning research. Wiley Interdisciplinary Reviews: Cognitive Science, 3 (3), 293–299.
Schoups, A. A., Vogels, R., & Orban, G. A. (1995). Human perceptual learning in identifying the oblique orientation: Retinotopy, orientation specificity and monocularity. The Journal of Physiology, 483 (Pt 3), 797–810.
Seitz, A., & Watanabe, T. (2005). A unified model for perceptual learning. Trends in Cognitive Sciences, 9 (7), 329–334, https://doi.org/10.1016/j.tics.2005.05.010.
Seitz, A. R. (2017). Perceptual learning. Current Biology, 27 (13), R631–R636, https://doi.org/10.1016/j.cub.2017.05.053.
Seitz, A. R., Kim, D., & Watanabe, T. (2009). Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61 (5), 700–707, https://doi.org/10.1016/j.neuron.2009.01.016.
Shiu, L. P., & Pashler, H. (1992). Improvement in line orientation discrimination is retinally local but dependent on cognitive set. Perception and Psychophysics, 52, 582–588.
Sokal, R. R., & Rohlf, J. F. (1981). Biometry: The principles and practice of statistics in biological research (2nd ed.). San Francisco: W.H. Freeman.
Sowden, P. T., Rose, D., & Davies, I. R. (2002). Perceptual learning of luminance contrast detection: Specific for spatial frequency and retinal location but not orientation. Vision Research, 42 (10), 1249–1258, https://doi.org/10.1016/S0042-6989(02)00019-6.
Stephan, K. E., Penny, W. D., Daunizeau, J., Moran, R. J., & Friston, K. J. (2009). Bayesian model selection for group studies. NeuroImage, 46 (4), 1004–1017, https://doi.org/10.1016/j.neuroimage.2009.03.025.
Sterkin, A., Levy, Y., Pokroy, R., Lev, M., Levian, L., Doron, R.,… Polat, U. (2017). Vision improvement in pilots with presbyopia following perceptual learning. Vision Research, https://doi.org/10.1016/j.visres.2017.09.003.
Szpiro, S. F., & Carrasco, M. (2015). Exogenous attention enables perceptual learning. Psychological Science, 26 (12), 1854–1862, https://doi.org/10.1177/0956797615598976.
Szpiro, S. F. A., Spering, M., & Carrasco, M. (2014). Perceptual learning modifies untrained pursuit eye movements. Journal of Vision, 14 (8): 8, 1–13, https://doi.org/10.1167/14.8.8. [PubMed] [Article]
Szpiro, S. F. A., Wright, B. A., & Carrasco, M. (2014). Learning one task by interleaving practice with another task. Vision Research, 101, 118–124, https://doi.org/10.1016/j.visres.2014.06.004.
Tsushima, Y., Sasaki, Y., & Watanabe, T. (2006, December 15). Greater disruption due to failure of inhibitory control on an ambiguous distractor. Science, 314 (5806), 1786–1788, https://doi.org/10.1126/science.1133197.
Tsushima, Y., & Watanabe, T. (2009). Roles of attention in perceptual learning from perspectives of psychophysics and animal learning. Learning & Behavior, 37 (2), 126–132, https://doi.org/10.3758/LB.37.2.126.
Wang, R., Wang, J., Zhang, J.-Y., Xie, X.-Y., Yang, Y.-X., Luo, S.-H.,… Li, W. (2016). Perceptual learning at a conceptual level. The Journal of Neuroscience, 36 (7), 2238.
Wang, R., Zhang, J. Y., Klein, S. A., Levi, D. M., & Yu, C. (2012). Task relevancy and demand modulate double-training enabled transfer of perceptual learning. Vision Research, 61, 33–38, https://doi.org/10.1016/j.visres.2011.07.019.
Wang, R., Zhang, J. Y., Klein, S. A., Levi, D. M., & Yu, C. (2014). Vernier perceptual learning transfers to completely untrained retinal locations after double training: A “piggybacking” effect. Journal of Vision, 14 (13): 12, 1–10, https://doi.org/10.1167/14.13.12. [PubMed] [Article]
Watanabe, T., Nañez, J. E., & Sasaki, Y. (2001). Perceptual learning without perception. Nature, 413 (6858), 844–848, https://doi.org/10.1038/35101601.
Watanabe, T., Nañez, J. E., Sr., Koyama, S., Mukai, I., Liederman, J., & Sasaki, Y. (2002). Greater plasticity in lower-level than higher-level visual motion processing in a passive perceptual learning task. Nature Neuroscience, 5 (10), 1003–1009, https://doi.org/10.1038/nn915.
Watanabe, T., & Sasaki, Y. (2015). Perceptual learning: Toward a comprehensive theory. Annual Review of Psychology, 66, 197–221, https://doi.org/10.1146/annurev-psych-010814-015214.
White, A. L., Lunau, R., & Carrasco, M. (2014). The attentional effects of single cues and color singletons on visual sensitivity. Journal of Experimental Psychology: Human Perception and Performance, 40 (2), 639–652, https://doi.org/10.1037/a0033775.
Wright, B. A., Sabin, A. T., Zhang, Y., Marrone, N., & Fitzgerald, M. B. (2010). Enhancing perceptual learning by combining practice with periods of additional sensory stimulation. The Journal of Neuroscience, 30 (38), 12868.
Xi, J., Jia, W.-L., Feng, L.-X., Lu, Z.-L., & Huang, C.-B. (2014). Perceptual learning improves stereoacuity in amblyopia. Investigative Ophthalmology & Visual Science, 55 (4), 2384–2391, https://doi.org/10.1167/iovs.13-12627.
Xiao, L. Q., Zhang, J. Y., Wang, R., Klein, S. A., Levi, D. M., & Yu, C. (2008). Complete transfer of perceptual learning across retinal locations enabled by double training. Current Biology, 18 (24), 1922–1926, https://doi.org/10.1016/j.cub.2008.10.030.
Xie, X.-Y., & Yu, C. (2017). Double training downshifts the threshold vs. noise contrast (TvC) functions with perceptual learning and transfer. Vision Research, https://doi.org/10.1016/j.visres.2017.12.004.
Yashar, A., & Carrasco, M. (2016). Rapid and long-lasting learning of feature binding. Cognition, 154, 130–138, https://doi.org/10.1016/j.cognition.2016.05.019.
Yashar, A., Chen, J., & Carrasco, M. (2015). Rapid and long-lasting reduction of crowding through training. Journal of Vision, 15 (10): 15, 1–15, https://doi.org/10.1167/15.10.15. [PubMed] [Article]
Yashar, A., & Denison, R. N. (2017). Feature reliability determines specificity and transfer of perceptual learning in orientation search. PLoS Computational Biology, 13 (12), e1005882, https://doi.org/10.1371/journal.pcbi.1005882.
Yotsumoto, Y., & Watanabe, T. (2008). Defining a link between perceptual learning and attention. PLoS Biology, 6 (8), e221, https://doi.org/10.1371/journal.pbio.0060221.
Yotsumoto, Y., Watanabe, T., & Sasaki, Y. (2008). Different dynamics of performance and brain activation in the time course of perceptual learning. Neuron, 57 (6), 827–833, https://doi.org/10.1016/j.neuron.2008.02.034.
Zhang, G.-L., Cong, L.-J., Song, Y., & Yu, C. (2013). ERP P1-N1 changes associated with Vernier perceptual learning and its location specificity and transfer. Journal of Vision, 13 (4): 19, 1–13, https://doi.org/10.1167/13.4.19. [PubMed] [Article]
Zhang, J.-Y., Cong, L.-J., Klein, S. A., Levi, D. M., & Yu, C. (2014). Perceptual learning improves adult amblyopic vision through rule-based cognitive compensation. Investigative Ophthalmology & Visual Science, 55 (4), 2020–2030.
Zhang, T., Xiao, L. Q., Klein, S. A., Levi, D. M., & Yu, C. (2010). Decoupling location specificity from perceptual learning of orientation discrimination. Vision Research, 50 (4), 368–374, https://doi.org/10.1016/j.visres.2009.08.024.
Footnotes
1  Except for three participants, who completed the study within 6 or 7 days.
Figure 1
 
Trial sequence. Participants fixated on a central cross. A precue was presented for 500 ms and was either two lines pointing to the two possible target locations (neutral) or a line pointing to the location of the upcoming target (valid). After a 400-ms ISI, the target, a Gabor patch tilted either clockwise or counterclockwise, was presented for 60 ms. After a 300-ms delay, a response cue appeared for 300 ms, indicating the location at which the target had appeared. Following response-cue offset, participants were given 900 ms to report the orientation of the target: clockwise or counterclockwise.
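For readers who want to prototype the paradigm, the sketch below walks through this trial timeline. It is an illustrative Python mock-up, not the study's original code (the cited Psychophysics Toolbox, Brainard, 1997; Pelli, 1997, implies a MATLAB implementation); the show() helper and the fixation duration are assumptions.

```python
# Illustrative mock-up of one trial from Figure 1. Display calls are replaced
# by print statements so the sketch runs as-is; time.sleep() stands in for the
# frame-accurate presentation a real experiment would require.
import random
import time

# Durations from the Figure 1 caption, in seconds
PRECUE_DUR, ISI_DUR, TARGET_DUR = 0.500, 0.400, 0.060
DELAY_DUR, RESPCUE_DUR, RESPONSE_WINDOW = 0.300, 0.300, 0.900

def show(event: str, duration: float) -> None:
    """Stand-in for drawing a display and holding it for `duration` seconds."""
    print(f"{event} ({duration * 1000:.0f} ms)")
    time.sleep(duration)

def run_trial(valid_precue: bool) -> None:
    target_loc = random.choice(["location A", "location B"])  # two possible targets
    tilt = random.choice(["CW", "CCW"])                       # Gabor orientation
    show("fixation cross", 0.5)  # duration assumed; not given in the caption
    # Valid precue points to the target; neutral precue points to both locations
    show(f"precue -> {target_loc if valid_precue else 'both locations'}", PRECUE_DUR)
    show("blank ISI", ISI_DUR)
    show(f"Gabor ({tilt}) at {target_loc}", TARGET_DUR)
    show("blank delay", DELAY_DUR)
    show(f"response cue at {target_loc}", RESPCUE_DUR)
    show("response window (report CW vs. CCW)", RESPONSE_WINDOW)

if __name__ == "__main__":
    run_trial(valid_precue=True)
```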
Figure 2
 
Training and testing schedule. Participants were tested at four locations, two locations per block, before and after 3 days of training at two diagonal locations. Half of the participants were trained with all valid precues (attention group), and half with neutral precues (neutral group). All participants received only neutral precues in the pre- and posttests.
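The five-session structure in Figure 2 can be summarized as a small configuration. The Python sketch below is illustrative only: the one-session-per-day mapping follows Footnote 1, and all labels are placeholders rather than the authors' terms.

```python
# Illustrative encoding of the Figure 2 schedule. Day numbers assume one
# session per day (most participants finished in 5 days; see Footnote 1).
pretest = {"day": 1, "phase": "pretest", "locations": "all four, two per block",
           "precue": "neutral"}
training = [{"day": d, "phase": "training", "locations": "two diagonal (trained)",
             "precue": "valid (attention group) or neutral (neutral group)"}
            for d in (2, 3, 4)]
posttest = {"day": 5, "phase": "posttest", "locations": "all four, two per block",
            "precue": "neutral"}

for session in [pretest, *training, posttest]:
    print(session)
```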
Figure 3
 
Accuracy (percent correct) across sessions. Each data point is one block. Light colors correspond to the neutral group, and dark colors correspond to the attention group. Blue corresponds to trained locations, and red corresponds to untrained locations. Error bars indicate standard error of the mean. In the pre- and posttests, location types alternated between blocks. For illustration, trained locations are plotted before untrained locations; the actual block order was randomized between subjects and kept the same across the pre- and posttest within subjects.
Figure 4
 
Psychometric functions for bootstrapped data across subjects within the neutral and attention groups. Light-colored curves show pretest performance; dark-colored curves show posttest performance. Blue curves are trained locations; red curves are untrained locations. Shaded regions represent 95% bootstrapped confidence intervals for α and 1 − λ.
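As a point of reference for the curves in Figure 4, the sketch below implements the conventional Weibull psychometric function for a two-choice task, with α the contrast threshold and 1 − λ the upper asymptote as named in the caption. It is a minimal illustration assuming the standard parameterization, not necessarily the authors' exact form, and the parameter values in the example are made up.

```python
# Conventional Weibull psychometric function for a two-alternative task;
# a sketch, not necessarily the authors' exact parameterization.
import numpy as np

def weibull(c, alpha, beta, lambda_, gamma=0.5):
    """P(correct) at contrast c: alpha = threshold, beta = slope,
    lambda_ = lapse rate (upper asymptote 1 - lambda_), gamma = chance level."""
    return gamma + (1.0 - gamma - lambda_) * (1.0 - np.exp(-(c / alpha) ** beta))

contrasts = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])  # 2%-64%, as in the study
print(weibull(contrasts, alpha=0.10, beta=2.0, lambda_=0.02))  # illustrative values
```

The shaded 95% bands in Figure 4 can be obtained by resampling trials with replacement, refitting the function to each resample, and taking the 2.5th and 97.5th percentiles of the fitted α and 1 − λ.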
Figure 5
 
Threshold values (α) for each group, location type, and session. Error bars indicate standard error of the mean. In the neutral group, threshold improvement was specific to trained locations. In the attention group, threshold improvement transferred to untrained locations.
Figure 6
 
Threshold values (α) for each group, location type, and session, relative to the pretest. Light colors correspond to the neutral group, while dark colors correspond to the attention group. Blue corresponds to trained locations, while red corresponds to untrained locations. Error bars indicate standard error of the mean. In the neutral group, threshold improvement was specific to trained locations. In the attention group, threshold improvement transferred to untrained locations.
Figure 7
 
Exceedance probabilities for each model tested via group-level Bayesian model comparison. Higher values indicate a greater posterior probability that a given model is the most likely among those tested, given the group data. Models are distinguished by which parameters of the Weibull function were allowed to vary between pre- and posttest fits at each location for each subject.
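For the comparison in Figure 7, Stephan et al. (2009) formalize exceedance probabilities with a variational-Bayes scheme; the sketch below is a simplified stand-in that estimates the same quantity with a Gibbs sampler over the same hierarchical model. Using −AICc/2 as a per-subject log-evidence proxy (cf. Cavanaugh, 1997; Burnham & Anderson, 2004) is an assumption for illustration, not a description of the authors' pipeline.

```python
# Sketch of group-level random-effects Bayesian model comparison in the spirit
# of Stephan et al. (2009), via Gibbs sampling rather than variational Bayes.
# `log_evidence` holds one approximate log model evidence (e.g., -0.5 * AICc)
# per subject and per candidate Weibull model.
import numpy as np

def exceedance_probs(log_evidence: np.ndarray, n_samples: int = 20000,
                     alpha0: float = 1.0, seed: int = 0) -> np.ndarray:
    """log_evidence: (n_subjects, n_models). Returns one phi_k per model."""
    rng = np.random.default_rng(seed)
    n_subj, n_models = log_evidence.shape
    # Per-subject likelihood of each model, shifted for numerical stability
    lik = np.exp(log_evidence - log_evidence.max(axis=1, keepdims=True))
    r = np.full(n_models, 1.0 / n_models)  # population model frequencies
    wins = np.zeros(n_models)
    for _ in range(n_samples):  # (a real analysis would discard burn-in samples)
        # Sample each subject's model assignment given current frequencies
        post = lik * r
        post /= post.sum(axis=1, keepdims=True)
        counts = np.zeros(n_models)
        for n in range(n_subj):
            counts[rng.choice(n_models, p=post[n])] += 1
        # Sample frequencies given assignments, then tally which model leads
        r = rng.dirichlet(alpha0 + counts)
        wins[np.argmax(r)] += 1
    return wins / n_samples  # exceedance probabilities
```

In this framing, a model's exceedance probability is the posterior probability that it is the most frequent model in the population, which is what the bars in Figure 7 report.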