Temporal attention selectively enhances target features
Author Affiliations
  • Luis D. Ramirez
    Graduate Program for Neuroscience, Boston University, Boston, MA, USA
    Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
    Center for Systems Neuroscience, Boston University, Boston, MA, USA
    luisdr@bu.edu
  • Joshua J. Foster
    Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
    Center for Systems Neuroscience, Boston University, Boston, MA, USA
    jjfoster@bu.edu
  • Sam Ling
    Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
    Center for Systems Neuroscience, Boston University, Boston, MA, USA
    samling@bu.edu
Journal of Vision June 2021, Vol.21, 6. doi:https://doi.org/10.1167/jov.21.6.6
      Luis D. Ramirez, Joshua J. Foster, Sam Ling; Temporal attention selectively enhances target features. Journal of Vision 2021;21(6):6. https://doi.org/10.1167/jov.21.6.6.

      Download citation file:


      © ARVO (1962-2015); The Authors (2016-present)

      ×
  • Supplements
Abstract

Temporal attention, the allocation of attention to a moment in time, improves perception. Here, we examined the computational mechanism by which temporal attention improves perception, under a divisive normalization framework. Under this framework, attention can improve perception of a target signal in three ways: stimulus enhancement (increasing gain across all sensory channels), signal enhancement (selectively increasing gain in channels that encode the target stimulus), or external noise exclusion (reducing the gain in channels that encode irrelevant features). These mechanisms make diverging predictions when a target is embedded in varying levels of noise: stimulus enhancement improves performance only when noise is low, signal enhancement improves performance at all noise intensities, and external noise exclusion improves performance only when noise is high. To date, temporal attention studies have used noise-free displays. Therefore, it is unclear whether temporal attention acts via stimulus enhancement (amplifying both target features and noise) or signal enhancement (selectively amplifying target features) because both mechanisms predict improved performance in the absence of noise. To tease these mechanisms apart, we manipulated temporal attention using an auditory cue while parametrically varying external noise in a fine-orientation discrimination task. Temporal attention improved perceptual thresholds across all noise levels. Formal model comparisons revealed that this cuing effect was best accounted for by a combination of signal enhancement and stimulus enhancement, suggesting that temporal attention improves perceptual performance, in part, by selectively increasing gain for target features.

Introduction
Our ability to appropriately respond to dynamic and often noisy environments involves the recruitment of temporal attention, the allocation of attention to a moment in time (Denison, Heeger, & Carrasco, 2017; Griffin, Miniussi, & Nobre, 2001; Lange, Krämer, & Röder, 2006; Milliken, Lupiáñez, Roberts, & Stevanovski, 2003; Nobre & Rohenkohl, 2014; Zokaei, Board, Manohar, & Nobre, 2019). A growing body of evidence has demonstrated that temporal attention improves perceptual detection and discriminability (Correa, Lupiáñez, & Tudela, 2005; Correa, Lupiáñez, Milliken, & Tudela, 2004; Coull, Frith, Büchel, & Nobre, 2000; Fernández, Denison, & Carrasco, 2019; Griffin, Miniussi, & Nobre, 2001; Rohenkohl, Cravo, Wyart, & Nobre, 2012), which is thought to be mediated by improvements in early visual processing (Correa, Lupiáñez, Madrid, & Tudela, 2006; Correa, Sanabria, Spence, Tudela, & Lupiáñez, 2006; Denison, Yuval-Greenberg, & Carrasco, 2019; Rolke & Hofmann, 2007). However, the computational mechanisms subserving these improvements in target detection and discriminability due to temporal attention remain unclear (Nobre & Rohenkohl, 2014; Nobre & Van Ede, 2018; Weinbach & Henik, 2012). 
Discriminating a target stimulus in noise is a classic signal detection problem, where performance is governed by the ratio between the intensity of the signal and the intensity of the noise, both in the environment and in the visual system itself (Pelli & Farell, 1999). Within this framework, attention might improve the signal-to-noise ratio in several ways (Lu & Dosher, 2008). First, attention could increase the gain of all visual features, amplifying both relevant signal and irrelevant noise via “stimulus enhancement” (Dosher & Lu, 2000b; Lu & Dosher, 1998). Second, attention could selectively increase the gain of the target signal, leaving any irrelevant noise untouched via “signal enhancement.” The terms “stimulus enhancement” and “signal enhancement” have been used interchangeably in past work to refer to what we call stimulus enhancement, a wholesale increase in gain that will amplify target features and noise (e.g., Dosher & Lu, 2000a; Ling & Carrasco, 2006; Lu & Dosher, 1998). Dosher and Lu (2000b) rightly noted that stimulus enhancement might be the better term when both the signal and noise are being modulated. In this paper, we follow their lead. Thus, we use stimulus enhancement to refer to a wholesale increase in gain, and signal enhancement to refer to an increase in gain for target features. Finally, attention could improve visual processing by suppressing irrelevant noise, thereby improving the signal-to-noise ratio via “external noise exclusion” (Dosher & Lu, 2000a; Dosher, Liu, Blair, & Lu, 2004). 
Stimulus enhancement, signal enhancement, and noise exclusion each have distinct signatures depending on the amount of noise present in a display (Figure 1). Notably, stimulus enhancement and signal enhancement both predict an improvement in perceptual sensitivity in the absence of noise, as has been reported in past studies of temporal attention (Correa, Lupiáñez, et al., 2006; Denison, Heeger, & Carrasco, 2017; Fernández, Denison, & Carrasco, 2019; Nobre, Correa, & Coull, 2007; Nobre & Van Ede, 2018; Shalev, Nobre, & van Ede, 2019). However, because temporal attention studies have typically used noise-free displays, it is unclear whether temporal attention improves perception solely via stimulus enhancement or signal enhancement, or some combination of the proposed mechanisms. In this study, we parametrically varied external noise to directly test which mechanism supports temporal attention. 
Figure 1. Simulated Perceptual Thresholds Under Predicted Attention Mechanisms Generated from the Modified Normalization Model. Red curves represent perceptual thresholds across increasing levels of noise (0–35% RMS contrast) in the absence of attention. Blue curves represent perceptual thresholds across increasing levels of noise under attention. Note how each attentional mechanism distinctly improves signal contrast thresholds across increasing levels of noise. Stimulus enhancement evokes a threshold reduction primarily at low noise levels, signal enhancement evokes threshold reduction across all noise levels, whereas noise exclusion evokes threshold reduction primarily at high noise levels. Note that stimulus enhancement and signal enhancement cannot be distinguished in the absence of noise under this framework (gray rectangle).
Several studies have manipulated external noise to test how spatial attention modulates perception (Dosher et al., 2004; Ling, Liu, & Carrasco, 2009; Lu & Dosher, 1998; Lu & Dosher, 2008; Pratte, Ling, Swisher, & Tong, 2013). These studies typically used a variant of signal detection models, such as the Perceptual Template Model (PTM), to model signal contrast thresholds as a function of external noise. However, although the PTM dovetails nicely with behavioral data, it assumes that the effect of external noise is additive with the signal, which recent work has shown is not the case (Baker & Vilidaite, 2014; Baldwin, Baker, & Hess, 2016; Hansen & Hess, 2012). Instead, the effect of noise is better accounted for by gain control mechanisms, whereby the signal and noise mutually suppress each other. This mutual suppression is thought to arise from divisive normalization (Brouwer & Heeger, 2011; Carandini & Heeger, 2012; Freeman et al., 2002; Ling & Blake, 2012; Morrone, Burr, & Maffei, 1982). Therefore, we adopted a model in which the interaction between the signal and noise is governed by normalization. Under normalization models, the neural response to an item is determined by the balance between excitatory and inhibitory neural activity: the response to a stimulus is regulated by its own drive, as well as by the responses of adjacent neural populations (Carandini & Heeger, 2012). This framework has long been deployed to account for interactions within visual cortex (Heeger, 1992) and has more recently been proposed to play a role in the modulatory effects of attention (Bloem & Ling, 2019; Ling & Blake, 2012; Reynolds & Heeger, 2009; Ruff & Cohen, 2017). Within this framework, attention can improve our ability to detect signals in noise by tipping the balance between neural excitation and inhibition. Our variant of the normalization model of attention incorporates the mechanisms of attention proposed by perceptual template models to generate distinct, testable hypotheses for how temporal attention enhances perceptual sensitivity: stimulus enhancement, signal enhancement, and noise exclusion (see Figure 1). 
Under stimulus enhancement, attention boosts the neural representation of the stimulus in its entirety—both relevant target signal and irrelevant distractor noise. Thus, stimulus enhancement improves target discrimination primarily when external noise is low because this mechanism also amplifies noise. Under signal enhancement, attention solely boosts the target signal, thereby improving target discrimination even when the target is embedded in noise. A final possibility is that temporal attention elicits external noise exclusion—reducing the neural representation of noise, primarily when noise is high. However, on its own, noise exclusion cannot explain the finding that temporal attention improves performance in the absence of noise (Denison, Heeger, & Carrasco, 2017; Fernández, Denison, & Carrasco, 2019; Nobre, Correa, & Coull, 2007; Nobre & Van Ede, 2018; Shalev, Nobre, & van Ede, 2019). Nevertheless, we consider external noise exclusion in our model comparisons because temporal attention might evoke external noise exclusion in combination with signal enhancement. 
In this study, we combine the predicted mechanisms of attention from perceptual template models with the visual cortical interactions described under normalization models to test whether temporal cues improve visual sensitivity through stimulus enhancement, signal enhancement, noise exclusion, or a combination of mechanisms. Participants performed a fine-orientation discrimination task on a target grating that appeared randomly in time and was masked by white noise whose contrast was parametrically manipulated. In half the trials of this task, participants had no knowledge of the target grating's onset; this served as our uncued (unattended) condition. This was compared to our cued (attended) condition, where participants were provided an auditory cue that immediately preceded the target grating—providing precise temporal information about the target signal's impending onset and the moment in time a participant should attend. To assess the effect of the temporal cue on perception, we measured signal contrast thresholds under cued and uncued conditions across multiple contrast levels of the noise mask. We found that temporal attention boosts perceptual sensitivity across all noise levels. Moreover, in an additional experiment, we find that these effects are not solely explained by a reduction in temporal uncertainty from cuing. Taken together, our results suggest that temporal attention improves perception, in part, via signal enhancement, selectively enhancing processing of target features. 
Methods
Participants
Twelve healthy adult volunteers between the ages of 18 and 24 (7 women; age = 20.92 ± 1.14, mean ± standard error of the mean [SEM]) participated in the experiment. One subject was removed from data analyses because their perceptual thresholds were outliers and did not increase monotonically with noise contrast. A subject was classified as an outlier if their average signal contrast threshold, collapsed across noise contrast levels for each condition, was greater than 2.5 standard deviations from the group mean. 
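For concreteness, the following is a minimal MATLAB illustration of this exclusion rule, assuming a hypothetical thresholds array of size subjects × noise levels × cue conditions; the variable names are ours, not the authors' analysis code.

% Hypothetical illustration of the 2.5-SD exclusion rule described above.
% thresholds: nSubjects x nNoiseLevels x nConditions array of contrast thresholds.
meanPerCondition = squeeze(mean(thresholds, 2));           % collapse across noise levels
groupMean = mean(meanPerCondition, 1);                     % group mean, per condition
groupSD   = std(meanPerCondition, 0, 1);                   % group SD, per condition
isOutlier = any(abs(meanPerCondition - groupMean) > 2.5 * groupSD, 2);
keptSubjects = find(~isOutlier);                           % subjects retained for analysis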
All participants had normal or corrected-to-normal vision. A minimum sample size of 10 was chosen to be comparable to other studies that have used a similar masking paradigm (Dosher & Lu, 2000a; Dosher et al., 2004; Lu & Dosher, 1998; Lu & Dosher, 2000). Additionally, we ran six participants in a control experiment (see below for more information), composed of three subjects from the main experiment and three newly recruited subjects (3 women; age = 27.66 ± 2.11, mean ± SEM). For two of the newly recruited subjects in this control experiment, we collected data across all 10 external noise levels used in the main experiment during the orientation discrimination portion of the control experiment, and we therefore report their signal contrast thresholds in the model fitting results for the main experiment (these two subjects are counted in the sample size of 12 for the main experiment). All participants provided written consent and were reimbursed for their time. The Boston University Institutional Review Board approved the study. 
Apparatus and stimuli
Stimuli were generated using MATLAB 2017a (The Math Works Inc., 2007) in conjunction with the Psychophysics Toolbox (Brainard, 1997), rendered on a Mac Mini running Ubuntu 16.04 LTS. Stimuli were presented on a gamma-corrected CRT monitor (1280- × 1024-pixel resolution; 75 Hz refresh rate), with no additional light sources in the room. Participants were seated comfortably with their heads in a chin rest at a viewing distance of 57 cm from the screen. The background of the display was uniform gray (luminance = 49 cd/m2). 
Task procedure
Participants performed a fine-orientation discrimination task in which they reported the tilt of a target grating (spatial frequency = 6 cycles/degree, fixed spatial phase, diameter = 4 degrees, orientation = ±2 degrees from vertical) embedded in a dynamic Gaussian white noise mask (diameter = 4 degrees, changing at 10 Hz, noise elements subtending 0.2 degrees; Figure 2a). We parametrically manipulated the contrast of this noise mask from trial to trial, selecting one of 10 noise contrast levels evenly spaced on a log scale between 0% and 34.66% root mean square (RMS) contrast (Figure 2b). Each trial began with the onset of the dynamic Gaussian white noise mask at fixation. The noise mask was present for the full duration of the trial (jittered between 4.5 and 4.7 seconds). Participants were instructed to maintain steady fixation throughout each trial. The target grating was presented for 100 ms within the noise mask, at one of the following timepoints within a trial: 1 second, 1.6 seconds, 2.8 seconds, or 4.0 seconds. Importantly, participants had no knowledge of how these timepoints were generated or of the number of possible timepoints at which the target grating could appear. In half of the trials, participants were presented an auditory temporal cue that immediately preceded the target grating (cued trials). This temporal cue was 100% valid and swept from 262 Hz (C4) to 880 Hz (A5) over 500 ms, providing time to deploy attention to the moment the target grating appeared. In the other half of trials, no auditory cue was presented (uncued trials). In both conditions, participants reported whether the target was tilted clockwise or counterclockwise from vertical, following target offset. To emphasize accuracy over response time, participants had no time limit in the response window; trials did not proceed until a response was recorded. Feedback was provided at the end of each trial for 250 ms, followed by a 750-ms inter-trial interval. Feedback consisted of a change in the color of the fixation dot from white to green (correct response), red (incorrect response), or gray (wrong key press). 
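To make the noise manipulation concrete, below is a minimal MATLAB sketch of how the 10 noise levels and a single noise frame could be generated. It assumes, based on the subset of levels reported for the control experiment (0%, 1.44%, 4.76%, 10.53%, and 34.66%), that the nine nonzero levels were log-spaced from 1.44% to 34.66% RMS contrast; this reconstruction and all variable names are ours, not the authors' stimulus code.

% Hypothetical reconstruction of the ten noise levels: 0% plus nine
% log-spaced values from 1.44% to 34.66% RMS contrast (an assumption
% based on the subset of levels listed for the control experiment).
noiseLevels = [0, logspace(log10(0.0144), log10(0.3466), 9)];

% One frame of Gaussian white noise at a given RMS contrast
% (SD of luminance divided by mean luminance), on a mean-gray background
% in normalized 0-1 luminance units; a new frame would be drawn every
% 100 ms to produce the 10-Hz dynamic mask.
rmsContrast = noiseLevels(6);
frame = 0.5 * (1 + rmsContrast * randn(128, 128));
frame = min(max(frame, 0), 1);   % clip to the displayable range (slightly lowers nominal RMS at high levels)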
Figure 2. Masking Paradigm and Experimental Design. (A) Trial sequence: Each trial began with fixation, followed by the onset of the mask display after 500 ms, which remained on for the entirety of the trial (roughly 4500 ms with jitter). Within a trial, the target grating randomly appeared for 100 ms. In half the trials, this target was immediately preceded by an auditory cue (500 ms), providing precise temporal information about the target's onset. Subjects reported the target grating's orientation after target offset and were provided feedback at the end of each trial, which was followed by an inter-trial interval (ITI) preceding the next trial. (B) Examples of experimental stimuli: target gratings masked in six of ten levels of external noise (Gaussian white noise) used throughout this study.
Prior to the main blocks of the task, participants completed a training block that contained all conditions randomly interleaved (2 attention conditions × 10 noise mask levels × 2 target orientations). We included this training block to ensure that participants were familiar with the timing of events in a trial for both cued and uncued conditions. Participants were informed before training that the auditory cue was 100% valid and immediately preceded the target grating. 
Participants completed 2 to 3 sessions of the task in total, where each session consisted of 800 trials (40 trials per condition). Trials from each condition were interleaved with their order randomized in each experimental session. A break was provided every 40 trials. We used an adaptive staircasing procedure, QUEST (Watson & Pelli, 1983), to estimate contrast thresholds for discriminating the target grating's orientation in each noise mask level and attentional condition. This resulted in a total of 20 independent staircases (2 attentional conditions × 10 noise contrast levels) set to a performance level of 70% accuracy (d′ = 0.74). Additionally, all staircases operated continuously across sessions, each receiving 40 trials in each session. If any staircase had not converged by the end of a session (operationalized as the standard deviation of the threshold distribution being above 0.1), the subject completed an additional session until all staircases met our criterion for convergence. Most subjects completed 2 to 3 sessions to satisfy this criterion, resulting in 80 or 120 trials per condition for each subject. 
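For reference, the sketch below illustrates how a single QUEST staircase with the convergence criterion described above could be run using the Psychophysics Toolbox Quest routines; the prior, the log10-contrast scale, and the runTrial function are illustrative assumptions rather than the authors' settings.

% One QUEST staircase using Psychophysics Toolbox routines; parameter
% values and the runTrial function are illustrative placeholders.
tGuess   = log10(0.05);   % initial threshold guess (log10 contrast; assumed scale)
tGuessSd = 2;             % prior standard deviation
pCorrect = 0.70;          % target performance level (d' = 0.74)
beta = 3.5; delta = 0.01; gamma = 0.5;   % psychometric slope, lapse rate, 2AFC guess rate
q = QuestCreate(tGuess, tGuessSd, pCorrect, beta, delta, gamma);

for trial = 1:40                                  % 40 trials per staircase per session
    testContrast = 10 .^ QuestQuantile(q);        % contrast recommended for this trial
    response = runTrial(testContrast);            % hypothetical trial function, returns 1 (correct) or 0
    q = QuestUpdate(q, log10(testContrast), response);
end
hasConverged = QuestSd(q) < 0.1;                  % convergence criterion described in the text
thresholdEstimate = 10 .^ QuestMean(q);           % current threshold estimate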
Model fitting procedure
To determine which attention mechanism best characterized the observed attention effect, we first fit the reduced normalization model (Equation 1) to each subject's signal contrast thresholds from the uncued condition. This model is essentially a modified Naka-Rushton:  
\begin{equation}d^{\prime} = d^{\prime}_{max} \times \left( \frac{c_S^n}{c_S^n + c_N^n + c_{50}^n} \right)\end{equation}
(1)
where d′ represents discriminability or perceptual sensitivity; d′max, the maximum perceptual sensitivity; cS, the contrast of the signal (the target grating); cN, the contrast of the noise mask; c50, the semi-saturation point; and n, an exponent governing the dynamic range (a nonlinear transducer). The parameters that represent attention mechanisms—stimulus enhancement, signal enhancement, and external noise exclusion—are excluded from this reduced model to establish a baseline in the absence of attention. Solving for the observer's signal contrast threshold in this reduced model generates predicted threshold versus contrast curves (Equation 2; Blakemore & Campbell, 1969).  
\begin{equation}c_S = \left( \frac{d^{\prime} \times \left( c_N^n + c_{50}^n \right)}{d^{\prime}_{max} - d^{\prime}} \right)^{1/n}\end{equation}
(2)
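For clarity, the rearrangement from Equation 1 to Equation 2 is a single algebraic step:
\begin{equation*}d^{\prime}\left(c_S^n + c_N^n + c_{50}^n\right) = d^{\prime}_{max}\,c_S^n \;\Rightarrow\; c_S^n = \frac{d^{\prime}\left(c_N^n + c_{50}^n\right)}{d^{\prime}_{max} - d^{\prime}} \;\Rightarrow\; c_S = \left(\frac{d^{\prime}\left(c_N^n + c_{50}^n\right)}{d^{\prime}_{max} - d^{\prime}}\right)^{1/n}.\end{equation*}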
 
Using nonlinear regression, we fit each subject's signal contrast thresholds in the uncued condition with this reduced model. Initial values for d′max, c50, and n were chosen through a series of grid searches for the starting values that produced the lowest sum of squared errors; the parameters were then estimated using the fmincon function in MATLAB. Next, we fit variants of the modified normalization model to the measured signal contrast thresholds from the cued condition. Each variant of the model allowed a different attentional coefficient, or combination of attentional coefficients, to vary while fixing d′max, c50, and n to the values estimated from the reduced (baseline) normalization model. The full normalization model, including all attention mechanisms, is expressed as follows (Equation 3):  
\begin{equation}d^{\prime} = d^{\prime}_{max} \times \left( \frac{A_{St} \times A_S \times c_S^n}{A_{St} \times A_S \times c_S^n + A_{St} \times A_N \times c_N^n + c_{50}^n} \right)\end{equation}
(3)
ASt is the stimulus enhancement coefficient, acting on both the signal, cS, and the noise, cN. AS is the signal enhancement coefficient, acting solely on the signal. Finally, AN is the noise exclusion coefficient, acting strictly on the external noise. All attention coefficients were constrained to values between 0 and 5, where a value of 0 completely suppresses the response to a stimulus component (the signal, cS, or the noise, cN, depending on the coefficient), a value of 1 produces no attentional modulation relative to the reduced model, and values greater than 1 enhance a stimulus component. Solving for signal contrast thresholds results in the following expression (Equation 4):  
\begin{equation}c_S = \left( \frac{d^{\prime} \times \left( A_{St} \times A_N \times c_N^n + c_{50}^n \right)}{A_{St} \times A_S \times \left( d^{\prime}_{max} - d^{\prime} \right)} \right)^{1/n}\end{equation}
(4)
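The qualitative predictions in Figure 1 follow directly from Equation 4. The MATLAB sketch below illustrates them with arbitrary parameter values and attention coefficients chosen for illustration only; none of these values are the fitted estimates reported in the Results.

% Illustrative simulation of Equation 4: predicted signal contrast
% thresholds across noise contrast for each attention mechanism.
% All parameter and coefficient values are arbitrary illustrations.
dPrime = 0.74; dMax = 3; c50 = 0.06; n = 2;
cN = linspace(0, 0.35, 100);                        % noise RMS contrast

predictCS = @(ASt, AS, AN) ...
    ((dPrime .* (ASt .* AN .* cN.^n + c50^n)) ./ ...
     (ASt .* AS .* (dMax - dPrime))) .^ (1 ./ n);

baseline  = predictCS(1,   1,   1);                 % no attentional modulation
stimEnh   = predictCS(1.5, 1,   1);                 % stimulus enhancement (ASt > 1)
sigEnh    = predictCS(1,   1.5, 1);                 % signal enhancement (AS > 1)
noiseExcl = predictCS(1,   1,   0.5);               % external noise exclusion (AN < 1)

plot(cN, baseline, 'r', cN, stimEnh, 'b--', cN, sigEnh, 'b-', cN, noiseExcl, 'b:');
xlabel('Noise contrast (RMS)'); ylabel('Signal contrast threshold');
legend({'no attention', 'stimulus enhancement', 'signal enhancement', 'noise exclusion'});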
 
Accounting for each attention mechanism and each combination of attention mechanisms resulted in a total of six additional variants of the modified normalization model, which were fit to each subject's data from the cued condition using the fmincon function in MATLAB. 
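As a concrete illustration, the sketch below fits one such variant (stimulus enhancement plus signal enhancement) by constrained least squares. The reduced-model parameters (dPrime, dMax, c50, n) are carried over from the previous sketch, cuedThresh and noiseC are hypothetical data vectors over the 10 noise levels, and the starting values are our own choices rather than the authors' grid-search results.

% Fit the attention coefficients ASt = A(1) and AS = A(2) to the
% cued-condition thresholds, holding dMax, c50, and n fixed at the
% uncued (reduced-model) estimates. AN is fixed at 1 in this variant.
model = @(A, cN) ((dPrime .* (A(1) .* cN.^n + c50^n)) ./ ...
                  (A(1) .* A(2) .* (dMax - dPrime))) .^ (1 ./ n);
sse   = @(A) sum((cuedThresh - model(A, noiseC)).^2);   % sum of squared errors

A0 = [1 1];                          % start from no attentional modulation
lb = [0 0]; ub = [5 5];              % coefficient bounds stated in the text
opts = optimoptions('fmincon', 'Display', 'off');
Ahat = fmincon(sse, A0, [], [], [], [], lb, ub, [], opts);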
To evaluate which mechanisms could most parsimoniously account for our data, we used a corrected version of the Akaike Information Criterion (AICc; Akaike, 1974; Cavanaugh, 1997). This metric accounts for the number of observations and free parameters in a model to estimate the relative amount of information loss; the lower the AICc value, the better a given model explains the data. For each subject, we computed the difference between each model's AICc value and the minimum AICc value across models (ΔAICc); the better a model, the closer its ΔAICc should be to zero on average across subjects. 
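The sketch below shows the standard least-squares form of the AICc and the ΔAICc comparison; the exact likelihood formulation used by the authors is not specified in the text, and the residual sums of squares (rss1, rss2, rss3) are hypothetical placeholders.

% Corrected AIC for a least-squares fit (standard formulation).
% rss: residual sum of squares; k: number of free parameters;
% N: number of observations (here, the 10 noise levels).
aicc = @(rss, k, N) N .* log(rss ./ N) + 2 .* k + (2 .* k .* (k + 1)) ./ (N - k - 1);

% Compare candidate models for one subject; lower deltaAICc is better.
AICcVals  = [aicc(rss1, 1, 10), aicc(rss2, 2, 10), aicc(rss3, 3, 10)];
deltaAICc = AICcVals - min(AICcVals);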
Control experiment
We conducted a control experiment to test whether the cuing effect could be explained by decreased temporal uncertainty about when the target grating appeared. Participants (n = 6) performed a detection task, in which they reported the presence or absence of a grating. The stimulus parameters and sequence of trial events in the detection task were identical to those of the orientation discrimination task. The probability that the target grating was present or absent on a given trial was drawn from a uniform discrete distribution. As in the orientation discrimination task, the auditory cue was present in half the trials. Participants performed this task at five of the 10 external noise contrasts used in the main experiment (0%, 1.44%, 4.76%, 10.53%, and 34.66% RMS contrast), spanning the full range of noise contrasts used in that experiment. The contrast of the target grating was set to each participant's signal contrast thresholds obtained in the orientation discrimination task. Three of the six participants had taken part in the main experiment, so their signal contrast thresholds were already available from that experiment. The remaining participants completed sessions of the fine-orientation discrimination task used in the main experiment until the standard deviation of the signal contrast threshold distribution for each staircase was below 0.1. For two of the six subjects in this control experiment, we collected data across the original 10 external noise levels in the orientation discrimination portion of this control experiment, and thus include their signal contrast thresholds in the model fitting procedure for the main study. 
All six subjects in the control experiment completed at least two sessions of the fine-orientation discrimination task. One subject completed a third session because their staircases had not yet converged after the second session (i.e., the standard deviation of the signal contrast threshold distribution for each staircase was not yet below 0.1). 
Results
We found that signal contrast thresholds were lower in the cued condition than in the uncued condition across all levels of the noise mask contrast (Figure 3a). Figure 3b shows the percent increase in signal contrast thresholds between the cued and uncued conditions across noise mask contrast levels. 
Figure 3. Average Perceptual Thresholds and Cuing Effect in the Fine-orientation Discrimination Task. (A) Average perceptual thresholds across increasing levels of noise and temporal cue presence (N = 11). The red curve represents thresholds in the absence of the temporal cue, whereas the blue curve represents thresholds under the presence of the temporal cue. Thresholds in the cued condition are enhanced across all levels of noise. (B) Average improvement in contrast sensitivity between attentional conditions, expressed as a percent increase between the cued and uncued condition. Error bars represent SEM.
To test which mechanism of attention best accounted for the temporal cuing effect across noise levels, we fit a family of normalization models to signal contrast thresholds for each subject (see Methods, Model fitting procedure). We found that the reduced model (with all attention coefficients set to 1) fit the data from uncued trials well, both within and across subjects (R2 range = 0.5726–0.9366; average R2 = 0.839 ± 0.036; c50 = 0.063 ± 0.016; n = 1.775 ± 0.494; \(d^{\prime}_{max}\) = 3.405 ± 0.333; mean ± SEM). Unsurprisingly, the baseline (reduced) model fit the cued data poorly across subjects (average R2 = 0.371 ± 0.101; mean ± SEM), suggesting that a model without attention mechanisms is insufficient to explain the cued condition's data (Figure 4). 
The average ΔAICc across all subjects revealed that a combination of stimulus enhancement and signal enhancement is the winning model on average (Figure 5), followed closely by signal enhancement alone (ΔAICc values: AST and AS = 3.207 ± 0.787; AS = 3.308 ± 1.589; AST and AN = 6.393 ± 1.770; AS and AN = 6.3939 ± 1.770; AST = 6.658 ± 2.961; AN = 11.984 ± 2.422; baseline = 12.083 ± 2.853; mean ± SEM). Thus, our results suggest that stimulus enhancement alone does not account for the data best. Instead, our results demonstrate that temporal attention recruits signal enhancement in addition to stimulus enhancement. 
Figure 4. Normalization Model Fitting Results. Each plot represents an individual subject's signal contrast threshold data for each noise contrast level (black and gray dots represent cued and uncued data, respectively) and each variant of the modified normalization model of attention fit to the data from the cued condition (colored lines). Solid lines in each plot represent the winning model according to the lowest ΔAICc value for that subject. Baseline is the model with no attention coefficients/mechanisms fit to the data from the cued condition. AST represents stimulus enhancement, AS represents signal enhancement, and AN represents external noise exclusion. X-axis labels and tick values are identical across all subplots. Subplot titles are color-coded to match the individual model comparison results presented in Figure 5.
Is the temporal cuing effect explained by a reduction in temporal uncertainty?
Our results suggest that temporal attention improves fine-orientation discrimination through a combination of stimulus enhancement and signal enhancement. However, another possibility is that the temporal cue improved performance by reducing uncertainty about the moment at which the target grating appeared (Pelli, 1985). Because the temporal cue perfectly predicted when the grating would appear, the cue may have improved performance simply by enabling participants to disregard irrelevant moments in time. Indeed, some attentional benefits, particularly for spatial attention, have been attributed to a reduction in uncertainty (e.g., Gould, Wolfgang, & Smith, 2007; Solomon, Lavie, & Morgan, 1997). In a control experiment, we therefore tested whether there was substantial temporal uncertainty in our main experiment, that is, whether participants sometimes confused the noise for the signal, as the temporal uncertainty account posits. Our main experiment involved a very fine 2AFC orientation discrimination (±2 degrees from vertical), from which we assessed signal contrast thresholds. We reasoned that, because the discrimination was so difficult, the resulting threshold contrasts were all quite readily visible. To test the uncertainty-reduction account, we asked participants to perform a detection task at the signal contrasts used in the fine-orientation discrimination task, allowing us to assess (1) whether these stimuli were truly sometimes confused with the noise, and (2) whether the cue reduced this uncertainty. If there was substantial temporal uncertainty, then detectability of the targets would be poor, and the cue should improve detection performance (Carrasco, Penpeci-Talgar, & Eckstein, 2000). 
Six participants (3 from the main study and 3 additional participants) completed the detection task (see Methods, Participants). As in the main study, we found that signal contrast thresholds increased with noise contrast (Figure 6b), F(4,50) = 40.43, p < 0.001, and were lower in the cued condition than in the uncued condition, F(1,50) = 9.62, p = 0.0032. Furthermore, there was no interaction between noise level and cue condition, F(4,50) = 0.13, p = 0.9722, indicating that the size of the cuing effect did not scale with noise contrast, as was also the case in our main experiment. 
In the detection task, we fixed the contrast of the grating in each condition to the signal contrast thresholds estimated in the fine orientation discrimination task for each participant. Importantly, we found that detection accuracy (Figure 6a) was high in both the cued and uncued conditions (cued = 93.14% ± 2.25%; uncued = 92.67% ± 2.03%). We conducted a factorial ANOVA to test for main effects of cue (temporal uncertainty) on accuracy. Critically, we found that the temporal cue did not improve detection performance (main effect of cue: F(1,110) = 0.12, p = 0.7337; main effect of noise: F(4,110) = 0.36, p = 0.8383; interaction: F(4,110) = 0.93, p = 0.4504). These results show that the target gratings were easily detected, and the temporal cue did not improve detection performance, suggesting that temporal uncertainty did not entirely drive the observed cuing effect in the discrimination task. 
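For reference, a two-way (cue × noise) ANOVA of this kind can be run in MATLAB as sketched below; the anovan call is standard, but the data layout (acc, cueLabel, and noiseLabel as vectors with one entry per subject × condition cell) is an illustrative assumption, not the authors' analysis script.

% Two-way ANOVA on detection accuracy with factors cue (cued/uncued)
% and noise level, including their interaction.
[p, tbl] = anovan(acc, {cueLabel, noiseLabel}, ...
                  'model', 'interaction', ...
                  'varnames', {'cue', 'noise'});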
Discussion
To date, it has remained unclear whether temporal attention increases gain for all aspects of a stimulus (stimulus enhancement) or selectively increases gain for target features (signal enhancement) to improve perception (Denison, Heeger, & Carrasco, 2017; Fernández, Denison, & Carrasco, 2019; Nobre, Correa, & Coull, 2007; Nobre & Rohenkohl, 2014). In this study, we parametrically varied the contrast of a noise mask—an equivalent noise approach previously used to investigate mechanisms of spatial attention (Dosher & Lu, 2000b; Lu & Dosher, 1998; Lu & Dosher, 2000; Lu & Dosher, 2005)—to tease apart the mechanisms of temporal attention, under a normalization framework. We found that a temporal cue reduced signal contrast thresholds in an orientation discrimination task across all levels of external noise (see Figure 3). Our modeling results revealed that this effect was best described by a combination of signal enhancement and stimulus enhancement, with signal enhancement alone achieving a similar result (see Figure 5). Therefore, our results provide evidence against the possibility that temporal attention improves perception solely by increasing visual gain in a non-selective manner (i.e. stimulus enhancement). Instead, our results suggest that temporal attention selectively increases gain for a target feature in addition to increasing gain in general (signal enhancement and stimulus enhancement). 
Figure 5. Model Comparison Results. Average fitting results for each variant of the normalization model (N = 11). Individual subject points are jittered horizontally for better visualization. Baseline is the normalization model with no attention coefficients (mechanisms) fit to the data from the cued condition. AST represents stimulus enhancement, AS represents signal enhancement, and AN represents external noise exclusion. A combination of stimulus enhancement and signal enhancement had the lowest ΔAICc value on average, while signal enhancement alone closely tailed this result. Error bars represent SEM.
Signal enhancement implies that observers are able to “select” the relevant signal without also boosting the noise. The degree to which this selection is possible depends on how separable the signal and noise are in feature space. If the signal and noise are very similar, such that they activate the same sensory channels, then signal enhancement will also increase the gain of the noise, generating little-to-no benefit to discriminability when noise is high; in this case, signal enhancement becomes stimulus enhancement. However, if the signal and the noise activate neural populations with little overlap, then signal enhancement can boost the signal representation with little-to-no boost in the noise representation. In our experiment, the signal was defined by the orientations and spatial frequency of the target gratings. Although our broadband noise masks certainly contributed energy to the sensory channels tuned to the target gratings, a broadband mask has relatively little energy within the target channels and primarily impairs discriminability through cross-channel suppression (Baker & Vilidaite, 2014). We speculate that if the signal and noise had been more separable in our experiment (e.g., if we had filtered the target grating's spatial frequency out of the noise), a pure signal enhancement mechanism might have dominated our modeling results. Alternatively, the combination of signal enhancement and stimulus enhancement that we observed may indicate that noise in the sensory channels tuned to the signal was enhanced in tandem with the signal, a potential by-product of signal enhancement under this framework, given that the noise did contribute some energy to the target channels. 
An additional possibility as to why we observed a combination of signal enhancement and stimulus enhancement may be that our temporal cue engaged multiple processes. We manipulated temporal attention using a temporal orienting auditory cue that swept in pitch over 500 ms, preceding the target stimulus. Temporal orienting cues are commonly used in studies of temporal attention (Correa et al., 2004; Correa, Lupiáñez, et al., 2006; Coull et al., 2000; Denison, Heeger, & Carrasco, 2017; Denison, Yuval-Greenberg, & Carrasco, 2019; Fernández, Denison, & Carrasco, 2019; Griffin, Miniussi, & Nobre, 2001; Nobre, 2001). However, whereas temporal orienting cues allow observers to voluntarily deploy endogenous temporal attention, our cue, which appeared at a random moment in time for observers, may have also triggered a reflexive increase in alertness or arousal (Weinbach & Henik, 2012). Recent work has begun to tease apart the effects of endogenous and exogenous (i.e. reflexive) temporal attention (Lawrence & Klein, 2013), with some studies suggesting that endogenous and exogenous temporal attention have dissociable effects on perception (McCormick, Redden, Lawrence, & Klein, 2018; Rohenkohl, Coull, & Nobre, 2011). Although we speculate that our temporal cue engaged both endogenous and exogenous temporal attention, further work is needed to test whether signal enhancement and stimulus enhancement effects are specifically linked with endogenous and exogenous orienting of temporal attention, respectively. 
In a control experiment, we considered whether our temporal cuing effect could be explained by a reduction in temporal uncertainty, such that observers were better able to exclude irrelevant moments in time from their decisions in the cued condition than in the uncued condition. We reasoned that if participants were uncertain about when the target grating appeared, a temporal cue would improve performance in a detection task (Carrasco, Penpeci-Talgar, & Eckstein, 2000). Therefore, we asked observers to report the presence or absence of a target grating across various levels of external noise. For each level of external noise, the target grating contrast was set to the signal contrast threshold measured in the discrimination task. We found that target detection accuracy was high across all levels of noise, and that our temporal cue did not improve target detection performance (see Figure 6). In other words, the suprathreshold target gratings were readily detected in both the cued and uncued conditions, suggesting that observers had little uncertainty about when the target grating appeared in the detection task, and presumably in the orientation discrimination task as well. Nonetheless, we acknowledge a caveat: different decision strategies may be used for detection and discrimination tasks (Solomon, 2002), making the link between our detection and discrimination results nontrivial. 
Figure 6. Cuing Did Not Improve Detection of Target Gratings. (A) Average accuracy (n = 6) in the detection task across noise mask levels and cue conditions. There was no effect of the cue on detectability across all noise levels, ruling out an uncertainty account for our results. (B) Average perceptual thresholds across increasing levels of noise for each attentional condition in the fine-orientation discrimination task (n = 6). The red curve represents thresholds in the absence of the temporal cue, whereas the blue curve represents thresholds under the presence of the temporal cue. Thresholds in the cued condition are enhanced across all levels of noise. Error bars represent SEM.
In conclusion, we used a masking paradigm and a normalization framework to test which mechanisms support temporal attention. Under this framework, temporal attention can improve visual sensitivity through stimulus enhancement—amplifying everything attention is directed toward; signal enhancement—selectively enhancing just the signal and leaving irrelevant noise untouched; or external noise exclusion—leaving the signal untouched and actively suppressing irrelevant noise. Because previous studies of temporal attention have not manipulated external noise, it has remained unclear whether temporal attention improves perception by increasing gain for all aspects of a stimulus via stimulus enhancement or by selectively increasing gain for target features via signal enhancement. Here, we found that temporal attention recruits both signal enhancement and stimulus enhancement, indicating that temporal attention, in part, selectively enhances the processing of target features. 
Acknowledgments
The authors thank Rachel Denison, her laboratory, and the Ling Lab for their invaluable feedback. 
Funded by National Institutes of Health Grant EY028163 to S. Ling. 
Commercial relationships: none. 
Corresponding authors: Luis D. Ramirez, Sam Ling. 
Emails: luisdr@bu.edu, samling@bu.edu. 
Address: 677 Beacon St. Room 315, Boston, MA 02215, USA. 
References
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723. [CrossRef]
Baker, D. H., & Vilidaite, G. (2014). Broadband noise masks suppress neural responses to narrowband stimuli. Frontiers in Psychology, 5, 1–9.
Baldwin, A. S., Baker, D. H., & Hess, R. F. (2016). What do contrast threshold equivalent noise studies actually measure? noise vs. nonlinearity in different masking paradigms. PLoS One, 11(3), 1–25. [CrossRef]
Blakemore, C., & Campbell, F. W. (1969). On the existence of neurones in the human visual system selectively sensitive to the orientation and size of retinal images. The Journal of Physiology, 203(1), 237–260. [CrossRef]
Bloem, I. M., & Ling, S. (2019). Normalization governs attentional modulation within human visual cortex. Nature Communications, 10(1), 1–10. [CrossRef]
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436. [CrossRef]
Brouwer, G. J., & Heeger, D. J. (2011). Cross-orientation suppression in human visual cortex. Journal of Neurophysiology, 106(5), 2108–2119. [CrossRef]
Carandini, M., & Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1), 51–62. [CrossRef]
Carrasco, M., Penpeci-Talgar, C., & Eckstein, M. (2000). Spatial covert attention increases contrast sensitivity across the csf: support for signal enhancement. Vision Research, 40(10–12), 1203–1215. [CrossRef]
Cavanaugh, J. E. (1997). Unifying the derivations for the Akaike and Corrected Akaike Information Criteria. Statistics and Probability Letters, 33(2), 201–208. [CrossRef]
Correa, Á., Lupiáñez, J., Madrid, E., & Tudela, P. (2006). Temporal attention enhances early visual processing: a review and new evidence from event-related potentials. Brain Research, 1076(1), 116–128. [CrossRef]
Correa, Á., Lupiáñez, J., Milliken, B., & Tudela, P. (2004). Endogenous temporal orienting of attention in detection and discrimination tasks. Perception and Psychophysics, 66(2), 264–278. [CrossRef]
Correa, Á., Lupiáñez, J., & Tudela, P. (2005). Attentional preparation based on temporal expectancy modulates processing at the perceptual level. Psychonomic Bulletin and Review, 12(2), 328–334. [CrossRef]
Correa, Á., Sanabria, D., Spence, C., Tudela, P., & Lupiáñez, J. (2006). Selective temporal attention enhances the temporal resolution of visual perception: evidence from a temporal order judgment task. Brain Research, 1070(1), 202–205. [CrossRef]
Coull, J. T., Frith, C. D., Büchel, C., & Nobre, A. C. (2000). Orienting attention in time: behavioural and neuroanatomical distinction between exogenous and endogenous shifts. Neuropsychologia, 38(6), 808–819. [CrossRef]
Denison, R. N., Heeger, D. J., & Carrasco, M. (2017). Attention flexibly trades off across points in time. Psychonomic Bulletin and Review, 24(4), 1142–1151. [CrossRef]
Denison, R. N., Yuval-Greenberg, S., & Carrasco, M. (2019). Directing voluntary temporal attention increases fixational stability. Journal of Neuroscience, 39(2), 353–363. [CrossRef]
Dosher, B. A., Liu, S. H., Blair, N., & Lu, Z. L. (2004). The spatial window of the perceptual template and endogenous attention. Vision Research, 44(12), 1257–1271. [CrossRef]
Dosher, B. A., & Lu, Z. L. (2000a). Mechanisms of perceptual attention in precuing of location. Vision Research, 40(10–12), 1269–1292. [CrossRef]
Dosher, B. A., & Lu, Z. L. (2000b). Noise exclusion in spatial attention. Psychological Science, 11(2), 139–146. [CrossRef]
Fernández, A., Denison, R. N., & Carrasco, M. (2019). Temporal attention improves perception similarly at foveal and parafoveal locations. Journal of Vision, 19(1), 1–10. [CrossRef]
Freeman, T. C. B., Durand, S., Kiper, D. C., & Carandini, M. (2002). Suppression without inhibition in visual cortex. Neuron, 35(4), 759–771. [CrossRef]
Gould, I. C., Wolfgang, B. J., & Smith, P. L. (2007). Spatial uncertainty explains exogenous and endogenous attentional cuing effects in visual signal detection. Journal of Vision, 7(13), 4. [CrossRef]
Griffin, I. C., Miniussi, C., & Nobre, A. C. (2001). Orienting attention in time. Frontiers in Bioscience: A Journal and Virtual Library, 6(1), D660–D671. [CrossRef]
Hansen, B. C., & Hess, R. F. (2012). On the effectiveness of noise masks: naturalistic vs. un-naturalistic image statistics. Vision Research, 60, 101–113. [CrossRef]
Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9(2), 181–197. [CrossRef]
Lange, K., Krämer, U. M., & Röder, B. (2006). Attending points in time and space. Experimental Brain Research, 173(1), 130–140. [CrossRef]
Lawrence, M. A., & Klein, R. M. (2013). Isolating exogenous and endogenous modes of temporal attention. Journal of Experimental Psychology: General, 142(2), 560–572. [CrossRef]
Ling, S., & Blake, R. (2012). Normalization regulates competition for visual awareness. Neuron, 75(3), 531–540. [CrossRef]
Ling, S., & Carrasco, M. (2006). Sustained and transient covert attention enhance the signal via different contrast response functions. Vision Research, 46(8–9), 1210–1220. [CrossRef]
Ling, S., Liu, T., & Carrasco, M. (2009). How spatial and feature-based attention affect the gain and tuning of population responses. Vision Research, 49(10), 1194–1204. [CrossRef]
Lu, Z. L., & Dosher, B. A. (1998). External noise distinguishes attention mechanisms. Vision Research, 38(9), 1183–1198. [CrossRef]
Lu, Z. L., & Dosher, B. A. (2000). Spatial attention: different mechanisms for central and peripheral temporal precues? Journal of Experimental Psychology: Human Perception and Performance, 26(5), 1534–1548. [CrossRef]
Lu, Z. L., & Dosher, B. A. (2005). External noise distinguishes mechanisms of attention. In Neurobiology of Attention, 448–453. New York, NY: Elsevier.
Lu, Z. L., & Dosher, B. A. (2008). Characterizing observers using external noise and observer models: assessing internal representations with external noise. Psychological Review, 115(1), 44–82. [CrossRef]
McCormick, C. R., Redden, R. S., Lawrence, M. A., & Klein, R. M. (2018). The independence of endogenous and exogenous temporal attention. Attention, Perception, & Psychophysics, 80(8), 1885–1891. [CrossRef]
Milliken, B., Lupiáñez, J., Roberts, M., & Stevanovski, B. (2003). Orienting in space and time: joint contributions to exogenous spatial cuing effects. Psychonomic Bulletin and Review, 10(4), 877–883. [CrossRef]
Morrone, M. C., Burr, D. C., & Maffei, L. (1982). Functional implications of cross-orientation inhibition of cortical visual cells. I. Neurophysiological evidence. Proceedings of the Royal Society of London - Biological Sciences, 216(1204), 335–354. [CrossRef]
Nobre, A. C. (2001). Orienting attention to instants in time. Neuropsychologia, 39(12), 1317–1328. [CrossRef]
Nobre, A. C., Correa, Á., & Coull, J. T. (2007). The hazards of time. Current Opinion in Neurobiology, 17(4), 465–470. [CrossRef]
Nobre, A. C., & Van Ede, F. (2018). Anticipated moments: temporal structure in attention. Nature Reviews Neuroscience, 19(1), 34–48. [CrossRef]
Nobre, A. C., & Rohenkohl, G. (2014). Time for the fourth dimension in attention. The Oxford Handbook of Attention, 1(March), 56–75.
Pelli, D. G. (1985). Uncertainty explains many aspects of visual contrast detection and discrimination. Journal of the Optical Society of America A, 2(9), 1508. [CrossRef]
Pelli, D. G., & Farell, B. (1999). Why use noise? Journal of the Optical Society of America A, 16(3), 647. [CrossRef]
Pratte, M. S., Ling, S., Swisher, J. D., & Tong, F. (2013). How attention extracts objects from noise. Journal of Neurophysiology, 110(6), 1346–1356. [CrossRef]
Reynolds, J. H., & Heeger, D. J. (2009). The normalization model of attention. Neuron, 61(2), 168–185. [CrossRef]
Rohenkohl, G., Coull, J. T., & Nobre, A. C. (2011). Behavioural dissociation between exogenous and endogenous temporal orienting of attention. PLoS One, 6(1), 1–5. [CrossRef]
Rohenkohl, G., Cravo, A. M., Wyart, V., & Nobre, A. C. (2012). Temporal expectation improves the quality of sensory information. Journal of Neuroscience, 32(24), 8424–8428. [CrossRef]
Rolke, B., & Hofmann, P. (2007). Temporal uncertainty degrades perceptual processing. Psychonomic Bulletin and Review, 14(3), 522–526. [CrossRef]
Ruff, D. A., & Cohen, M. R. (2017). A normalization model suggests that attention changes the weighting of inputs between visual areas. Proceedings of the National Academy of Sciences of the United States of America, 114(20), E4085–E4094. [CrossRef]
Shalev, N., Nobre, A. C., & van Ede, F. (2019). Time for what? Breaking down temporal anticipation. Trends in Neurosciences, 42(6), 373–374. [CrossRef]
Solomon, J. A. (2002). Noise reveals visual mechanisms of detection and discrimination. Journal of Vision, 2(1), 105–120. [CrossRef]
Solomon, J. A., Lavie, N., & Morgan, M. J. (1997). The contrast discrimination function: spatial cuing effects. Journal of the Optical Society of America. A, Optics, Image Science and Vision, 14(9), 2443–2448. [CrossRef]
The Math Works Inc. (2007). Matlab 2017b. Natick, MA: MathWorks Inc.
Watson, A. B., & Pelli, D. G. (1983). QUEST: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33(2), 113–120. [CrossRef]
Weinbach, N., & Henik, A. (2012). Temporal orienting and alerting - the same or different? Frontiers in Psychology, 3, 1–3.
Zokaei, N., Board, A. G., Manohar, S. G., & Nobre, A. C. (2019). Modulation of the pupillary response by the content of visual working memory. Proceedings of the National Academy of Sciences of the United States of America, 116(45), 22802–22810. [CrossRef] [PubMed]