To determine which attention mechanism best characterized the observed attention effect, we first fit the reduced normalization model (Equation 1) to each subject's signal contrast thresholds from the uncued condition. This model is essentially a modified Naka-Rushton function:
\begin{equation}d^{\prime} = d^{\prime}_{max} \times \left( \frac{c_S^n}{c_S^n + c_N^n + c_{50}^n} \right)\end{equation}
where d′ represents discriminability or perceptual sensitivity; d′max, maximum perceptual sensitivity; cS, contrast of the signal (the target grating); cN, contrast of the noise mask; c50, the semi-saturation point; and n, the dynamic range or nonlinear transducer. The parameters that represent attention mechanisms—stimulus enhancement, signal enhancement, and external noise exclusion—are excluded from this reduced model to establish a baseline in the absence of attention. Solving for the observer's signal contrast threshold in this reduced model generates predicted threshold versus contrast curves (Equation 2; Blakemore & Campbell, 1969).
\begin{equation}
c_S = \left( \frac{d^{\prime} \times \left( c_N^n + c_{50}^n \right)}{d^{\prime}_{max} - d^{\prime}} \right)^{1/n}
\end{equation}
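For concreteness, Equation 2 can be sketched in Python with NumPy (the study itself used MATLAB; the parameter values below are arbitrary placeholders, not fitted estimates):

```python
import numpy as np

def threshold_reduced(d_prime, c_n, d_max, c50, n):
    """Equation 2: predicted signal contrast threshold of the reduced
    (no-attention) normalization model."""
    return ((d_prime * (np.asarray(c_n)**n + c50**n))
            / (d_max - d_prime)) ** (1.0 / n)

# Illustrative placeholder parameters (not the fitted values):
c_noise = np.array([0.0, 0.05, 0.10, 0.20, 0.40])  # external noise contrasts
tvc = threshold_reduced(d_prime=1.0, c_n=c_noise, d_max=3.0, c50=0.1, n=2.0)
# tvc traces a threshold-versus-contrast (TvC) curve: predicted thresholds
# rise as external noise contrast increases.
```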
Using nonlinear regression, we fit each subject's signal contrast thresholds in the uncued condition with this reduced model. Initial values for d′max, c50, and n were chosen via a series of grid searches for the initial parameter values that yielded the lowest sum of squared errors; the parameters were then estimated using the fmincon function in MATLAB. Next, we fit variants of the modified normalization model to the measured signal contrast thresholds from the cued condition. Each variant allowed a different attentional coefficient, or combination of attentional coefficients, to vary while fixing d′max, c50, and n to the values estimated from the reduced (baseline) normalization model. The full normalization model, including all attention mechanisms, is expressed as follows (Equation 3):
\begin{equation}d^{\prime} = d^{\prime}_{max} \times \left( \frac{A_{St} \times A_S \times c_S^n}{A_{St} \times A_S \times c_S^n + A_{St} \times A_N \times c_N^n + c_{50}^n} \right)\end{equation}
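Equation 3 maps stimulus contrasts to predicted sensitivity. A minimal Python sketch (parameter and coefficient values would come from the fits described here; this is an illustration, not the study's MATLAB code):

```python
import numpy as np

def d_prime_full(c_s, c_n, d_max, c50, n, a_st=1.0, a_s=1.0, a_n=1.0):
    """Equation 3: predicted sensitivity under the full normalization model.
    With a_st = a_s = a_n = 1 this reduces to Equation 1."""
    signal = a_st * a_s * np.asarray(c_s)**n   # attended signal drive
    noise = a_st * a_n * np.asarray(c_n)**n    # attended external-noise drive
    return d_max * signal / (signal + noise + c50**n)
```

Note that setting all three coefficients to 1 recovers the reduced model exactly, which is what licenses fixing d′max, c50, and n at their baseline estimates.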
ASt is the stimulus enhancement coefficient, acting on both the signal, cS, and the noise, cN. AS is the signal enhancement coefficient, acting solely on the signal. Finally, AN is the noise exclusion coefficient, acting strictly on the external noise. All attention coefficients were constrained to values between 0 and 5: a value of 0 produces complete suppression of the response to a stimulus component (signal cS or noise cN, depending on the coefficient), a value of 1 produces no attentional modulation relative to the reduced model, and values greater than 1 enhance the response to a stimulus component. Solving for signal contrast thresholds results in the following expression (Equation 4):
\begin{equation}c_S = \left( \frac{d^{\prime} \times \left( A_{St} \times A_N \times c_N^n + c_{50}^n \right)}{A_{St} \times A_S \times \left( d^{\prime}_{max} - d^{\prime} \right)} \right)^{1/n}\end{equation}
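Equation 4 can likewise be sketched directly; the version below (Python standing in for MATLAB, with arbitrary illustrative values) shows how each coefficient moves the predicted threshold:

```python
import numpy as np

def threshold_full(d_prime, c_n, d_max, c50, n, a_st=1.0, a_s=1.0, a_n=1.0):
    """Equation 4: signal contrast threshold under the full model.
    Setting a_st = a_s = a_n = 1 recovers the reduced model (Equation 2)."""
    num = d_prime * (a_st * a_n * np.asarray(c_n)**n + c50**n)
    den = a_st * a_s * (d_max - d_prime)
    return (num / den) ** (1.0 / n)

# Noise exclusion (a_n < 1) lowers thresholds mainly at high external noise,
# while signal enhancement (a_s > 1) lowers thresholds across noise levels.
# Coefficient values here are arbitrary illustrations, not fitted values.
```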
Accounting for each attention mechanism and each combination of attention mechanisms resulted in a total of six additional variants of the modified normalization model, which we fit to each subject's cued-condition data using the fmincon function in MATLAB.
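The fitting procedure can be outlined as a bounded least-squares search over whichever coefficients a variant frees. The sketch below uses scipy.optimize.minimize as a stand-in for MATLAB's fmincon, and the VARIANTS list is an illustrative reconstruction of the seven model fits (three single mechanisms, three pairs, and the full model), not code from the study:

```python
import numpy as np
from scipy.optimize import minimize

# One entry per model variant: which attention coefficients are free
# (an assumed enumeration; fixed coefficients stay at 1).
VARIANTS = [("a_st",), ("a_s",), ("a_n",),
            ("a_st", "a_s"), ("a_st", "a_n"), ("a_s", "a_n"),
            ("a_st", "a_s", "a_n")]

def fit_variant(free, c_n, observed, d_prime, d_max, c50, n):
    """Estimate the free attention coefficients by least squares on
    Equation 4 thresholds, analogous to the fmincon fits."""
    def sse(x):
        coef = {"a_st": 1.0, "a_s": 1.0, "a_n": 1.0}
        coef.update(dict(zip(free, x)))
        pred = ((d_prime * (coef["a_st"] * coef["a_n"] * c_n**n + c50**n))
                / (coef["a_st"] * coef["a_s"] * (d_max - d_prime))) ** (1.0 / n)
        return float(np.sum((pred - observed) ** 2))
    # Lower bound uses a small epsilon rather than 0 to avoid division by
    # zero at the boundary; the paper's stated constraint is [0, 5].
    res = minimize(sse, x0=np.ones(len(free)), bounds=[(1e-6, 5.0)] * len(free))
    return dict(zip(free, res.x)), res.fun
```

For example, thresholds generated with a signal enhancement of 2 are recovered by fitting the signal-enhancement-only variant.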
To evaluate which mechanisms could most parsimoniously account for our data, we used a corrected version of the Akaike Information Criterion (AICc; Akaike, 1974; Cavanaugh, 1997). This metric accounts for the number of observations and free parameters in a model to estimate the relative amount of information loss; the lower the AICc, the better a given model explains the data. For each subject, we computed the difference between each model variant's AICc and the minimum AICc across variants; the better a model, the closer this difference should be to zero on average.
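Under the common least-squares formulation with Gaussian errors (an assumption here; the text does not spell out the likelihood used), AICc and the per-subject differences from the best variant can be computed as:

```python
import numpy as np

def aicc(sse, n_obs, k):
    """Corrected AIC for a least-squares fit assuming Gaussian errors:
    AIC = n*ln(SSE/n) + 2k, plus the small-sample correction term."""
    aic = n_obs * np.log(sse / n_obs) + 2 * k
    return aic + (2 * k * (k + 1)) / (n_obs - k - 1)

def delta_aicc(values):
    """Each model variant's AICc minus the minimum AICc across variants;
    the best-supported variant has a difference of 0."""
    a = np.asarray(values, dtype=float)
    return a - a.min()
```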