**Animals exploit antagonistic interactions for sensory processing, and these can cause oscillations between competing states. Ambiguous sensory inputs yield such perceptual multistability. Despite numerous empirical studies using binocular rivalry or plaid pattern motion, the mechanisms driving the spontaneous transitions between alternatives remain unclear. In the current work, we used a tristable barber pole motion stimulus, combining empirical and modeling approaches to elucidate the contributions of noise and adaptation to the underlying competition. We first robustly characterized the coupling between perceptual reports of transitions and continuously recorded eye direction, identifying a critical window of 480 ms before button presses within which the two measures were most strongly correlated. Second, we identified a novel nonmonotonic relationship between stimulus contrast and average perceptual switching rate, with an initially rising rate followed by a gentle reduction at higher contrasts. A neural fields model of the underlying dynamics, introduced in previous theoretical work and incorporating noise and adaptation mechanisms, was adapted, extended, and empirically validated. Noise and adaptation contributions were confirmed to dominate at the lower and higher contrasts, respectively. Model simulations, with two free parameters controlling adaptation dynamics and direction thresholds, captured the measured mean transition rates for participants. We verified the shift from noise-dominated toward adaptation-driven dynamics in both the eye direction distributions and the intertransition duration statistics. This work combines modeling and empirical evidence to demonstrate the signal-strength–dependent interplay between noise and adaptation during tristability. We propose that the findings generalize beyond the barber pole stimulus to ambiguous perception in continuous feature spaces.**

*B*(*x*, *y*, *t*) was generated according to Equation 1. *L*_{M} is the mean luminance of the screen (25.8 cd/m^{2}), *A* is the amplitude factor between 0 and 1, which scales the Michelson contrast, *c*, *v* is the grating speed (6 °/s), and *f* is the spatial frequency (0.41 c/°). In Equation 2, *θ* = 45° and *ψ* took values of 0°, 90°, 180°, and 270° to randomize presentations in the four oblique directions. Stimuli were generated on a Mac computer running Mac OS 10.6.8 and displayed on a ViewSonic p227f monitor (ViewSonic, Brea, CA) with a 20-in. visible screen of resolution 1024 × 768 at 100 Hz. Task routines were written using Matlab 7.10.0 (MathWorks, Natick, MA). Video routines from Psychtoolbox 3.0.9 (Brainard, 1997; Pelli, 1997) were used to control stimulus display. Eye movements were recorded using an SR Eyelink 1000 video eye tracker (SR Research, Mississauga, ON).

*c* = 0.03, 0.05, 0.08, 0.1, 0.15, and 0.2 (i.e., 3%–20%). There was an 8 s wait after each trial before the observer initiated the next trial with a button press. Each block was repeated six to eight times for Task 1 and eight times for Task 2, after a couple of initial blocks to familiarize participants with the task. Bad trials (e.g., those with excessive blinking or many abrupt eye movements unlikely to be stimulus driven, and self-reported "bad blocks" of lapsed attention) were discarded. Note that Task 2 has been explored extensively in a separate study, which fully characterized the changing patterns in the relative prevalence of each of the perceptual choices (Meso & Masson, 2015). The present study builds on that work with a larger data set and a focus on the analysis of perceptual switching, probed dynamically with the additional tool of smooth eye movements. The simplified task (Task 1) allowed inexperienced participants to perform the experiments more confidently than Task 2. The design omitting useful direction information was chosen because the complexity added by three separate button presses (H, D, or V) made Task 2 too difficult for most inexperienced participants, whose data we deemed critical for generalizing our results.

*saturation* (Equation 3) or rise to a *peak* (Equation 4) before gently descending with contrast. The shared parameters are the amplitude, *Amp*, the exponent, *n*, and the C_{50} term, *Cf*; for the peak function of Equation 4, there is an additional supersaturation exponent term, *ss*. The data are fitted to both functions using an iterative least squares process to identify the best parameters. A Kolmogorov-Smirnov goodness-of-fit test for significance is carried out on each of the pair of fits, and the results are then compared using the Akaike and Bayesian information criteria (AIC/BIC). These measures use likelihoods from the fits to determine which model provides a better explanation for the data, taking into account the number of parameters and thus penalizing less parsimonious models (Akaike, 1981; Schwarz, 1978; Wagenmakers & Farrell, 2004).
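As a hedged illustration (not the authors' code), candidate saturating and peaked contrast-response functions and an RSS-based AIC/BIC comparison might look like this in Python; the parameterizations follow common Naka-Rushton and supersaturation forms, and all names are ours:

```python
import numpy as np

def saturating(c, amp, n, cf):
    # Naka-Rushton-style saturating contrast response (cf plays the C50 role)
    return amp * c**n / (c**n + cf**n)

def peaked(c, amp, n, cf, ss):
    # supersaturating variant: for ss > 1 the curve rises to a peak
    # and then gently descends with contrast
    return amp * c**n / (c**(n * ss) + cf**(n * ss))

def aic_bic(y, yhat, n_params):
    # AIC and BIC from a Gaussian likelihood based on the residual sum of squares
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    m = len(y)
    rss = float(np.sum((y - yhat) ** 2))
    loglik = -0.5 * m * (np.log(2.0 * np.pi * rss / m) + 1.0)
    aic = 2.0 * n_params - 2.0 * loglik
    bic = n_params * np.log(m) - 2.0 * loglik
    return aic, bic
```

Each function would be fitted by iterative least squares (e.g., `scipy.optimize.curve_fit`) and the resulting AIC/BIC values compared, with lower values indicating the more parsimonious account.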

*x* and *y* velocities was used to obtain the direction, *θ*_{t}, estimated from the inverse tangent of the ratio of the *y* and *x* components. Operations on eye direction were made in a circular space after aligning all stimulus directions (Berens, 2009). Processing described in this section was carried out using bespoke Visual C++ and Matlab routines.
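This direction computation can be sketched as follows (our illustration, not the bespoke routines mentioned above); `arctan2` resolves the quadrant ambiguity of the inverse tangent, and the wrap keeps differences in a circular space:

```python
import numpy as np

def eye_direction(vx, vy):
    # direction (deg) of the eye velocity vector from its x and y components,
    # using the four-quadrant inverse tangent
    return np.degrees(np.arctan2(vy, vx))

def align_direction(theta_deg, stim_dir_deg):
    # rotate directions so all stimulus directions share a common reference,
    # then wrap the result into [-180, 180) degrees for circular statistics
    d = np.asarray(theta_deg) - stim_dir_deg
    return (d + 180.0) % 360.0 - 180.0
```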

*μ*_{θ}, estimated over a duration defined by the first decoding parameter, a variable temporal window size, *N*, and the resulting spread in the form of the standard deviation, *σ*_{θ}. *N* could be in one of three configurations along the eye trace: symmetrically centered on the instant of the button press, extending equally before and after it (symmetrical, Case 1); running from the past and stopping at the button press (prebutton, Case 2); or starting at the button press and stopping some time after it (postbutton, Case 3). The generated mean, *μ*_{θ}, and spread, *σ*_{θ}, parameters serve as inputs into a piecewise decision operation, which assigns a perceived direction using the second and third parameters, *PT*_{H} and *PT*_{V}. *μ*_{θ} is compared to the two threshold parameters *PT*_{H} and *PT*_{V} after the addition of the spread parameter scaled by a constant, *k* (fixed at *k* = 0.25 following initial optimization), which captures the fact that it is the distribution, rather than just the mean, whose position within the direction space is being categorized. Predictions are then made for a range of combinations of values (375,000) of simulated symmetrical, prebutton, and postbutton temporal windows, *N*, and parameters, *PT*_{H/V}, to find the combination that optimizes correct prediction for each participant's individual data set. To obtain estimates of chance performance as a baseline, the shuffled set of the recorded perceptual choices made by each participant is reassigned as decisions for each recorded transition.
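This decision rule can be sketched as follows (an illustrative reconstruction of the operation described above, not the authors' exact Equations 5–7; the function name and example thresholds are ours):

```python
import numpy as np

def decode_percept(theta_window, pt_h, pt_v, k=0.25):
    # theta_window: eye directions (deg) within the temporal window N, in the
    # aligned space where the diagonal D lies between the two cardinal
    # percepts; pt_h < pt_v are the threshold parameters PT_H and PT_V
    mu = float(np.mean(theta_window))     # window mean, mu_theta
    sigma = float(np.std(theta_window))   # window spread, sigma_theta
    # shift the mean by the scaled spread so that the distribution,
    # not just its center, is categorized (k fixed at 0.25 in the text)
    if mu + k * sigma < pt_h:
        return "H"
    if mu - k * sigma > pt_v:
        return "V"
    return "D"
```

A chance baseline would then be obtained by shuffling the recorded perceptual choices and reassigning them to transitions, as described above.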

*p*(*v*, *t*), over the continuous feature space of direction, *v*. The initial theoretical development is fully described in our previous computational work, which details the use of bifurcation analysis to tune the early model, the choice of physiologically plausible model parameters for Equations 8 and 9, and the choices surrounding the model input (Rankin et al., 2014). Here, the most relevant aspects of the model developed in the current work are briefly described. The tristable dynamical system includes adaptation, *α*(*v*, *t*), and noise, *X*(*v*, *t*), terms acting across direction, *v*, as well as a constant input term, *I*(*v*), which captures the competing direction cues. The main equations describing the dynamics are:
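Equations 8 and 9 themselves are not reproduced in this excerpt. As an illustrative sketch only, consistent with the terms described below (the decay −*p*, the sigmoid *S* with slope *λ* and threshold *T*, the kernel *J*, and the gains *k*_{I}, *k*_{α}, and *k*_{X}) but not necessarily the paper's exact form, such a neural field with adaptation can be written:

```latex
\tau_p \,\partial_t p(v,t) = -p(v,t)
  + S\!\big( k_I I(v) + (J \ast p)(v,t) - k_\alpha \alpha(v,t) + k_X X(v,t) \big),
\qquad S(x) = \frac{1}{1 + e^{-\lambda (x - T)}},

\tau_\alpha \,\partial_t \alpha(v,t) = -\alpha(v,t) + p(v,t).
```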

*τ*_{p} and *τ*_{α}. For Equation 8, the standard decay term is −*p*, and *S* is a sigmoidal function with slope parameter, *λ*, and threshold, *T*, used to constrain the firing rates. The value of *λ* is analogous to the gain of the contrast response, as estimated using the Naka-Rushton function (Naka & Rushton, 1966). When the maximum contrast response is fixed at 1, a single parameter determines the contrast sensitivity: the half-saturation response contrast, C_{50}. Gain coefficients for the input, adaptation, and noise terms are *k*_{I}, *k*_{α}, and *k*_{X}, respectively. The noise, *X*(*v*, *t*), is generated by an Ornstein-Uhlenbeck process, selected to allow linear transformation of the space–time variables. Lateral interactions across the direction space are set by a center-surround interaction kernel, *J*, defined by three Fourier modes with a Mexican-hat shape, in which local excitation becomes inhibition for more distant directions.
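As an illustration of the noise term (parameter names and values here are our placeholders, not the paper's), an Ornstein-Uhlenbeck process can be simulated with an Euler-Maruyama scheme:

```python
import numpy as np

def ou_noise(n_steps, n_dirs, dt, tau=0.1, sigma=1.0, rng=None):
    # Euler-Maruyama simulation of an Ornstein-Uhlenbeck process with
    # correlation time tau and stationary standard deviation sigma;
    # one independent channel per direction bin
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros((n_steps, n_dirs))
    scale = sigma * np.sqrt(2.0 * dt / tau)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - (x[i - 1] / tau) * dt + scale * rng.standard_normal(n_dirs)
    return x
```

In the full model the noise would additionally be correlated across the direction space; independent channels are used here for brevity.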

*k*_{α}) were varied (Kuznetsov, 1998). AUTO monitors the stability of the states being tracked, using this information to detect bifurcations: the points at which there is a sudden change to a qualitatively different type of solution of a nonlinear dynamical system as a parameter is changed. Away from bifurcation points, this tracking information tells us how the relative stability changes with respect to the parameters, a measure that provides a powerful predictive tool in the current context. For simplicity, this *stability* output can be quantified by the real part of the eigenvalue for the nonoscillating state, which corresponds here to the direction D; we call this output *E*. The real part of the so-called Floquet exponent, which we term *F*, is similarly informative about the oscillatory or H-V (cardinal direction) states (Kuznetsov, 1998). *E* and *F* quantify the timescales of growth or decay of perturbations towards the steady and oscillatory states, respectively; when these measures are negative, the states are stable, and they become less stable as the values increase.

*I*(*v*), which is a trimodal smooth function across the continuous direction space, *v*, with a peak centered on the diagonal (*v* = 0) flanked by two peaks on either side (±45°). These peaks were described by Gaussian functions, *I*_{1D}(*v*) and *I*_{2D}(*v*), with sigma widths of 18° for 1D and 6° for 2D. The 1D and 2D contributions were summed to produce the input function, *I*(*v*). The weighting in this summation has a contrast dependence built into the 1D term.

*w*_{1D} is therefore set to zero above *c* = ⅚. This limit marks the edge of the parameter region beyond which multistability can no longer be perceived or, indeed, modeled. The resulting effect of contrast on the input signal-to-noise ratio, which determines the dynamic weighting of these competing cues, is consistent with previous work (Lorenceau & Shiffrar, 1992).
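A sketch of this input construction follows (the Gaussian centers and widths are from the text; the linear decay of the 1D weight with contrast is our assumption, constrained only by the stated zero point at *c* = ⅚):

```python
import numpy as np

def gauss(v, mu, sigma):
    # unit-height Gaussian bump over the direction space (deg)
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2)

def input_drive(v, contrast, c_limit=5.0 / 6.0):
    # I(v): broad 1D (grating) peak on the diagonal (sigma 18 deg) plus two
    # narrow 2D (terminator) peaks at +/-45 deg (sigma 6 deg); w_1d is a
    # hypothetical contrast-dependent weight that reaches zero at c = 5/6
    w_1d = max(0.0, 1.0 - contrast / c_limit)
    i_1d = gauss(v, 0.0, 18.0)
    i_2d = gauss(v, -45.0, 6.0) + gauss(v, 45.0, 6.0)
    return w_1d * i_1d + i_2d
```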

*k*_{X} set to zero and used to identify parameter regions that conform to expectations following the initial psychophysics experiments. The aim is to restrict the range of values of the adaptation gain, *k*_{α}, so that the model can best account for the nonmonotonic relationship between perceptual switching rate and contrast. With strong adaptation (*k*_{α} = 0.03), the onset of switching at a critical value of the contrast is sharp; the falling phase of the switching rate is captured in this case, but not the rising phase. With no adaptation (*k*_{α} = 0), the switching rate would increase monotonically. The tuning process seeks an intermediate range of adaptation strengths (later set near *k*_{α} ≈ 0.01) at which the experimentally observed rising and falling phases of the switching rate curve are both possible.

Full simulations were then run with noise (*k*_{X} = 0.004). The goal is to generate a continuous dynamic direction output of high temporal resolution, modeling the cortical global motion representation during the competition in each trial. This can be read out in various ways and compared both to the continuous eye traces and to the perceptual decisions under the range of contrast conditions. The main readout subsequently reported is a count of the number of simulated percept changes between the direction states H, D, and V over each of the 1,500 simulated trials (our standard number of simulated trials) carried out per contrast value over the tested range. To count switches, a pair of threshold values is applied to the dynamic peak of *p*(*v*, *t*), set at a distance ±*PT* from the diagonal. *PT* captures the fact that, given a forced choice along a continuous space, participants will make a categorical decision based on boundary criteria that vary across individuals (see also Equations 5–7; *PT* ∝ *PT*_{V} − *PT*_{H}); this parameter sets the boundary between H, D, and V in direction space. The model implementation assumes symmetry between H and V in the direction space, though we note that the actual data show biases across this space. *PT* is our first critical free parameter when bringing the model into an operating regime in which it can be tied to individual participants. The second free parameter is the adaptation strength, *k*_{α}. Low-level visual adaptation dynamics show some variation across individuals, which this parameter captures. In the perceptual competition between directions, small shifts in its value impact the transition rates, allowing us to adjust the simulated rates according to individual performance. Finally, the normalized direction distributions of the full dynamic model output, *p*(*v*, *t*), generated for the simulations over the range of contrasts give the predicted percept probability across the direction space. This can be compared directly to the eye direction distributions over corresponding contrast ranges. We test for changes in these distributions by fitting multimodal nested functions, *F*_{D}, of one to three peaks, modeled by Lorentzian functions (typically sharper than Gaussians, providing a better fit to the current data), to quantify whether the modality of these distributions (i.e., one dominant peak or multiple peaks in eye direction) changed across the tested contrast range. The general form of the fitted function is,

One mode (*n* = 1), two modes (*n* = [1, 2]), and three modes (*n* = [1, 2, 3]) are fitted. The fits therefore have four, seven, and 10 parameters, respectively, so the comparison between them is done with AIC and BIC to find which function best describes the number of peaks in the distributions. This is done both for the individual data distributions and for those from the grouped data.
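As a sketch (our own illustration), the percept-change count used as the model's main readout, with category boundaries at ±*PT* around the diagonal at 45°, could be implemented as:

```python
import numpy as np

def count_switches(peak_dir_deg, pt):
    # classify each sample of the peak of p(v,t): H below 45 - PT,
    # V above 45 + PT, otherwise D; then count changes of label
    peak = np.asarray(peak_dir_deg, dtype=float)
    labels = np.where(peak < 45.0 - pt, 0, np.where(peak > 45.0 + pt, 2, 1))
    return int(np.sum(labels[1:] != labels[:-1]))
```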

*x* and *y* directions, and this nonlinear estimate is therefore highly susceptible to noise (see fourth row, left-hand column in Figure 2). Once traces were cleared of the most abrupt changes in eye position and speed, smoother traces appeared with gaps (see right-hand column of Figure 2). Underlying dynamic trends in eye direction can be seen in the noisy traces in the fourth row of the right-hand column of Figure 2, in the thick red lines, which result from applying a generic smoothing for illustration only. A similar trend can be seen in the same data sampled at 5 Hz in the last rows of the panels in Figure 2, where instances of button presses are also included. These example results highlight the noisy nature of the individual traces obtained, the need for appropriate filtering (fully detailed in the Methods section), and the inherent limitations in trying to determine instantaneous motion direction perception from this data set without prior knowledge of the instance of button presses.

*PT*_{V} in Figure 3B) up from the diagonal, traces will typically correspond to a perception of V. To interrogate the spread within these data, the averaged eye direction traces extending through the same range (±0.75 s around each button press) are binned into histograms (50-bin width) separated by decision (H-D-V) and contrast condition. The resulting distributions are shown in Figure 3D–I. The direction densities for the cardinal directions H and V consistently flank the D density (green) on either side. The differences between the means and modes (inset in Figure 3, black and gray lines, respectively) of the H and V distributions quantify how separated the peaks are and show an approximate increase in separation as contrast is increased. These distributions demonstrate intuitively how the effectiveness of a decoding process depends on the distributions within the direction space and, hence, relate to the *PT*_{H} and *PT*_{V} parameters. They also show that we might expect different optimal values across the contrast range.

*N* (in ms) of the eye direction trace and the pair of threshold parameters, *PT*_{H/V}. When this decoding scheme is applied to the data for each of the participants who did Task 2, the optimum prediction results can be compared for the symmetrical window of eye directions extending equally before and after the button press (Figure 4A), the prebutton window (Figure 4B), and the postbutton window (Figure 4C). The top row shows the optimally fitted *PT* values for each participant (∝ *PT*_{V} − *PT*_{H}) plotted against temporal window size. The parameter distribution showing the least spread when the different window configurations are compared corresponds to the prebutton window (see Figure 4D), which notably uses approximately half the eye direction information of the longer symmetric window for the decision. *PT*_{V}/*PT*_{H} show a range of values that all reveal the asymmetry in the direction space for the empirical data (H-bias; see also Figure 3D through I). The complete results for optimal fitting and subsequent decoding, including a comparison of the three window configurations, are given in Table 2 and Figure 4H. These include the baseline performance of around 36%, shown with standard deviation as horizontal blue lines in Figure 4H, obtained by shuffling the response data and using them to reassign randomized decisions. The average prediction performance of the selected prebutton window is 65%, almost 30% higher than the baseline and slightly higher than the alternative window configurations. We note that we were able to achieve even higher prediction performance (>80%) by optimizing separate classification parameters for each contrast; the resulting parameters would be less general, however, and such extensions are beyond the aim and scope of the current work. The present results reliably indicate that changes in time-averaged eye direction precede the button press by several hundred milliseconds, reflecting the button press choice, and therefore help elucidate the constrained window in which these two measures are most related. This window is about 484 ± 77 ms before the button press. We acknowledge that in this decoding result, we were unable to achieve the more difficult goal of reliably determining when a perceptual transition occurred within an eye movement trace. Although further work toward that goal continues, it may well be limited by the variability of individual eye movement traces. We instead rely on the complementary nature of our two behavioral measures.

*t* test for both the AIC (means −2.48 and −1.50; *t*(11) = −1.99, *p* = 0.036) and the BIC (means −2.65 and −1.64; *t*(11) = −2.05, *p* = 0.032). Therefore, our results demonstrate, at both the group and individual levels of analysis, that the switching rate rises fast at low contrasts, peaks at mid contrasts around 8%–10%, and then decreases gently, though at different rates for different participants.

*k*_{α}) and the contrast (*c*). The model represents the continuous perceived direction space with the dynamic function, *p*(*v*, *t*), which peaks at just one "winning" direction following the application of mutual inhibition across direction space, resulting in a peak that drifts across the direction space over time. The purpose of bifurcation analysis was to identify qualitatively different regions of interest in this parameter space and to work within regions with dynamic properties comparable to the experimental data. The bifurcation curves plotted in Figure 6B were computed without noise (*k*_{X} = 0). The curves bound three parameter regions with different dynamics: in white, a low-contrast regime where the system is below threshold (the input is not detected); in light gray, regular oscillations (switches) driven by adaptation; and in dark gray, no switching in the deterministic (no-noise) case. When noise is added (*k*_{X} ≠ 0), switching in the light gray region of Figure 6B remains driven by adaptation, and this region is therefore labeled "a." In the dark gray region, with zero or very low adaptation strength, switches can only be driven by noise, so we label this region "n." In a parameter tuning explained in the Methods section, we identify an operating regime for the model in which the switching rate relationship found in the experiments (Figure 5) is best accounted for. This zone lies near the transition between regions n and a (thick black curve in Figure 6B, within the dotted rectangle labeled n/a). Within this rectangle, there is a shift in the dominant mechanism driving the switching behavior from noise to adaptation as contrast is increased, shown by the gradation of shading from dark to light gray.

_{x}*k*, and the perceptual threshold,

_{α}*PT*, which fine-tune the switching rate functions to allow them to vary with the range of trends observed in the experiments (Figure 5A and B).

*PT* demarcates the direction space, separating what is considered D from H and V. It defines the symmetrical distance from the diagonal at D = 45°: when the value of the peak of the dynamic function, *p*(*v*, *t*), falls below 45 − *PT*, the direction is H (i.e., *PT*_{H} in the eye movements), and above 45 + *PT*, the direction is V (i.e., *PT*_{V} in the eye movements). We assume symmetry in the model even though this is not the case in the eye traces, where an H bias can be seen. The effect this parameter has on simulated switching rates is shown in Figure 6C, where it shifts the function up or down and changes the steepness of the low-contrast rise. The *k*_{α} parameter determines the strength of adaptation, controlling its depth of modulation across the direction space. When it is finely controlled, restricted to within the rectangle identified in the bifurcation analysis, this parameter shifts the position of the switching rate peak and the extent to which there is a reduction in switching rate at higher contrasts (see Figure 6D). Finally, the stability parameters obtained during the bifurcation analysis and described in the Methods section, the Floquet exponent, *F* (red trace), corresponding to the adaptation-driven transitions, and the eigenvalue of the steady solution, *E* (black trace), corresponding to the noise-driven transitions, are plotted for the contrast range in Figure 6E. The change in stability predicted with increasing contrast can be visualized through potential wells within which gravity acts on a particle, illustrated in Figure 6F. Dominance in the D direction shifts systematically towards the cardinal directions as stimulus contrast is increased.

*PT* and *k*_{α}) for each of the 12 data sets, so that we can compare the empirical and simulated switching rate functions by plotting them together (see Figure 7; compare the simulated gray squares with the data in black circles). We find that, through variation of the two parameters, the model closely captures the full range of trends reported by the participants across both tasks. Note that the range of fitted *PT* values in the simulations is as broad as that seen in the eye movement data of Task 2 (Figure 4A through C).

*PT* and *k*_{α} parameters for all the data combined (*PT* = 13.125, *k*_{α} = 0.01125), simulations are then run across the contrast range to extract additional statistical properties of the perceptual switching. The first of these is the coefficient of variation (CV) of the time between switches for each contrast, calculated as the standard deviation of the percept durations divided by their mean. The results are shown in Figure 8. The model predicts a gradual reduction in CV (gray dashed trace) from 0.85 as contrast is increased from 3% to about 10%, before the value plateaus at around 0.65 for the rest of the contrast range. A similar trend is observed for the experimental data (black circles), with the standard error of CV across participants given by the error bars. The trend is driven by a shift from highly variable percept durations for the low-contrast, noise-driven transitions toward slightly less variable durations during the adaptation-dominated transitions at higher contrasts. The model predicts slightly higher variability than the experiments in the low-contrast range where noise dominates, but model and data converge at mid and higher contrasts.
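As a simple illustration (our code, not the authors'), the CV of intertransition durations is:

```python
import numpy as np

def duration_cv(durations):
    # coefficient of variation of percept durations:
    # standard deviation divided by the mean
    d = np.asarray(durations, dtype=float)
    return float(d.std() / d.mean())
```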

*p*(*v*, *t*) obtained from 1,500 simulations of 15 s each are plotted for three contrasts: 3%, 8%, and 15%. This prediction shows a clear transition from a unimodal distribution centered on the diagonal direction (black dashed trace) towards bimodal distributions that increase in separation as contrast is increased (gray dashed lines). To test this prediction with the experimental data, we use the eye movements recorded in Task 2, separate out the eye directions obtained for H, D, and V button presses restricted to the critical 450 ms time window before the button press, and plot the peak direction (with interquartile range as error bars) for the contrast range tested (see Figure 9B). We see a progressive increase in the separation of the peaks as contrast is increased, consistent with the predicted increase in stability: the same trend as the simulations, but asymmetrically skewed towards the H direction in the empirical data. The use of the peak of the *p*(*v*, *t*) function was necessary because, when the full continuous function is considered without restriction to the 450 ms decision window, the resulting distribution is broader and does not clearly show the separate underlying peaks (see Figure 9C, obtained from the same simulations as 9A). Because eye direction in the current experiment was a continuous empirical measure, these simulated distributions were analogous to those produced when all the eye direction data from both tasks were plotted, combining all the H-D-V button presses indiscriminately (Figure 9D). The eye data distributions no longer show clearly discernible peaks but become progressively wider with increasing contrast.

References

Akaike, H. (1981). Likelihood of a model and information criteria. *Journal of Econometrics*, 16(1), 3–14.

Berens, P. (2009). CircStat: A MATLAB toolbox for circular statistics. *Journal of Statistical Software*, 31(10), 1–21.

Brainard, D. H. (1997). The Psychophysics Toolbox. *Spatial Vision*, 10(4), 433–436.

Kuznetsov, Y. A. (1998). *Elements of applied bifurcation theory* (2nd ed., Vol. 112). New York: Springer-Verlag.

Lorenceau, J., & Shiffrar, M. (1992). The influence of terminators on motion integration across space. *Vision Research*, 32, 263–273.

Meso, A. I., & Masson, G. S. (2015). Dynamic resolution of ambiguity during tri-stable motion perception. *Vision Research*, 107, 113–123.

Naka, K. I., & Rushton, W. A. H. (1966). S-potentials from luminosity units in the retina of fish (Cyprinidae). *The Journal of Physiology*, 185(3), 587–599.

Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. *Spatial Vision*, 10(4), 437–442.

Rankin, J., Meso, A. I., Masson, G. S., Faugeras, O., & Kornprobst, P. (2014). Bifurcation study of a neural field competition model with an application to perceptual switching in motion integration. *Journal of Computational Neuroscience*, 36(2), 193–213.

Schwarz, G. (1978). Estimating the dimension of a model. *Annals of Statistics*, 6(2), 461–464.

Wagenmakers, E.-J., & Farrell, S. (2004). AIC model selection using Akaike weights. *Psychonomic Bulletin & Review*, 11(1), 192–196.