**The recent history of perceptual experience has been shown to influence subsequent perception. Classically, this dependence on perceptual history has been examined in sensory-adaptation paradigms, wherein prolonged exposure to a particular stimulus (e.g., a vertically oriented grating) produces changes in the perception of subsequently presented stimuli (e.g., the tilt aftereffect). More recently, several studies have investigated the influence of shorter perceptual exposures, with effects—referred to as serial dependence—described for a variety of low- and high-level perceptual dimensions. In this study, we examined serial dependence in the processing of dispersion statistics, namely variance—a key descriptor of the environment and indicative of the precision and reliability of ensemble representations. We found two opposite serial dependences operating at different timescales, and likely originating at different processing levels: a positive, Bayesian-like bias that was driven by the most recent exposures, depended on feature-specific decision making, and appeared only when high confidence was placed in that decision; and a longer lasting negative bias—akin to an adaptation aftereffect—that became manifest as the positive bias declined. Both effects were independent of spatial presentation location and of the similarity of other close traits, such as the mean direction of the visual variance stimulus. These findings suggest that visual variance processing occurs in high-level areas but is also subject to a combination of multilevel mechanisms balancing perceptual stability and sensitivity, as is the case for many other perceptual dimensions.**

*serial dependences* have been found for several low- and high-level features (Cicchini, Anobile, & Burr, 2014; Fischer & Whitney, 2014; John-Saaltink, Kok, Lau, & de Lange, 2016; Liberman, Fischer, & Whitney, 2014; Xia, Liberman, Leib, & Whitney, 2015). It has been proposed that these two different effects contribute in opposite ways to the tuning of the balance between perceptual sensitivity and stability: While negative adaptation produces a normalization of neural representations in order to maximize sensitivity to changes around the most frequent stimulus intensity, serial dependence contributes to perceptual stability by smoothing out discrete discontinuities as sensory noise (Fischer & Whitney, 2014).

²). The cluster spanned 5° of visual angle (°va) along the horizontal and vertical dimensions and comprised 100 light-gray dots (diameter = 0.11°va, luminance = 43.14 cd/m²) moving along a straight trajectory at a rate of 2 pixels/frame (8.45°va/s). The initial position of each dot was uniformly randomized (excluding overlap with other dots), and its coordinates were updated per frame by a trigonometric calculation based on the individual dot's angular motion direction, re-entering the cluster from the opposite side if it reached a boundary. Each dot's motion direction was drawn from a circular Gaussian (von Mises) distribution that varied for each stimulus presentation: Its mean could take any random integer value from 0° to 359°, and its standard deviation was pseudorandomized among six possible values—namely 5°, 10°, 20°, 30°, 40°, and 60°. This parameter, the standard deviation of RDK motion (StD), is the dimension of interest in this experiment.
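The per-frame update described above can be sketched in a few lines. This is an illustrative re-implementation, not the authors' stimulus code: the 60 Hz frame rate and the small-angle conversion from StD to von Mises concentration are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DOTS = 100
CLUSTER = 5.0            # cluster width/height in degrees of visual angle (°va)
SPEED = 8.45 / 60.0      # °va per frame, assuming 8.45 °va/s at a 60 Hz refresh

std_deg = 20.0                            # one of the six StD levels
mu = np.deg2rad(rng.integers(0, 360))     # mean direction, any integer 0°-359°
kappa = 1.0 / np.deg2rad(std_deg) ** 2    # small-angle approximation: kappa ~ 1/sigma^2

# one fixed direction per dot, drawn from the circular Gaussian (von Mises)
theta = rng.vonmises(mu, kappa, size=N_DOTS)
xy = rng.uniform(0.0, CLUSTER, size=(N_DOTS, 2))   # random initial positions

def step(xy, theta):
    """Advance every dot one frame along its straight trajectory,
    re-entering the cluster from the opposite side at a boundary."""
    xy = xy + SPEED * np.column_stack((np.cos(theta), np.sin(theta)))
    return np.mod(xy, CLUSTER)

xy = step(xy, theta)
```

The wraparound via `np.mod` reproduces the "re-enter from the opposite side" rule; overlap exclusion at initialization is omitted for brevity.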

*n* was valid but Trial *n* − 3 was not, Trial *n* was not included in analyses of serial dependence associated with position *n* − 3 or further backward. A total of 12,480 trials entered the analysis.
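This inclusion rule can be made concrete with a small mask computation (a sketch with toy validity flags, not the actual data):

```python
import numpy as np

def usable(valid, t):
    """Trial n enters the lag-t serial-dependence analysis only if
    trial n itself and trial n - t are both valid."""
    out = valid.copy()
    out[:t] = False              # the first t trials have no trial n - t
    out[t:] &= valid[:-t]
    return out

valid = np.array([1, 1, 0, 1, 1, 1, 0, 1], dtype=bool)  # toy validity flags
mask3 = usable(valid, 3)        # which trials count for the n - 3 analysis
```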

*R*) for each StD value and visual eccentricity. Reports were positively correlated with stimulus StD and increased monotonically with it for both foveal and peripheral presentations, showing that participants were able to perceive the different levels of variance presented in the experiment.

*n*) and eccentricity—on participants' responses. Both main effects and their interaction were significant (sphericity correction was applied by the Greenhouse–Geisser method). For StD*n*, the main effect yielded *F*(1.825, 45.621) = 473.80, *p* < 0.001; for eccentricity, *F*(1, 25) = 33.32, *p* < 0.001 (fovea vs. periphery: *t*(25) = 8.237, *p* < 0.001, Cohen's *d* = 1.615). The StD*n* × Eccentricity interaction was also significant, *F*(2.715, 67.882) = 20.06, *p* < 0.001, reflecting lower responses in the periphery especially at large StD*n* values, as shown in Figure 2A. These results were confirmed in a Bayesian repeated-measures ANOVA with the same variables: The full model (both main effects and interaction) was the most explanatory according to the Bayes factor, outperforming the second best (only the two main effects) by a factor of BF_full/main = 1.075 × 10⁶. These findings (lower responses in periphery than in fovea, especially for large StD*n*) seem to relate to a greater regression to the mean exhibited in responses about peripheral stimuli (likely due to worse discrimination between stimulus levels), combined with the fact that the range of the response scale allows for larger errors by overestimation than underestimation.

*n* level and eccentricity on response dispersion. The main effect of StD*n* yielded *F*(2.994, 74.840) = 58.426, *p* < 0.001; that of eccentricity, *F*(1, 25) = 4.165, *p* = 0.052 (fovea vs. periphery: *t*(25) = −1.738, *p* = 0.086, Cohen's *d* = −0.339). Last, the effect of the StD*n* × Eccentricity interaction was *F*(3.530, 88.244) = 4.757, *p* = 0.002. A Bayesian repeated-measures ANOVA favored the full model (StD*n*, eccentricity, and interaction), which outperformed the second best (with only StD*n*) by a factor of BF_full/StDn = 8.747. In summary, response dispersion increased with stimulus (StD) level, and there was a (nearly significant) trend toward greater response dispersion for peripheral presentations, especially at large StDs, suggesting slightly worse performance at 20°va eccentricity compared to 0°va, in agreement with the previous finding of a greater regression to the mean in peripheral responses.

*n* − 1) or at positions further backward in trial history (Trial *n* − *t*). Thus, the response variable in our analyses of serial dependence, unless stated otherwise, is the normalized response error relative to the current stimulus (zRE*n*). Response errors, defined as RE*n* = (R*n* − StD*n*)/StD*n*, are normalized by the distribution of reports provided by each individual for the level of StD presented in the current trial. Thus, zRE*n* sums to zero across all trials for a given participant and StD*n* level: A negative zRE*n* indicates that the participant provided a below-average response in that trial compared to their responses for other physically identical stimuli, while a positive zRE*n* indicates an above-average response. Therefore, normalization ensures that the value of the response variable zRE*n* is independent of the current StD*n* level and of each participant's global scoring biases.
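As a sketch, this normalization can be implemented as a grouped z-score (column names are illustrative, not the authors' actual variables):

```python
import numpy as np
import pandas as pd

# toy trial table: participant, current StD level, and raw report R
df = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "StD":         [5.0, 5.0, 20.0, 20.0, 5.0, 5.0, 20.0, 20.0],
    "R":           [6.0, 8.0, 18.0, 26.0, 4.0, 6.0, 22.0, 30.0],
})

# RE_n = (R_n - StD_n) / StD_n, then z-scored within each
# participant x StD_n cell so zRE_n is level- and bias-independent
df["RE"] = (df["R"] - df["StD"]) / df["StD"]
df["zRE"] = df.groupby(["participant", "StD"])["RE"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=0)
)
```

By construction, zRE averages to zero within every participant × StD cell, which is the property the analyses above rely on.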

*n* as a function of the previous stimulus (StD*n−1*), plotted separately by eccentricity. Regardless of generally lower reports at larger eccentricity, a trend toward larger zRE*n* for higher StD*n−1* values is evident for all trials pooled as well as for both foveal and peripheral presentations, as shown by the ascending slope of the three plots (Fovea, Periphery, All). In other words, there was a relative overestimation of the current stimulus when the previous stimulus had a large StD, and a relative underestimation when the previous StD was small, compared to other trials in identical conditions of eccentricity. This indicates a positive (attractive, Bayesian-like) bias driven by Trial *n* − 1: Current responses resemble the previous stimulus—serial dependence for visual variance.

*n−1* level (as a within-subject factor) on current variance reports (zRE*n*). The effect of StD*n−1* was statistically significant, *F*(3.231, 93.697) = 7.221, *p* < 0.001. The Bayes factor for the model including StD*n−1* compared to the null model (both of them included participant as a grouping variable) was BF_inclusion = 56,187.91, indicating *extreme* (Wagenmakers et al., 2017) evidence for the superior explanatory ability of the model that included this term.

(as dependent variable) with two within-subject factors: StD*n−1* and each of the features of interest separately (eccentricity, retinal location, and similarity of means).

For eccentricity, *F*(1, 25) = 31.004, *p* < 0.001, effect size = 0.554; for StD*n−1*, *F*(2.662, 66.556) = 7.029, *p* < 0.001; for their interaction, *F*(3.789, 94.722) = 1.710, *p* = 0.157, suggesting no modulation by eccentricity of the serial dependence driven by StD*n−1*. To formally test this hypothesis, we turned to Bayesian repeated-measures ANOVA. Table 1a summarizes the comparisons between all competing models. The largest Bayes factor corresponds to the model including both main effects but not the interaction (BF₁₀ = 3.432 × 10²⁹), which outperforms the model that also includes the StD*n−1* × Eccentricity interaction by a factor of BF_main/full = 17.645—strong evidence (Wagenmakers et al., 2017) against its inclusion and in favor of the conclusion that while there is an overall difference in reports, there is no difference in serial dependence across eccentricity.

*n−1* (BF₁₀ = 2.073), while the worst model also includes the hemifield and the StD*n−1* × Hemifield interaction (BF₁₀ = 0.120). This indicates moderate evidence against the full model (including the interaction) compared to the null, and strong evidence against it when compared to the most explanatory model—that is, the one with StD*n−1* only (BF_full/StDn−1 = 0.058). These results support the hypothesis that serial dependence is unaffected by the spatial location of consecutive stimuli. To confirm the absence of tuning by spatial proximity, we further assessed the strength of serial dependence separately for trials with repeated versus opposite hemifield location with respect to the previous stimulus. Results for these analyses are presented in the Supplementary Materials, section 1 (see also Supplementary Figure S1). While the data of Experiment 1 suggest a nonsignificant trend toward a stronger serial-dependence effect for same presentation locations, these results are not confirmed by the data of the other experiments; for Experiment 3, the trend goes in the opposite direction.

*n−1* and mean difference). As shown in Table 1c, the best model included only StD*n−1* (BF₁₀ = 3.210 × 10⁵), whereas the model including both main effects and their interaction was the second worst (after the one with mean difference only), with BF₁₀ = 0.281. The Bayes factor for inclusion of the interaction term indicated extreme evidence against it (BF_inclusion = 3.491 × 10⁻⁶); this was also the case if the comparison was made between the full model and the model lacking only the interaction (BF_main/full = 1789.55). This lack of association between mean similarity and serial dependence in variance was further confirmed in a different experiment (detailed in Supplementary Materials, section 2) that used a limited range of mean trajectories, allowing for only four between-trials differences (0°, 35°, 55°, and 90°).

as a dependent variable and StD*n−t* (*t* = 1, …, 10) as an independent predictor, with random effects grouped by participant. We chose a uniform prior distribution over the real numbers for the fixed-effects coefficient and for the standard deviations of the by-subject varying intercepts and slopes, and an LKJ prior with shape parameter *η* = 2.0 for the random-effects correlation matrices. Unless stated otherwise, analogous priors were established for the other Bayesian LMMs reported in this article. Fixed-effects coefficient estimates were largely insensitive to prior selection, as can be seen in the example presented in the Supplementary Materials (section 3, Supplementary Figure S2).

*n* − 10) and current report for all trials, as well as per eccentricity. The LMM fixed-effects coefficient estimate for the effect of StD*n−t* on zRE*n* represents the linear slope of the relationship between the StD presented in Trial *n* − *t* and the normalized response error provided in the current trial: in other words, the variation (in *z* scores) in zRE*n* when StD*n−t* increases by 1°. Therefore, a positive *B* coefficient represents an attractive bias: A larger StD in a past trial drives a larger response in the present one, regardless of the current stimulus. Conversely, a negative *B* coefficient represents a repulsive bias.
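A simplified stand-in for this analysis can illustrate what the *B* coefficient measures: per-participant ordinary least squares instead of the Bayesian mixed model, on simulated sessions with a built-in attractive lag-1 pull (all numbers below are simulated, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
LEVELS = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 60.0])
TRUE_B = 0.0034                      # simulated attractive pull per degree

def simulate(n_trials=480):
    """One participant: zRE_n depends weakly on the previous trial's StD."""
    std = rng.choice(LEVELS, size=n_trials)
    prev = np.roll(std, 1)           # prev[i] = std[i - 1]
    zre = TRUE_B * (prev - std.mean()) + rng.normal(0.0, 0.1, n_trials)
    return std, zre

def lag_slope(std, zre, t):
    """OLS slope of zRE_n on StD_{n-t}: a stand-in for the LMM fixed effect B."""
    x, y = std[:-t], zre[t:]
    x = x - x.mean()
    return x @ (y - y.mean()) / (x @ x)

sessions = [simulate() for _ in range(26)]
B1 = np.mean([lag_slope(s, z, 1) for s, z in sessions])  # recovers ~TRUE_B
B2 = np.mean([lag_slope(s, z, 2) for s, z in sessions])  # ~0: no lag-2 effect simulated
```

A positive slope means past-trial StD pulls the current report upward, exactly the attractive-bias reading given above; the real analysis estimates this slope jointly across participants with random effects rather than averaging per-participant fits.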

The *B* coefficient estimates for the effect of StD*n−1* and StD*n−2* on zRE*n* are positive, indicating an attractive bias. For StD*n−1* (all trials pooled), *B* = 0.0034 [0.0017, 0.0051], suggesting that regardless of the value of StD*n*, participants' judgments of visual variance increased by a magnitude of 0.0034 (*z* score) per 1° increase in previous-trial StD (StD*n−1*). The effect of StD*n−2* is weaker but still present: *B* = 0.0014 [0.0003, 0.0026]. To make clear the size of these effects, we can consider absolute responses as the outcome variable (adding the current StD*n* and its interaction with StD*n−1* to the models). Here, the increase is 0.0586 (0.0272–0.0892) units per unit of StD*n−1*, or an attractive effect of 5.9% toward the previous stimulus, whereas for StD*n−2* the effect size is 0.0242 (0.0006–0.0483), or 2.4%.

*n*) are attracted by a small but meaningful amount toward the variance presented in the previous trial (*n* − 1) and, to a lesser extent, the trial before that (*n* − 2). Note that since the initial position of the response bar is randomized on each trial, simple motor routines involved in response execution cannot explain this serial dependence.

*n* − 1 and *n* − 2) to negative *B* coefficient values is observed for less recent presentations, indicative of a negative (i.e., repulsive, anti-Bayesian) bias: Current responses were *less* similar to the StD presented in those trials, in a manner akin to sensory-adaptation aftereffects (Kohn, 2007; Payzan-LeNestour et al., 2016). This effect started at Trial *n* − 4, peaked at Trials *n* − 7 to *n* − 9 (StD*n−8*: *B* = −0.0021 [−0.0032, −0.0010]), and started to decline afterward. Similar effect sizes and timescales are observed for foveal and peripheral presentations (see Figure 2C).

*n* − 9 but persists to some extent until Trial *n* − 20.

*n+1*) and to shuffled data (see Supplementary Materials, section 5, Supplementary Figure S4). These analyses confirm that only in the true trial history is there evidence for the obtained negative and positive aftereffects, supporting the conclusion that these effects are not simply due to statistical artifacts.
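The logic of the shuffle control can be illustrated on simulated data: destroying the temporal order should abolish the lag-1 slope. This is a sketch with made-up numbers, not the actual control analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a long session with a built-in attractive lag-1 bias
n = 5000
std = rng.choice([5.0, 10.0, 20.0, 30.0, 40.0, 60.0], size=n)
zre = 0.003 * (np.roll(std, 1) - std.mean()) + rng.normal(0.0, 0.05, n)

def lag1_slope(s, z):
    """OLS slope of zRE_n on StD_{n-1}."""
    x = s[:-1] - s[:-1].mean()
    y = z[1:] - z[1:].mean()
    return x @ y / (x @ x)

b_true = lag1_slope(std, zre)                 # recovers the built-in bias
perm = rng.permutation(n)                     # shuffle trial order
b_shuffled = lag1_slope(std[perm], zre[perm]) # should land near zero
```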

*n* − 1 and *n* − 2) trials and a repulsive, negative bias which operates on a longer timescale.

) as a function of the previous trial's StD (StD*n−1*) and type—that is, whether *n* − 1 was a response or a no-response trial. The ascending and roughly parallel plots for each Trial *n* − 1 type suggest that serial dependence in relation to StD*n−1* was similar in magnitude and sign (i.e., an attractive effect) regardless of whether Trial *n* − 1 was a response or a no-response trial. To formally test this observation, we conducted a Bayesian repeated-measures ANOVA on the effects of StD*n−1* and Trial *n* − 1 type (as within-subject factors) on zRE*n*. A comparison of all possible models based on the results of this analysis is shown in Table 2A. The best model includes only StD*n−1* (BF₁₀ = 2.386 × 10⁶). There was strong evidence against the inclusion of the StD*n−1* × Trial *n* − 1 type interaction: BF_inclusion = 0.051. In a direct comparison between the main-effects model and the full model, the ratio was BF_main/full = 10.75. This lack of interaction confirmed that the effect of StD*n−1* on current reports was independent of response execution.

*n*, with StD*n−t* (*t* = 1, …, 10) as the putative predictor, split by Trial *n* − *t* type and modeled separately. A similar pattern in terms of effect size and direction can be seen regardless of whether previous trials required a response or not: an attractive bias in relation to the latest two trials (weaker for *n* − 2), a roughly zero effect of Trial *n* − 3, and a reversal toward a negative effect peaking around Trials *n* − 5 to *n* − 9, with a similar magnitude and timescale as for Experiment 1.

*n*, and a Trial *n* − *t* whose serial effect is considered) could affect the degree to which the effect of further trials carried through. For simplicity, we considered only the case of serial dependence related to Trial *n* − 2 (for the sake of homogeneity, we limited the analysis to those response trials wherein Trial *n* − 2 was also a response trial) and classified the data set according to whether the intermediate trial (*n* − 1) was a response or a no-response trial. We ran a Bayesian repeated-measures ANOVA on the effects of StD*n−2* and Trial *n* − 1 type (as within-subject factors) on zRE*n*. The best model contained only StD*n−2* (BF₁₀ = 30.045), outperforming the full model (two factors and interaction) by a factor of 12.87. However, when the comparison was made between the full model and the equivalent model stripped of the effect of interest (i.e., the StD*n−2* × Trial *n* − 1 type interaction), the latter outperformed the former by a factor of only BF_main/full = 1.98. Overall, the Bayes factor for inclusion of the interaction term indicated moderate evidence against it (BF_inclusion = 0.261), suggesting that the attractive bias related to previous trials is neither disrupted nor boosted by the participant providing a response on the intermediate trials.

) as a function of StD*n−1* and Trial *n* − 1 type—that is, whether it required a decision about variance (RAN) or mean (DIR). Only when successive decisions were both about variance do we see an ascending slope in relation to increasing StD*n−1*, suggesting that the attractive bias associated with StD*n−1* was exerted only if a decision on that dimension had been made.

*n−1* and Trial *n* − 1 type (RAN/DIR) as within-subject factors. The most explanatory was the full model including both main effects and their interaction (BF₁₀ = 48.459), although the evidence in its favor compared to the model with only the main effects was anecdotal (BF_full/main = 2.026). However, evidence in favor of the interaction term was larger when taking into consideration all possible models: BF_inclusion = 5.371, which is moderate evidence. Thus, the results point to serial dependence by StD*n−1* depending on which dimension participants had to judge in the previous trial.

*t* test: BF₁₀ = 29.63). We therefore wondered whether time could be confounding the interaction between StD*n−1* and Trial *n* − 1 response type, since it has been shown to influence serial dependence in previous studies (Bliss, Sun, & D'Esposito, 2017; Fritsche et al., 2017; Kanai & Verstraten, 2005). To test this possibility we defined time*n−1,n* as the interval between consecutive stimulus onsets, binned into two levels, either below or above the participant's median. This variable was added as a third within-subject factor to the Bayesian repeated-measures ANOVA described in the previous paragraph. We sought to directly compare two explanatory hypotheses for the cause of the observed difference in serial dependence by StD*n−1* when *n* − 1 was a RAN compared to a DIR trial: Trial *n* − 1 type or interstimulus time. Thus, we compared the explanatory power of a model with StD*n−1*, Trial *n* − 1 type, and their interaction against a model with StD*n−1*, time*n−1,n*, and their interaction. The former outperformed the latter by a factor of 105.37, indicating extreme evidence in its favor. Overall, analysis of each separate effect indicated extreme evidence against inclusion of the StD*n−1* × time*n−1,n* interaction (BF_inclusion = 6.668 × 10⁻⁴). This indicated that the difference between serial dependence driven by RAN compared to DIR trials was better explained by the trial type itself rather than by the intertrial time. There was no support for an independent contribution of time to the observed difference between RAN and DIR trials.

*n−t* (*t* = 1, …, 10) and zRE*n*, after splitting the data set according to the trial type at each position; thus, the influence of RAN and DIR trials is modeled separately by 20 Bayesian LMMs. As expected from the previous analysis, the positive effect related to StD*n−1* is present only when those trials required participants to report variance; this is also the case for StD*n−2*. As for the negative effect appearing at longer timescales, it is clearly present in RAN trials, while for DIR trials, although the credible intervals for the coefficient contain zero at all trial positions (likely due to the smaller number of DIR trials), the negative effect seems to appear as early as Trial *n* − 1 (*B* = −0.0021 [−0.0051, 0.0009]), peak at Trial *n* − 5 (*B* = −0.0023 [−0.0052, 0.0007]), and decrease afterward. The appearance of a negative serial dependence regardless of the task suggests that it may be sensory in origin—an adaptation aftereffect.

*n* − 1, we should also ask why there is no such effect at *n* − 3. Thus, having established that positive serial dependence arises from feature-specific decision making, we investigated the inverse question: What is the contribution of feature-specific decision making to the fading of positive serial dependence for trials located further away in history? Is this decline affected differently by subsequent decisions made on the same, compared to a different, feature dimension? As in Experiment 2A, we considered all those RAN trials for which Trial *n* − 2 was also of type RAN, and examined the association between StD*n−2* and the current report in relation to the intermediate trial's (*n* − 1) task. An explanatory role for the StD*n−2* × Trial *n* − 1 response type interaction would indicate that the intermediate trial type influenced serial dependence related to *n* − 2. In a Bayesian two-factor repeated-measures ANOVA, the best model included only Trial *n* − 1 response type (BF₁₀ = 41.799), suggesting that there was no interaction with serial dependence related to StD*n−2*.

*C*) plotted by current-stimulus StD (StD*n*) and eccentricity. For both foveal and peripheral trials, a trend toward decreasing *C* for larger StD*n* is observed, except for the maximal StD (60°). For each StD value, confidence scores are lower in the periphery. To test these observations, we conducted a Bayesian repeated-measures ANOVA on the effects of StD*n* and eccentricity (as within-subject factors) on *C*. The best model was the one including both main effects only (BF₁₀ = 6.657 × 10²⁶), outperforming the full model with the StD*n* × Eccentricity interaction by a factor of BF_main/full = 9.615. This indicates that despite the overall lower confidence scores in peripheral blocks, the relationship between stimulus levels and confidence is the same regardless of eccentricity.

= |StD*n* − R*n*|. In a Bayesian LMM with *C* as the dependent variable and E*n*, StD*n*, and their interaction as independent variables, *C* reports are inversely associated with error size (*B* = −0.0083, 95% credible interval [−0.0103, −0.0062]) and StD*n* (*B* = −0.0056 [−0.0071, −0.0040]) and positively associated with the interaction between the two (*B* = 0.0003 [0.0002, 0.0003]). The inverse association between error size and *C* suggests that participants' reports of confidence are, at least in part, grounded in task accuracy. Furthermore, the positive sign of the coefficient estimate for the E*n* × StD*n* interaction suggests that confidence tracks relative rather than absolute error: The inverse association between error size (defined as an absolute value) and confidence is weighted down for large StD values. Considering both error size and eccentricity, the negative association with error size remains (error: *B* = −0.0078 [−0.0102, −0.0055]), whereas foveal presentations are associated with higher confidence reports independent of task accuracy (eccentricity: *B* = 0.0510 [0.0080, 0.0908]). However, the interaction term shows no evidence of a different evaluation of increases in error size at low compared to high eccentricities (Error × Eccentricity: *B* = −0.0013 [−0.0040, 0.0016]).

*B* = −0.0101 (95% credible interval [−0.0160, −0.0045]). When we add eccentricity to this model, the main effect remains (*B* = −0.0105 [−0.0162, −0.0050]), whereas the interaction term (*B* = −0.0003 [−0.0067, 0.0062]) suggests that the relation between response dispersion and confidence is similar in fovea and periphery. In summary, our results indicate that confidence is a measure of response precision, and to the extent that the latter can be considered a proxy for perceptual precision, they are in agreement with Bayesian accounts of metacognition (Meyniel et al., 2015).

C*n−1*), we find that the coefficient for the latter is *B* = 0.1874 [0.1445, 0.2307], with *R*² = 0.3188. Importantly, if we add the error size of the previous trial (E*n−1*) to the model, as well as the E*n−1* × C*n−1* interaction, the coefficient estimate for C*n−1* keeps a similar (even larger) value: *B* = 0.2197 [0.1698, 0.2720]. This is also the case when StD*n−1* is included in the model, suggesting that the serial dependence in confidence scores is due not only to accuracy or attention fluctuating at timescales of several trials, nor to the direct influence of the StD in the previous stimulus, but rather may be an expression of response inertia or "confidence leak" as described by Rahnev, Koizumi, McCurdy, D'Esposito, and Lau (2015).

C*n*) would decrease any attractive pull toward previous history (with respect to variance judgments), whereas confidence in past trials (C*n−t*) would have the opposite effect. We further reasoned that such an effect of confidence in past trials would apply mostly to very recent trials, whose information represents a more important contribution when priors are iteratively updated. Indeed, this second hypothesis is in agreement with our observation of a positive bias in variance judgments exerted by only the most recent trials (see Figure 2C for an example).

*n* as a function of StD*n−1*, plotted separately by confidence in the current (4b) and previous (4c) trial. Confidence scores have been binned into tertiles on a per-participant basis. In Figure 4B, all three plots present an ascending, roughly parallel slope: It appears that serial dependence exerted by Trial *n* − 1 takes place independently of the confidence placed in the current judgment, contrary to our initial hypothesis. However, when we consider the influence of confidence in the previous response, we do see a striking interaction, in line with what would be expected within a Bayesian framework: Low-confidence *n* − 1 judgments do not exert any positive serial dependence—quite the opposite, the plot has a slightly descending slope, pointing toward a negative bias in relation to StD*n−1*. This slope is mildly ascending for medium confidence and clearly positive only for high-confidence past decisions.

*n−1* and *C* (confidence score in the current trial, binned into tertiles) on zRE*n*. Results are presented in Table 3a. The best model contains both main effects but not the interaction (BF₁₀ = 349.668), outperforming the model with the interaction term by a factor of BF_main/full = 93.544. This provides very strong evidence against the inclusion of the interaction term and indicates that confidence in the current judgment does not modulate serial dependence from the previous trial.

*n−1* and C*n−1* as within-subject factors. Table 3b presents the results of this analysis. The evidence favors the null model by a large margin (31.25 times more explanatory than the second best, which includes only C*n−1*). Nevertheless, when we consider the term of interest for our hypothesis, namely the StD*n−1* × C*n−1* interaction, there is strong evidence in favor of its inclusion compared to the model stripped of that effect (including only the two main factors): BF_full/main = 26.989. Still, because neither competing model was superior to the null model, this result must be taken with caution.

*n* − 10, influenced serial dependence of variance judgments. We split the data set according to the confidence scores reported at each past position (C*n−t*, discretized into tertiles within each participant's scores) and ran three Bayesian LMMs per position (30 models in total) for the association between StD*n−t* and zRE*n* at each level of past confidence. Figure 4D presents the *B* coefficient estimates and 95% credible intervals for each Trial *n* − *t* (*t* = 1, …, 10). A marked influence of past confidence on the size and direction of serial dependence is observed, such that when high confidence was reported in very recent trials (*n* − 1, *n* − 2), an attractive pull toward recent StD values is manifest, although this bias fades rapidly, being absent by Trial *n* − 3 and thereafter. Note that trials with the highest confidence (upper tertile) do not exert a clear, unambiguous negative bias at any point of trial history, although some traces seem to be present from Trial *n* − 4 onward. The largest negative bias is driven by low-confidence trials, for which it seems to appear as recently as Trial *n* − 1 (although the credible intervals contain zero), becomes unambiguous at *n* − 2, and peaks at *n* − 4, decreasing afterward—in contrast with the slower buildup of the negative bias seen for past trials with intermediate confidence. Thus, the reversal from positive to negative bias seen in this and the previous experiments seems related to the rapid decay of the positive bias of high-confidence trials. As for the negative effect, it seems to appear as early as whenever such a competing (positive) bias is not manifest, but it fades more slowly. Results were similar when we considered foveal and peripheral blocks separately.
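The per-participant tertile split used for the confidence scores can be sketched as follows (column names and the 0–100 confidence scale are assumptions, not the study's actual coding):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# toy data: three participants, 300 trials each, with a confidence score C
df = pd.DataFrame({
    "participant": np.repeat([1, 2, 3], 300),
    "C": rng.integers(0, 101, size=900),   # assumed 0-100 confidence scale
})

# discretize confidence into low/medium/high tertiles within each participant,
# so the split respects individual differences in confidence usage
df["C_bin"] = df.groupby("participant")["C"].transform(
    lambda x: pd.qcut(x, 3, labels=False)
)
```

Binning within participant (rather than over the pooled scores) is what makes "high confidence" mean high *for that observer*, which is the property the analysis above depends on.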

₁₀ = 22,288, extreme evidence for the alternative hypothesis), presumably related to subjective trial difficulty. Therefore, we sought to rule out the possibility that the effect of past confidence on serial dependence was related only to the difference in response times, and consequently in interstimulus times. For each trial up to *n* − 10, we performed a three-way Bayesian repeated-measures ANOVA for zRE*n* (as dependent variable) with three within-subject factors: StD*n−t*, C*n−t* (in tertiles), and time*n,n−t* (time between the stimulus onsets of Trials *n* − *t* and *n*, binned into two levels with respect to the median). In all cases, the evidence for inclusion of the StD*n−t* × time*n,n−t* interaction was extremely low—that is, the Bayes factor for this specific effect was always below 0.01. This suggests that time was not confounding the reported interaction between confidence and serial dependence.

*t* test: BF₁₀ > 6.690 × 10⁷). As previous work has strongly implicated the time between successive stimuli, or between stimuli and response, as a critical contributor to serial dependence (Bliss et al., 2017; Fritsche et al., 2017; Kanai & Verstraten, 2005), we sought to take advantage of this circumstance to inquire (post hoc) about the factors that drive the decrease, and eventual shift toward negative, of the serial-dependence effect as we move backward in trial history.

StD_{n−t} (*t* = 1, …, 10) in current variance report as found for Experiments 1 and 3. An extension of this comparison for more distant trial positions is presented and discussed in the Supplementary Materials, section 4 (see Supplementary Figure S3). While the positive bias exerted by StD_{n−1} is similar in magnitude in both experiments (*B* = 0.0034 [0.0017, 0.0051] in Experiment 1, *B* = 0.0030 [0.0018, 0.0042] in Experiment 3), such attraction is still present at StD_{n−2} in Experiment 1 (*B* = 0.0014 [0.0003, 0.0026]) but has virtually disappeared in Experiment 3 (*B* = 0.0003 [−0.0009, 0.0015]). Thus, in Experiment 3 the reversal to negative bias occurs as early as Trial *n* − 3 and peaks at *n* − 5 (*B* = −0.0023 [−0.0036, −0.0010]), with an effect size similar to the maximum negative bias in Experiment 1, which is seen at *n* − 8 (*B* = −0.0021 [−0.0032, −0.0010]). As shown in the Supplementary Materials, negative serial dependences also decline and disappear earlier than in Experiment 1. This earlier buildup of the negative bias could be related to the longer interstimulus intervals in the present experiment: Time might, hypothetically, drive the reversal to repulsive serial effects and their posterior fading. Results of Experiment 2B (concerning the effect of DIR trials) and on low-confidence trials in Experiment 3 suggest that the negative bias appears as early as whenever the conditions for the emergence of a positive bias are not met. If, hypothetically, positive serial dependence declines with time, the negative effect could become evident at an earlier trial, in keeping with the longer interstimulus times observed in Experiment 3. Another explanation for the earlier shift toward negative bias in Experiment 3 would be a disruption of the positive bias caused by the additional confidence report—especially if such a Bayesian-like pull is caused by decision processes or depends upon memory to some extent.
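A lag-by-lag bias profile of this kind can be approximated, per participant, by regressing the current report on the stimulus variance presented *t* trials back, one lag at a time. The following is a minimal pure-Python sketch with hypothetical variable names, using ordinary least squares in place of the Bayesian LMMs actually fitted here:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def lagged_biases(zre, std, max_lag=10):
    """Map lag t -> slope of zRE_n on StD_{n-t}. Positive values indicate an
    attractive (Bayesian-like) pull toward the past stimulus; negative values
    a repulsive, adaptation-like aftereffect."""
    return {t: slope(std[:-t], zre[t:]) for t in range(1, max_lag + 1)}
```

The sign reversal discussed above corresponds to the lag at which this profile crosses zero from positive to negative.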

*C*-report_{n−t}, indicating whether or not all intermediate trials between *n* and *n* − *t* had a confidence report in addition to a variance report. Note that the content of the reports (i.e., the amount of confidence) did not affect this definition. When participants missed at least one confidence report in the considered historical span of a certain trial, that trial was excluded from the model, in order to make the comparison unambiguous. Subsequently we built 10 Bayesian LMMs for zRE_{n} (as dependent variable) in relation with three variables defined at each considered point of trial history, namely StD_{n−t}, time_{n,n−t}, and *C*-report_{n−t}, and all interactions. The fixed-effects *B* coefficients of the StD_{n−t} × time_{n,n−t} and StD_{n−t} × *C*-report_{n−t} interactions are plotted in Figure 5B, for Trials *n* − 1 to *n* − 10 as predictors of current variance judgment. A negative interaction coefficient would indicate a comparatively less positive (more negative) serial-dependence effect at that position in relation to longer time or the extra report, respectively.

StD_{n−t} × *C*-report_{n−t} at *n* − 5). However, there is a predominance of negative values for both interaction terms within the recent half of the considered span of trial history, up to Trial *n* − 5. Thus, although results are inconclusive regarding the causes of the different patterns of serial dependence in Experiments 1 and 3, the mostly negative StD_{n−t} × time_{n,n−t} and StD_{n−t} × *C*-report_{n−t} interactions suggest that both time and the additional confidence report might promote a less positive (more negative) serial dependence in variance and thus contribute to the observed earlier reversal in the direction of the bias. An interesting possibility would be that the dimension-specific, decision-based positive serial dependence is subject to memory decay as well as a decision-capacity bottleneck. The presented data do not conclusively support a particular interpretation, so future experiments are required to elucidate the relative contributions of time itself and of additional judgments in shaping the effects of trial history.

StD_{n−2} stimulus on the current response (which arises only when a high-confidence judgment about variance was made in Trial *n* − 2) is much weaker, on average, than that of StD_{n−1}. When inquiring into the factors (interposed between StD_{n−2} and the current response) that might explain this decline, we failed to find any difference based on the *type* of decision that was made in the following trial (*n* − 1): In Experiment 2B, the magnitude of the effect of StD_{n−2} (when a variance judgment was made at that point) did not appear to depend upon whether a decision in Trial *n* − 1 was made about the variance or the mean of the stimulus. However, if the *number* of interposed decisions was increased, and an additional decision (about confidence) was required in Trial *n* − 1, the positive effect of Trial *n* − 2 was greatly diminished. This apparent relationship with the quantity but not the quality of subsequent decisions (made after the one that exerts the bias and before the one that is biased) suggests that serial dependence may be limited by an amodal decision-capacity bottleneck. The apparent fading of the effect with time also points to some sort of memory limitation. Note, however, that these considerations arise from post hoc analyses that revealed only suggestive trends; the evidence was not conclusive in any case. The factors contributing to the disruption or fading of positive serial dependence in relation to more remote presentations deserve further research.

*perceptual sensitization*.

*n* − 9 could seem unusual for a sensory aftereffect. However, negative aftereffects in response to subsecond stimuli have been described previously (Fritsche et al., 2017; Kanai & Verstraten, 2005), sometimes lasting for several seconds (Fritsche et al., 2017). Fritsche et al. (2017) have proposed that it is not the stimulus itself but a memory trace that causes the negative aftereffect on orientation. It is likely that the observed relationship between the current trial and a specific trial in history (e.g., *n* − 5) is actually driven by a broader, averaged contextual representation rather than by the individual stimuli several trials removed from the present. In our case, as we dealt with a more abstract dimension, this high-level aftereffect might not be considered strictly sensory in the first place (Storrs, 2015). As stated previously, some aspects of this negative bias could point to a decisional component: its independence of retinal location, its predominance in low-confidence trials, and its seemingly smaller size when a different decision was required in the past (DIR trials in Experiment 2B; note, however, that the interaction with trial type was not significant). In any case, the line between perceptual and postperceptual aftereffects may be blurred with respect to statistical properties (Payzan-LeNestour et al., 2016; Storrs, 2015).

*i-Perception*, 8(4), 1–18, https://doi.org/10.1177/2041669517718697.

*Psychological Science*, 21(4), 560–567, https://doi.org/10.1177/0956797610363543.

*Trends in Cognitive Sciences*, 15(3), 122–131, https://doi.org/10.1016/j.tics.2011.01.003.

*Proceedings of the National Academy of Sciences, USA*, 106(18), 7345–7350, https://doi.org/10.1073/pnas.0808981106.

*Psychological Science*, 12(2), 157–162.

*Journal of Vision*, 9(12):13, 1–8, https://doi.org/10.1167/9.12.13.

*Scientific Reports*, 7(1), 14739, https://doi.org/10.1038/s41598-017-15199-7.

*Journal of Vision*, 15(15):6, 1–24, https://doi.org/10.1167/15.15.6.

*Psychological Science*, 25(7), 1394–1403, https://doi.org/10.1177/0956797614532656.

*Vision Research*, 11, 833–840.

*Vision Research*, 43(4), 393–404.

*Vision Research*, 45, 891–900.

*Proceedings of the National Academy of Sciences, USA*, 111(21), 7867–7872, https://doi.org/10.1073/pnas.1402785111.

*Journal of Vision*, 17(14):6, 1–9, https://doi.org/10.1167/17.14.6.

*Acta Psychologica*, 138(2), 289–301.

*Visual Cognition*, 20(2), 211–231, https://doi.org/10.1080/13506285.2012.657261.

*Neuron*, 66(6), 937–948, https://doi.org/10.1016/j.neuron.2010.05.018.

*PLoS One*, 10(3), e0120870, https://doi.org/10.1371/journal.pone.0120870.

*Proceedings of the National Academy of Sciences, USA*, 108(32), 13341–13346, https://doi.org/10.1073/pnas.1104517108.

*Nature*, 412(6849), 787–792.

*Nature Neuroscience*, 17, 738–743, https://doi.org/10.1038/nn.3689.

*Perception & Psychophysics*, 70(3), 456–464.

*Nature Neuroscience*, 14(9), 1195–1201, https://doi.org/10.1038/nn.2889.

*Current Biology*, 27, 1–6, https://doi.org/10.1016/j.cub.2017.01.006.

*Annual Review of Psychology*, 59, 167–192, https://doi.org/10.1146/annurev.psych.58.110405.085632.

*Journal of Vision*, 15(4):16, 1–11, https://doi.org/10.1167/15.4.16.

*Journal of Experimental Psychology: Human Perception and Performance*, 35(3), 718–734, https://doi.org/10.1037/a0013899.

*Perception*, 43(7), 663–676, https://doi.org/10.1068/p7719.

*The Journal of Neuroscience*, 36(23), 6186–6192.

*Vision Research*, 45, 3109–3116, https://doi.org/10.1016/j.visres.2005.05.014.

*Trends in Cognitive Sciences*, 21(7), 493–497.

*Journal of Neurophysiology*, 97, 3155–3164, https://doi.org/10.1152/jn.00086.2007.

*Current Biology*, 24(21), 2569–2574, https://doi.org/10.1016/j.cub.2014.09.025.

*Scientific Reports*, 6: 28563, https://doi.org/10.1038/srep28563.

*Scientific Reports*, 7(1), 1971, https://doi.org/10.1038/s41598-017-02201-5.

*The motion aftereffect: A modern perspective*. Cambridge, MA: MIT Press.

*Journal of Vision*, 15(4):6, 1–18, https://doi.org/10.1167/15.4.6.

*Journal of the Optical Society of America A: Optics, Image Science and Vision*, 31(4), A93–A102, https://doi.org/10.1364/JOSAA.31.000A93.

*Neuron*, 88, 78–92, https://doi.org/10.1016/j.neuron.2015.09.039.

*Proceedings of the National Academy of Sciences, USA*, 111(21), 7873–7878, https://doi.org/10.1073/pnas.1308674111.

*Journal of Vision*, 8(11):9, 1–8, https://doi.org/10.1167/8.11.9.

*Proceedings of the Royal Society of London, Series B: Biological Sciences*, 279, 2754–2760, https://doi.org/10.1098/rspb.2011.2645.

*Current Biology*, 26, 1–5, https://doi.org/10.1016/j.cub.2016.04.023.

*The Journal of Neuroscience*, 31(47), 17220–17229.

*Psychological Science*, 26(11), 1664–1680, https://doi.org/10.1177/0956797615595037.

*Proceedings of the National Academy of Sciences, USA*, 114(2), 412–417, https://doi.org/10.1073/pnas.1610706114.

*PsyArxiv*, https://doi.org/10.17605/OSF.IO/6BKDA.

*Proceedings of the Royal Society B*, 282, 20142833, https://doi.org/10.1098/rspb.2014.2833.

*Journal of Vision*, 12(4):14, 1–17, https://doi.org/10.1167/12.4.14.

*Journal of Vision*, 14(13):13, 1–13, https://doi.org/10.1167/14.13.13.

*Journal of Experimental Psychology: Human Perception and Performance*, 42(5), 671–682, https://doi.org/10.1037/xhp0000179.

*Frontiers in Psychology*, 6: 157, 151–154, https://doi.org/10.3389/fpsyg.2015.00157.

*Nature Reviews Neuroscience*, 15, 745–756, https://doi.org/10.1038/nrn3838.

*Trends in Cognitive Sciences*, 13(9), 403–409, https://doi.org/10.1016/j.tics.2009.06.003.

*Psychological Science*, 25(10), 1903–1913, https://doi.org/10.1177/0956797614544510.

*Scientific Reports*, 6: 32239, https://doi.org/10.1038/srep32239.

*Psychonomic Bulletin & Review*, 25(1), 58–76, https://doi.org/10.3758/s13423-017-1323-7.

*Neuropsychologia*, 48(10), 3110–3120, https://doi.org/10.1016/j.neuropsychologia.2010.06.023.

*Journal of Vision*, 15(4):11, 1–13, https://doi.org/10.1167/15.4.11.

*Journal of Vision*, 15(12):1219, https://doi.org/10.1167/15.12.1219.

*Journal of Vision*, 15(12):770, https://doi.org/10.1167/15.12.770.

*Consciousness & Cognition*, 27, 246–253, https://doi.org/10.1016/j.concog.2014.05.012.