**Adding noise to sensory signals generally decreases human performance. However, noise can also improve performance, through a process called stochastic resonance (SR). This paradoxical effect may be exploited in psychophysical experiments to provide insights into how the sensory system processes noise. Here, I develop an extension of signal detection theory to model stochastic resonance. I show that the inclusion of lapse rate allows for the occurrence of stochastic resonance in terms of the performance metric d′ when the criterion is set suboptimally. High levels of lapse rate, however, cause stochastic resonance to disappear. It is also shown that noise generated in the brain (i.e., internal noise) may obscure any effect of stochastic resonance in experimental settings. I further relate the model to a standard equivalent noise model, the linear amplifier model, and show that lapse rate scales the threshold-versus-noise (TvN) curve, similar to the efficiency parameter in equivalent noise (EN) models. Therefore, lapse rate provides a psychophysical explanation for reduced efficiency in EN paradigms. Furthermore, I note that ignoring lapse rate may lead to an overestimation of internal noise in EN paradigms. Overall, describing stochastic resonance in terms of signal detection theory, with the inclusion of lapse rate, may provide valuable new insights into how human performance depends on internal and external noise. It may have applications in improving human performance in situations where the criterion is set suboptimally, and it may provide additional insight into internal noise hypotheses related to autism spectrum disorder.**

*d*′ is a signal-to-noise ratio, so increases in noise decrease *d*′ (Green & Swets, 1974). While noise is generally expected to degrade performance, noise can actually improve performance. For example, noise can push a subthreshold signal (one that would normally lead to chance performance in behavioral tasks) above threshold, and thereby lead to above-chance performance, an effect called stochastic resonance (SR; McDonnell & Abbott, 2009). Note that SR here refers to any occasion where noise increases performance, and not just cases with periodic input (McDonnell & Abbott, 2009). SR causes optimal performance (i.e., highest detectability) to be reached at nonzero levels of noise. This has potentially important implications, because it suggests that inducing SR by adding noise can be used to boost the performance of humans and machines. Because of this potentially beneficial effect of noise, it is useful to study SR in more detail, specifically in relation to human performance.

*σ* is assumed to be a combination of internal noise *σ*_{int} and externally added noise *σ*_{ext}, whose variances add:

*σ*^{2} = *σ*^{2}_{int} + *σ*^{2}_{ext}.

In the model of Gong et al. (2002), detection follows standard signal detection theory:

*H* = Φ((*s* – *c*)/*σ*), *F* = Φ((*n* – *c*)/*σ*), and *d*′ = Φ^{–1}(*H*) – Φ^{–1}(*F*),

where Φ is the cumulative normal distribution function, *H* is the hit rate, *F* is the false alarm rate, *c* is a criterion (i.e., threshold), *s* is the signal strength, *n* is the mean noise strength, and the signal and noise responses are normally distributed with mean (*s* or *n*) and variance (*σ*^{2}). The variable *σ* represents the noise in the system. This model produced SR for accuracy (*p*) when *c* > *s*.

At small *σ*, however, *d*′, another measure of (human) performance, increased to infinity in their model. Although this may make sense when *d*′ is interpreted as a signal-to-noise ratio, it does not when *d*′ is considered a metric of human performance. As a measure of human performance, *d*′ should drop to chance level when noise is decreased, just as accuracy does. This is so because *s* < *c*, so that the signal is smaller than the threshold. Therefore, I extended the model presented by Gong et al. (2002) with another human characteristic, namely the lapse rate *λ*.

With lapse rate included, the hit and false alarm rates become

*H* = *λ* + (1 – 2*λ*)Φ((*s* – *c*)/*σ*) and *F* = *λ* + (1 – 2*λ*)Φ((*n* – *c*)/*σ*),

so that

*d*′ = Φ^{–1}(*H*) – Φ^{–1}(*F*),   (Equation 6)

where *d*′ is an unbiased measure of performance, *λ* is the lapse rate (0 ≤ *λ* ≤ 1/2), and Φ^{–1} is the inverse of the cumulative normal distribution function. The variable *σ* again represents the noise in the system. For our purposes, it is important to realize that this noise can have an external (*σ*_{ext}) or internal (brain-generated; *σ*_{int}) origin, where *σ*_{ext} is under the control of the experimenter, while *σ*_{int} is an internal property of the system under study, that is, the human participant.

*λ* represents the proportion of guesses that a participant is required to make, due to not paying attention to the stimulus. In practice, it will also include other factors, such as accidental response errors (accidentally pressing a response key that does not match the percept). When 0 < *s* < *c* (i.e., subthreshold *s*) and *σ* is small, *H* and *F* approach *λ*, and *d*′ approaches zero, as required for a measure of performance. Importantly, without the inclusion of *λ*, both *H* and *F* approach zero as *σ* decreases, but *F* does so much faster than *H*, because *n* < *s*, leading *d*′ to increase drastically (see Equation 6).
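This limiting behavior is easy to check numerically. Below is a minimal sketch of the lapse-rate model (hit rate, false alarm rate, and *d*′ as defined above), using Python's `statistics.NormalDist` for Φ and Φ^{–1}; the specific parameter values are illustrative assumptions, not values taken from the experiments.

```python
from statistics import NormalDist

PHI = NormalDist()  # standard normal: PHI.cdf is Phi, PHI.inv_cdf is Phi^-1

def hit_false_rates(s, n, c, sigma, lapse):
    """Hit and false-alarm rates with lapse rate folded in: on a lapse the
    observer guesses, pulling both rates into the range [lapse, 1 - lapse]."""
    h = lapse + (1 - 2 * lapse) * PHI.cdf((s - c) / sigma)
    f = lapse + (1 - 2 * lapse) * PHI.cdf((n - c) / sigma)
    return h, f

def d_prime(s, n, c, sigma, lapse):
    """d' = Phi^-1(H) - Phi^-1(F) (Equation 6)."""
    h, f = hit_false_rates(s, n, c, sigma, lapse)
    return PHI.inv_cdf(h) - PHI.inv_cdf(f)

# Subthreshold signal (s < c): d' vanishes at low noise, rises above chance
# at intermediate noise (stochastic resonance), and falls again at high noise.
for sigma in (0.05, 0.5, 5.0):
    print(sigma, d_prime(s=0.5, n=0.0, c=1.0, sigma=sigma, lapse=0.01))
```

With *λ* > 0, both rates stay strictly inside (0, 1), so Φ^{–1} never diverges; setting `lapse=0` in this sketch reproduces the divergence of the model without lapses.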

When *λ* > 0, the model can produce SR for *d*′, as shown in Figure 1, in which the dependence of *d*′ on *σ* is shown for various levels of signal strength *s* (indicated by the boxed numbers), with *λ* = 0.01 and *c* = 1. When signal strength is below the criterion *c* (i.e., *s* < 1), which without SR would lead to chance performance, performance rises above chance (i.e., *d*′ > 0) for intermediate levels of *σ*. This is a form of stochastic resonance. At small *σ*, *d*′ does not increase to infinity (as was found by Gong et al. [2002]), but instead drops to chance: subthreshold signals (*s* < *c*) lead to chance performance, and suprathreshold performance is limited by the attentional lapse rate.

Threshold performance was defined as a *d*′ of 1. By solving Equation 6 for *s*, one can calculate the threshold signal strength *s*_{th}, and thus TvN curves. Using Mathematica® 10, it was found that

*s*^{SR}_{th} = *c* + *σ*Φ^{–1}[(Φ(*d*′ + Φ^{–1}(*λ* + (1 – 2*λ*)Φ((*n* – *c*)/*σ*))) – *λ*)/(1 – 2*λ*)],   (Equation 8a)

where Φ and Φ^{–1} can be expressed in terms of erf and erf^{–1}, the error function and inverse error function, respectively. Note that we will use the superscript SR to refer to *s*_{th} specifically in Equation 8a. When referring to the threshold signal strength more generally, no superscript is used. In the remainder of this paper, we will assume *n* = 0.

When *λ* = 0, this equation reduces to a straight line, *s*_{th} = *d*′*σ*. When *λ* > 0, SR is possible. Equation 8a is plotted for various parameter combinations in Figure 2. An (arbitrary) baseline curve (solid blue) describes a characteristic threshold-versus-noise (TvN) function for when SR occurs: detection thresholds are lowest at intermediate noise levels *σ*. In the following sections, the influence of various factors on the shape of this curve, and on the appearance of stochastic resonance, is investigated. Similar-looking curves are also observed in various masking paradigms (Solomon, 2009). Those curves, however, generally plot increment versus standard values (i.e., Δ*I* vs. *I*), not threshold versus noise, and it is not currently known whether these two curves reflect a similar underlying mechanism, although some evidence suggests that they do (Goris et al., 2008).

First, the influence of lapse rate *λ* is investigated. With an increase in *λ* (green line; *λ* = 0.07) relative to the reference curve, SR decreases in magnitude, and the point of maximal SR moves to higher noise levels. The maximum level of *λ* before SR is lost can be calculated by setting Equation 8b to zero. Solving for *λ* in the limit of zero noise gives the maximum value of *λ* (i.e., *λ*_{max}):

*λ*_{max} = Φ(–*d*′).

This equation describes the dependence of *λ*_{max} on *d*′. At *d*′ = 1, a typical value of *d*′ in psychophysical experiments, *λ*_{max} ≈ 0.16. This *λ*_{max} is high compared to typical lapse rates of around 0.06 or less (Wichmann & Hill, 2001), and suggests that *λ* is not a limiting factor on the occurrence of SR in many experiments.
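The maximum lapse rates quoted in this paper (≈0.16 at *d*′ = 1, and ≈0.023 and ≈0.0013 at *d*′ = 2 and 3, discussed later) coincide with Φ(–*d*′). A quick numerical check of that closed form (the form itself is an inference consistent with these quoted values, not a fitted result):

```python
from statistics import NormalDist

PHI = NormalDist()

# Maximum lapse rate before SR is lost, at several performance levels d'
lam_max = {d: PHI.cdf(-d) for d in (1, 2, 3)}
print(lam_max)
```

Note how steeply the tolerable lapse rate shrinks as the required performance level rises.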

Note that the effects of *λ* in Figure 2 remain present at large values of *σ*, while the other factors, discussed later, do not have a major effect at large *σ*.

When the criterion is set optimally, it lies midway between *s* and *n*, and SR cannot take place. However, *c* is often set suboptimally (Green & Swets, 1974). The level at which the criterion *c* is set has a strong influence on the vertical position of the TvN curve. With a higher *c*, the curve moves upward, and maximal SR moves to higher noise levels (Figure 2, red line). In a biological system, the setting of this decision boundary *c* is complex, and still not completely understood. The parameter *c* would sensibly be set at a level that is high enough to prevent many false positives, but low enough to prevent too many misses. Thus, where *c* is put determines how liberal or conservative the decision stage is. How could suboptimal settings of *c* arise? One possibility is that when the signal is weak, inherent noise in the decision stage could have a large influence on the setting of *c*, something that is generally ignored in signal detection theory. See, for example, Gravetter and Lockhead (1973) and Torgerson (1958) for models discussing the influence of criterion noise in classification tasks. For example, a "present" response could have the requirement that the decision signal is larger than the *P*th percentile of the response distribution at the decision stage in the absence of input (i.e., purely noise-driven activity in the decision stage, here assumed to be normally distributed with standard deviation *σ*_{d}). Assuming an unbiased (mean = 0) but noisy response originating from the decision stage, the criterion should be set at

*c* = *σ*_{d}Φ^{–1}(*P*),

with *P* expressed as a proportion. This setting of *c* can be interpreted as reflecting noise in a post-sensory decision stage, which contrasts with the setting of *c* in signal detection theory (Green & Swets, 1974; Macmillan & Creelman, 2004). If *σ*_{d} increases, so does *c*, which results in an upward shift in the detection threshold *s*_{th}, as can be observed in Figure 2. Conversely, because SR can only occur when *c* > *s*, systems with low decision noise (or, alternatively, very liberal systems with a low *P*) are unlikely to show SR. The condition for SR is that *σ*_{d}Φ^{–1}(*P*) > *s*.
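Under this percentile rule, the criterion scales linearly with the decision-stage noise. A small sketch (the *σ*_{d} and *P* values are hypothetical, chosen only to illustrate the scaling):

```python
from statistics import NormalDist

PHI = NormalDist()

def criterion(sigma_d, p):
    """Criterion at the p-th quantile (p as a proportion) of zero-mean,
    normally distributed decision-stage noise with SD sigma_d."""
    return sigma_d * PHI.inv_cdf(p)

# More decision noise -> higher criterion -> SR possible for weaker signals
c_low = criterion(0.5, 0.95)
c_high = criterion(1.5, 0.95)
print(c_low, c_high)
```

A conservative system (large *P*) or a noisy decision stage (large *σ*_{d}) pushes *c* above weak signals, creating the subthreshold regime in which SR can occur.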

Recall that *σ* is the total amount of noise in the sensory-perceptual system. One can subdivide *σ* into two independent components: (a) noise that is external to the system, *σ*_{ext}, and, for example, is present in the stimulus; and (b) noise that is internally generated, *σ*_{int}. Their variances add. In the equivalent noise paradigm, such as the linear amplifier model (Lu & Dosher, 2008), this is often rewritten as:

*σ* = √(*σ*^{2}_{int} + *σ*^{2}_{ext}).

Figure 2 (purple line) shows the effect of internal noise *σ*_{int} on detection thresholds. Increases in *σ*_{int} result in a leftward shift of the TvN curve relative to the reference curve. The increase in *σ*_{int} simultaneously results in lower thresholds (better performance) at low *σ*_{ext}, while it increases thresholds at high *σ*_{ext}. The former result is due to *σ*_{int} itself causing SR relative to the condition where *σ*_{int} = 0; in effect, it moves the SR dip to the left. With even higher levels of internal noise, the curve is shifted so far to the left that the upward arm of the curve at low noise levels disappears. Even though *σ*_{int} still increases performance relative to the reference curve in this case, and thus shows SR, it will not be recognized as such, because it does not present itself as a dip in the TvN curve. I come back to this in the Discussion. Increasing *σ*_{int} even further will remove SR completely.

Unlike the fixed criterion *c* in the derivations above, the equivalent noise paradigm assumes that the decision criterion is set optimally; that is, *c* = *s*/2. Inserting this into Equation 6 and deriving the detection threshold as a function of performance level (*d*′) and lapse rate (*λ*), we obtain:

*s*_{th} = 2*σ*Φ^{–1}[(Φ(*d*′/2) – *λ*)/(1 – 2*λ*)].

Setting *λ* = 0 (one of the assumptions often made) results in the standard dependence of the detection threshold on internal and external noise, *s*_{th} = *d*′√(*σ*^{2}_{int} + *σ*^{2}_{ext}) (when *d*′ is set to 1; in fact, setting *λ* = 0 in Equation 8a will result in this same solution, independent of the setting of *c*). This dependence is plotted in Figure 2 (dashed curve). Because *c* < *s* here, no SR is possible in this description of human performance.

The equivalent noise paradigm typically includes an efficiency parameter *η* when fitting the curve to experimental data:

*s*_{th} = (*d*′/*η*)√(*σ*^{2}_{int} + *σ*^{2}_{ext}).   (Equation 16)

Efficiency is typically *η* < 1, which scales the curve up (i.e., higher detection thresholds).

Interestingly, *λ* provides a scaling factor in our model, similar to *η* in equivalent noise paradigms. This relationship makes intuitive sense, as an increased number of lapses decreases efficiency. With the optimal criterion *c* = *s*/2, the threshold derived above is proportional to *σ*, with proportionality factor 2Φ^{–1}[(Φ(*d*′/2) – *λ*)/(1 – 2*λ*)]. Setting this factor equal to *d*′/*η* (see Equation 16) to derive the relationship between *λ* and *η*, one finds that

*η* = *d*′/(2Φ^{–1}[(Φ(*d*′/2) – *λ*)/(1 – 2*λ*)]),

which depends on the performance level *d*′. These results show that lower efficiency can be captured by increased lapse rates.

In sum, *η* can be expressed in terms of *λ*, and thus lapse rate *λ* may explain at least part of the suboptimal efficiency (i.e., *η* < 1) that is often reported in psychophysical experiments.

In an alternative formulation (Equation 21), a pooling parameter *n* > 1 replaces *η*. Equation 21, in essence, describes an important property of the central limit theorem, and *n* quantifies the number of samples that are taken to estimate a mean (e.g., how many individual moving dots are combined to estimate a global pattern motion; Dakin, Mareschal, & Bex, 2005).

Solving for *σ* with *c* = *s*/2 and *λ* = 0, and comparing the result with the efficiency formulation of Equation 16, shows that *n* is equal to *η*^{2}.

We assumed *λ* = 0 here, because otherwise both *η* (or *λ*) and *n* would need to be determined, and as both have the same scaling effect, the system would be underdetermined. However, in experimental settings, unless participants are experienced and extremely motivated, it is unlikely that *λ* = 0, and thus a correct value of the pooling parameter *n* can only be obtained if *λ* is also determined independently. One way of doing that is to run trials without any external noise, and with a high stimulus strength (i.e., a very easy detection task). The proportion of incorrect trials in this condition reflects the lapse rate, as internal noise would not be large enough to influence this performance.
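The easy-trial estimate can be illustrated with a toy simulation (all values hypothetical): on a very easy, noise-free trial the observer is only wrong when lapsing, and a lapse produces a random guess, so the long-run error proportion recovers the lapse rate.

```python
import random

random.seed(0)
TRUE_LAPSE = 0.04   # hypothetical lapse rate to recover
N_TRIALS = 200_000

def easy_trial_correct():
    """Very easy signal-present trial: evidence is far above criterion,
    so errors come only from lapses, which trigger a random guess."""
    if random.random() < 2 * TRUE_LAPSE:   # lapse -> coin-flip guess
        return random.random() < 0.5
    return True                            # otherwise always correct

errors = sum(not easy_trial_correct() for _ in range(N_TRIALS))
lapse_hat = errors / N_TRIALS              # estimate of TRUE_LAPSE
print(lapse_hat)
```

A lapse occurs with probability 2*λ* and is wrong half the time, so the expected error proportion equals *λ*, matching the estimator described above.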

Comparing the equivalent noise model to the high-*σ*_{int} condition, one can observe that the suboptimal positioning of the criterion *c* in the latter case caused the expected increase in detection threshold at low external noise values (left side of the plot). An important consequence is that the TvN curves are quite different for these two models even though the internal noise is the same. Consequently, fitting the equivalent noise function to experimental data with SR will give incorrect (over)estimates of internal noise.

This overestimation can be quantified by equating the SR and EN threshold equations at *σ*_{ext} = 0, taking the noise in the SR equation as the actual internal noise (*σ*_{int}) and that in the EN equation as the estimated internal noise. When *λ* is relatively low, the estimated internal noise is not even monotonically related to the actual internal noise. Therefore, the EN paradigm overestimates low internal noise. The precise amount of overestimation depends on various parameters (e.g., *λ*, *c*, *d*′), on the distribution of external noise values tested in experimental settings, and on how much the dip caused by SR influences the EN model fit.

The model predictions presented here are mostly in terms of *d*′, because data in terms of accuracy have previously been reported for a similar model without *λ* (Gong et al., 2002). A direct comparison between the current model and that of Gong et al. (2002) in terms of *d*′ and accuracy is made in Figure 6. As one can see, the largest difference is in terms of *d*′, not in terms of accuracy. In fact, the level of above-chance accuracy in our model is 1 − 2*λ* times that of the model where *λ* = 0 (Gong et al., 2002). This fixed relationship means that the analyses by Gong et al. (2002), which rested on finding the point where the derivative of accuracy with respect to *σ* is zero, remain valid in the current model.

The average *d*′ in this case is approximately 1. The fits for the SR model and for the equivalent noise model are presented in blue and red, respectively, in Figure 7. These fits clearly show that the SR model captures the SR behavior at low noise levels better than the EN model. Comparing the fits at high noise levels, however, shows that the EN model fits better there. This is because the EN model allows for pooling, which scales down the position of the curve and allows the model to fit the rising part of the data better. The SR model can be extended in a similar way (Figure 7, beige curve; SR′), which then fits the rising arm much better. I did not perform extensive model comparisons, but AIC measures indicated that, overall, the EN model fitted the data better than the SR model, while the SR′ model fitted the data on average equally well on a session-by-session basis.

Thresholds were determined at a *d*′ of 1 for all participants and conditions. These were then fitted with the same three models (EN, SR, and SR′), with the only difference being that the lapse rate values for the SR′ model were estimated from the independent double-pass experiment. The lapse rate was taken as the proportion of trials answered incorrectly when the stimulus was of maximum strength and external noise was absent. This approach to estimating the lapse rate presents a way to reduce the number of free parameters in the model. Overall, the same conclusions apply to this second dataset, in that the EN model fitted the data better than the SR model, but the SR′ model fitted the data on average equally well on a subject-by-subject basis.

In this paper, I extended a signal detection model of stochastic resonance with a lapse rate, such that it produces SR in terms of *d*′, in addition to accuracy. The model produces experimentally observed threshold-versus-noise (TvN) functions, when plotting threshold signal strength versus noise. I compared the current model of SR to the equivalent noise paradigm, and showed that the lapse rate may be used to explain the suboptimal "efficiency" that is often found in experimental paradigms, as it scales the TvN curve up. I also argued that fitting data with the equivalent noise approach when SR is present can lead to overestimates of the level of internal noise. Indeed, the model fits showed a two- to roughly ten-fold reduction of internal noise estimates for the SR model compared to the equivalent noise paradigm.

At *d*′ = 1, the maximum lapse rate is about 0.16 before SR is lost. This is relatively high, and unlikely to be reached in most psychophysical experiments, suggesting that SR could be observed in many experiments. However, lapse rates increase when people perform dual tasks (Buckley, Helton, Innes, Dalrymple-Alford, & Jones, 2016) and when they are tired (Anderson et al., 2012), and thus even with a relatively lenient performance level of *d*′ = 1, SR may not be observed. When more stringent performance levels are employed, such as *d*′ = 2 or *d*′ = 3, the maximum lapse rate rapidly declines to ∼0.023 and ∼0.0013, respectively, which are at or below typically observed lapse rates (Wichmann & Hill, 2001), resulting in weak or absent SR.

A second factor that determines whether SR occurs is internal noise (*σ*_{int}). When internal noise is small, it allows for SR, but when internal noise is large, SR will disappear. This effect potentially explains why SR is not observed more often than it is. Low amounts of *σ*_{int} cause SR in their own right, just as external noise does. In Figure 2, this can be observed as a decrease in thresholds at low external noise (on the left of the plot), compared to the reference curve. In psychophysical settings, however, this would not be interpreted as *σ*_{int}-induced SR (but see Aihara, Kitajo, Nozaki, & Yamamoto, 2008), because in most experiments internal noise is a fixed value that is not manipulated (and thus a reference curve is lacking).

*Vision Research*, 48 (14), 1569–1573.

*Sleep*, 35 (8), 1137–1146.

*Conscious Cognition*, 45, 174–183.

*Nature*, 383 (6603), 770.

*Vision Research*, 45 (24), 3027–3049.

*Neuron*, 75 (6), 981–991.

*Physical Review E: Statistical, Nonlinear, and Soft Matter Physics*, 65 (3 Pt 1): 031904.

*Journal of Vision*, 8 (15): 17, 1–21, https://doi.org/10.1167/8.15.17. [PubMed] [Article]

*Psychological Review*, 80 (3), 203–216.

*Signal detection theory and psychophysics* (rev. ed.). Huntington, NY: R. F. Krieger.

*Vision Research*, 46 (15), 2315–2327.

*Vision Research*, 38 (9), 1183–1198.

*Psychological Review*, 115 (1), 44–82.

*Detection theory: A user's guide*. Psychology Press.

*Vision Research*, 48 (16), 1719–1725.

*PLoS Computational Biology*, 5 (5): e1000348.

*Nature Reviews Neuroscience*, 12 (7), 415–425.

*Frontiers in Psychology*, 2: 51.

*Clinical Neurophysiology*, 115 (2), 267–281.

*RCA Review*, 6 (3), 332–343.

*Effects of visual noise*(Unpublished doctoral thesis). Cambridge University.

*Behavioral and Brain Sciences*, 41, 1–66.

*Nature Neuroscience*, 14 (12), 1513–1515.

*ACM SIGMOBILE Mobile Computing and Communications Review*, 5 (1), 3–55.

*Vision Research*, 49 (22), 2705–2739.

*Physical Review Letters*, 78 (6), 1186–1189.

*Attention, Perception, & Psychophysics*, 71 (3), 435–443.

*Theory and methods of scaling*. Oxford, UK: Wiley.

*Vision Research*, 141, 30–39.

*Automata Studies*, 34, 43–98.

*AIP Conference Proceedings*, 800, 245–252.

*Biological Cybernetics*, 87 (2), 91–101.

*Perception & Psychophysics*, 63 (8), 1293–1313.