Open Access
Article  |   November 2019
Modeling stochastic resonance in humans: The influence of lapse rate
Author Affiliations
  • Jeroen J. A. van Boxtel
    School of Psychology, Faculty of Health, University of Canberra, Bruce, Australia
    School of Psychological Sciences and Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton, Australia
    j.j.a.vanboxtel@gmail.com
Journal of Vision November 2019, Vol.19, 19. doi:https://doi.org/10.1167/19.13.19
Abstract

Adding noise to sensory signals generally decreases human performance. However, noise can improve performance too, through a process called stochastic resonance (SR). This paradoxical effect may be exploited in psychophysical experiments to provide insights into how the sensory system processes noise. Here, I develop an extension on signal detection theory to model stochastic resonance. I show that the inclusion of lapse rate allows for the occurrence of stochastic resonance in terms of the performance metric d′, when the criterion is set suboptimally. High levels of lapse rate, however, cause stochastic resonance to disappear. It is also shown that noise generated in the brain (i.e., internal noise) may obscure any effect of stochastic resonance in experimental settings. I further relate the model to a standard equivalent noise model, the linear amplifier model, and show that lapse rate scales the threshold versus noise (TvN) curve, similar to the efficiency parameter in equivalent noise (EN) models. Therefore, lapse rate provides a psychophysical explanation for reduced efficiency in EN paradigms. Furthermore, I note that ignoring lapse rate may lead to an overestimation of internal noise in EN paradigms. Overall, describing stochastic resonance in terms of signal detection theory, with the inclusion of lapse rate, may provide valuable new insights into how human performance depends on internal and external noise. It may have applications in improving human performance in situations where the criterion is set suboptimally, and it may provide additional insight into internal noise hypotheses related to autism spectrum disorder.

Introduction
The brain is an inherently noisy system; much of the brain's activity is not driven by external stimulation or by purposeful internal processes, but by seemingly random activity: noise. Noise poses a fundamental problem for information processing (Von Neumann, 1956; Shannon, 2001) as it increases variability and limits the clarity of a signal. Yet, given the abundance of noise in neural processing, the brain still achieves remarkably stable perception, presumably because the brain adapted to its own noisiness and that of its inputs. Therefore, studying how the brain responds to noise may help reveal the internal workings of the brain. 
Noise is often considered to limit optimal performance (Von Neumann, 1956; Shannon, 2001). Indeed, the very definition of the performance metric d′ is signal-to-noise ratio, where increases in noise decrease d′ (Green & Swets, 1974). While noise is expected to degrade performance in general, noise can actually improve performance. For example, noise can push a subthreshold signal—that would normally lead to chance performance in behavioral tasks—above threshold, and thereby lead to above-chance level performance, an effect called stochastic resonance (SR; McDonnell & Abbott, 2009). Note that SR here refers to any occasion where noise increases performance, and not just cases with periodic input (McDonnell & Abbott, 2009). SR causes optimal performance (i.e., highest detectability) to be reached at non-zero levels of noise. This has potentially important implications, because it suggests that inducing SR by adding noise can be used to boost performance of humans and machines. Because of this potential beneficial effect of noise, it is useful to study SR in more detail, specifically in relation to human performance. 
In humans, the influence of noise on performance or perception is often investigated using paradigms in which external noise is added to the signal, and performance thresholds are measured (Lu & Dosher, 2008). A common method is the equivalent noise (EN) paradigm, which is used to estimate internal (brain-generated) noise (Lu & Dosher, 2008) and has its origins in engineering, where it was used to measure the noise dependence of electronic amplifiers (North, 1942). In the EN paradigm, the overall noise σ is assumed to be a combination of internal noise σint and externally added noise σext, whose variances add: \({\sigma ^2} = \sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2\). In experimental settings, various amounts of external noise are added to a signal and the effects on detection thresholds are mapped out. When \({\sigma _{{\rm{ext}}}} \ll {\sigma _{{\rm{int}}}}\), only internal noise limits performance. At high levels of external noise (\({\sigma _{{\rm{ext}}}} \gg {\sigma _{{\rm{int}}}}\)), performance is determined by external noise. At intermediate amounts of noise, the threshold-versus-noise (TvN) curve shows an elbow separating the two regimes, located at the level where internal and external noise are equivalent; the location of the elbow thus represents the level of internal noise. 
Although extensively used, the EN paradigm often considers only two parameters of interest (i.e., internal noise and efficiency). Although several extensions of the basic EN paradigm exist (Lu & Dosher, 2008), they almost invariably assume that the brain is optimal at setting decision thresholds (but see, e.g., Ward & Kitajo, 2005). However, human observers do not always set decision thresholds optimally (Green & Swets, 1974; Rahnev & Denison, 2018). This suboptimal behavior allows for SR (Gong, Matthews, & Qian, 2002), which indeed has been reported in human observers (Collins, Imhoff, & Grigg, 1996; Goris, Wagemans, & Wichmann, 2008; McDonnell & Ward, 2011; Moss, Ward, & Sannita, 2004; Simonotto et al., 1997; Ward, Neiman, & Moss, 2002). The fact that SR occurs indicates that the effects of noise are more complicated than often modeled in EN approaches. (When accurate detection is weighted more heavily than gaining maximal reward, it is possible to achieve optimal detection even when the criterion is nonoptimal and, consequently, reward is suboptimal; I do not consider these cases in this report. Here, I assume that observers strive for maximum reward and that the pay-off matrix is symmetric; that is, observers receive equal rewards and costs for hits and misses.) 
Stochastic resonance has been extensively investigated from an engineering point of view, but less so in humans. A good description of SR in humans is more complicated because the human sensory system presents some challenges for researching stochastic resonance. First, the noise may originate at any of many processing levels in the brain, from early sensory to later decisional stages, each potentially having different effects on performance. Second, humans are not machines, and they suffer from attentional lapses (induced by, for example, decreased arousal or vigilance). The influence of attentional lapses has not been studied in the stochastic resonance literature and is, in fact, often ignored. Lapse rate is not part of standard signal detection theory, and though its existence is acknowledged, it has been considered of minimal influence in most instances (Macmillan & Creelman, 2004). However, to determine the level of external noise that achieves optimal performance (i.e., maximal SR), these two specifically human challenges need to be better understood. 
Here, I model the influence of noise in sensory signals and decision criteria. To provide further insight into the usefulness of SR as a measure of the influence of noise on human perception, I will compare this model to the EN paradigm (Lu & Dosher, 2008; Pelli, 1981), and fit it to previously published experimental data. 
A signal detection model of stochastic resonance
Previous work has shown that signal detection theory can provide a good framework for modeling stochastic resonance in human behavior. Gong et al. (2002) used the following equations:  
\begin{equation}\tag{1}H = \int_c^\infty {\cal N} (s,{\sigma ^2})\end{equation}
 
\begin{equation}\tag{2}F = \int_c^\infty {\cal N} (n,{\sigma ^2})\end{equation}
 
\begin{equation}\tag{3}p = {1 \over 2}(H + (1 - F))\end{equation}
where H is the hit rate, F is the false alarm rate, c is a criterion (i.e., threshold), s is the signal strength, n is the mean noise strength, and \({\cal N}(.)\) is the normal distribution with the given mean (s or n) and variance (σ²). The variable σ represents the noise in the system. This model produced SR for accuracy (p) when c > s.  
This model also predicted that, in the limit of zero noise, d′, another measure of (human) performance, increases to infinity. Although this may make sense when d′ is interpreted as a signal-to-noise ratio, it does not when d′ is considered a metric of human performance. As a measure of human performance, d′ should drop to chance level when noise is decreased, just as accuracy does, because s < c; that is, the signal is below threshold. Therefore, I extended the model presented by Gong et al. (2002) with another human characteristic, namely the lapse rate λ.  
\begin{equation}\tag{4}H = \lambda + (1 - 2\lambda )\int_c^\infty {\cal N} (s,{\sigma ^2})\end{equation}
 
\begin{equation}\tag{5}F = \lambda + (1 - 2\lambda )\int_c^\infty {\cal N} (n,{\sigma ^2})\end{equation}
 
\begin{equation}\tag{6}d^{\prime} = {\Phi ^{ - 1}}(H) - {\Phi ^{ - 1}}(F),\end{equation}
where d′ is an unbiased measure of performance, λ is the lapse rate (0 ≤ λ ≤ 1/2), and Φ−1 is the inverse of the cumulative normal distribution function. The variable σ again represents the noise in the system. For our purposes, it is important to realize that this noise can have an external (σext) or internal (brain-generated; σint) origin: σext is under the control of the experimenter, whereas σint is an internal property of the system under study, that is, the human participant.  
The lapse rate λ represents the proportion of trials on which a participant guesses because they did not attend to the stimulus. In practice, it also absorbs other factors, such as accidental response errors (accidentally pressing a response key that does not match the percept). When 0 < s < c (i.e., subthreshold s) and σ is small, H and F approach λ, and d′ approaches zero, as required for a measure of performance. Importantly, without the inclusion of λ, both H and F approach zero as σ decreases, but F does so much faster than H, because n < s, leading d′ to increase drastically (see Equation 6). 
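To make Equations 4 through 6 concrete, the following minimal Python sketch evaluates H, F, and d′ for a subthreshold signal (it assumes NumPy-style SciPy functions are available; λ = 0.01 and c = 1 match Figure 1, whereas s = 0.8 is an arbitrary illustrative subthreshold value):

```python
from scipy.stats import norm

def d_prime_with_lapse(s, n, c, sigma, lapse):
    """Equations 4-6: hit rate, false-alarm rate, and d' with a lapse rate."""
    hit = lapse + (1 - 2 * lapse) * norm.sf(c, loc=s, scale=sigma)  # Equation 4
    fa = lapse + (1 - 2 * lapse) * norm.sf(c, loc=n, scale=sigma)   # Equation 5
    return norm.ppf(hit) - norm.ppf(fa)                             # Equation 6

# Subthreshold signal (s < c): d' is non-monotonic in sigma and peaks at an
# intermediate noise level, which is the stochastic-resonance signature.
for sigma in [0.1, 0.3, 0.6, 1.0, 2.0]:
    print(sigma, round(d_prime_with_lapse(s=0.8, n=0.0, c=1.0, sigma=sigma, lapse=0.01), 3))
```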
When λ > 0, the model can produce SR for d′, as shown in Figure 1, which plots the dependence of d′ on σ for various levels of signal strength s (indicated by the boxed numbers), with λ = 0.01 and c = 1. When the signal strength is below the criterion c (i.e., s < 1), which without SR would lead to chance performance, performance rises above chance (i.e., d′ > 0) at intermediate levels of σ. This is a form of stochastic resonance. At small σ, d′ does not increase to infinity (as was found by Gong et al., 2002), but instead  
\begin{equation}\tag{7}\mathop {\lim }\limits_{\sigma \to 0} d^{\prime} = \left\{ {\matrix{ {0,} \hfill&{{\rm{if\ }}s \lt c} \hfill \cr { - {\Phi ^{ - 1}}(\lambda ),} \hfill&{{\rm{if\ }}s = c} \hfill \cr { - 2{\Phi ^{ - 1}}(\lambda )} \hfill&{{\rm{if\ }}s \gt c.} \hfill \cr } } \right.\end{equation}
 
This behavior is consistent with human performance, where subthreshold signals (i.e., s < c) lead to chance performance, and suprathreshold performance is limited by attentional lapse rate. 
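As a quick numerical check on the suprathreshold limit in Equation 7 (a sketch assuming SciPy):

```python
from scipy.stats import norm

lapse = 0.01
# For s > c and sigma -> 0, H -> 1 - lapse and F -> lapse (Equations 4 and 5),
# so d' -> Phi^{-1}(1 - lapse) - Phi^{-1}(lapse) = -2 * Phi^{-1}(lapse), as in Equation 7.
print(norm.ppf(1 - lapse) - norm.ppf(lapse))  # ≈ 4.65
print(-2 * norm.ppf(lapse))                   # ≈ 4.65
```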
Figure 1
Dependence of d′ on noise (σ) and signal strength (s). Parameters are λ = 0.01 and c = 1; the different lines are solutions at different values of s (indicated with the boxed numbers).
Detection thresholds
In psychophysical experiments, it is common to determine the stimulus strength at which performance reaches a predefined level, at various levels of externally applied noise. Common performance levels in two-choice or two-alternative forced-choice tasks are 75% accuracy and a d′ of 1. 
By solving Equation 6 for s, one can calculate the threshold signal strength sth, and thus TvN curves. Using Mathematica® 10, it was found that  
\begin{equation}\tag{8a}s_{{\rm{th}}}^{{\rm{SR}}} = c + \sqrt 2 {\rm{er}}{{\rm{f}}^{ - 1}}(A)\;\sigma \end{equation}
 
\begin{equation}\tag{8b}A{\rm{\ }} = {{{\rm{erf}}\left[ {{{d^{\prime} } \over {\sqrt 2 }} - {\rm{er}}{{\rm{f}}^{ - 1}}\left((1 - 2\lambda )\;{\rm{erf}}({{c - n} \over {\sqrt 2 \sigma }})\right)} \right]} \over {1 - 2\lambda }},\end{equation}
where erf and erf−1 are the error function and inverse error function, respectively. Note that the superscript SR is used to refer to sth as given specifically by Equation 8a; when referring to the threshold signal strength more generally, no superscript is used. In the remainder of this paper, we will assume n = 0.  
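A direct transcription of Equations 8a and 8b (with n = 0) into Python might look as follows; this is a sketch that assumes SciPy's erf/erfinv and uses the reference parameters of Figure 2 (d′ = 1, c = 1, λ = 0.01):

```python
import numpy as np
from scipy.special import erf, erfinv

def sr_threshold(sigma, d_prime=1.0, c=1.0, lapse=0.01, n=0.0):
    """Equations 8a/8b: threshold signal strength s_th^SR under the SR model."""
    A = erf(d_prime / np.sqrt(2)
            - erfinv((1 - 2 * lapse) * erf((c - n) / (np.sqrt(2) * sigma)))) / (1 - 2 * lapse)  # Eq. 8b
    return c + np.sqrt(2) * erfinv(A) * sigma                                                   # Eq. 8a

# The threshold first dips below c (the SR regime) and then rises roughly linearly with sigma.
for sigma in [0.05, 0.2, 0.5, 1.0, 2.0]:
    print(sigma, round(float(sr_threshold(sigma)), 3))
```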
When λ = 0, this equation reduces to a straight line  
\begin{equation}\tag{9}{s_{{\rm{th}}}} = d^{\prime} \sigma ,\end{equation}
independent of the criterion and without the possibility of SR. Rewriting Equation 9 yields \(d^{\prime} = {s_{{\rm{th}}}}/\sigma \), which is the signal-to-noise ratio (cf. Gong et al., 2002).  
When λ > 0, SR is possible. Equation 8a is plotted for various parameter combinations in Figure 2. An (arbitrary) baseline curve (solid blue) describes a characteristic TvN function when SR occurs: detection thresholds are lowest at intermediate noise levels σ. In the following sections, the influence of various factors on the shape of this curve, and on the appearance of stochastic resonance, is investigated. Similar-looking curves are also observed in various masking paradigms (Solomon, 2009). Those curves generally plot increment versus standard values (i.e., ΔI vs. I), rather than threshold versus noise, and it is not currently known whether the two reflect a similar underlying mechanism, although some evidence suggests that they do (Goris et al., 2008). 
Figure 2
Dependence of detection threshold sth on noise σ. All parameters are as for the reference line (and as in Figure 1), except where otherwise specified. Reference: d′ = 1, c = 1, λ = 0.01; higher c: c = 2; higher σint: σint = 0.25; higher λ: λ = 0.07; equivalent noise: σint = 0.25, λ = 0, c = s/2. For the higher σint and equivalent noise curves, the x-axis represents σext.
The influence of λ
First, the influence of the lapse rate λ is investigated. With an increase in λ (green line; λ = 0.07) relative to the reference curve, SR decreases in magnitude, and the point of maximal SR moves to higher noise levels. The maximum level of λ before SR is lost can be calculated by setting Equation 8b to zero and solving for λ in the limit of zero noise, which gives the maximum value of λ (i.e., λmax):  
\begin{equation}\tag{10}{\lambda _{{\rm{max}}}} = 1 - \Phi (d^{\prime} ).\end{equation}
 
Figure 3 shows the dependence of λmax on d′. At d′ = 1, a typical value of d′ in psychophysical experiments, \({\lambda _{{\rm{max}}}} \approx 0.1587\). This level of λmax is high compared to typical lapse rates of around 0.06 or less (Wichmann & Hill, 2001), and suggests that λ is not a limiting factor for the occurrence of SR in many experiments. 
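A one-line numerical check on Equation 10 (a sketch assuming SciPy) reproduces the values referred to in the Discussion:

```python
from scipy.stats import norm

# Equation 10: the maximum lapse rate that still allows SR, for several target d' values.
for d in [1.0, 2.0, 3.0]:
    print(d, round(1 - norm.cdf(d), 4))  # ≈ 0.1587, 0.0228, 0.0013
```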
Figure 3
The maximum lapse rate, dependent on d′. The shaded area allows for SR.
A final observation is that the effects of λ in Figure 2 remain present at large values of σ, whereas the other factors, discussed later, do not have a major effect at large σ. 
The influence of criterion, and decision noise
As mentioned earlier, I will assume that observers strive to maximize expected reward/value, that the payoff matrix is symmetric, and that there are equal numbers of target-present and target-absent trials. In this case, the decision criterion is set optimally when it is set between s and n, and SR cannot take place. However, c is often set suboptimally (Green & Swets, 1974). The level at which the criterion c is set has a strong influence on the vertical position of the TvN curve. With a higher c, the curve moves upward, and maximal SR moves to higher noise levels (Figure 2, red line). In a biological system, the setting of this decision boundary c is complex and still not completely understood. The parameter c would sensibly be set at a level that is high enough to prevent many false positives, but low enough to prevent too many misses. Thus, where c is placed determines how liberal or conservative the decision stage is. How could suboptimal settings of c arise? One possibility is that when the signal is weak, inherent noise in the decision stage could have a large influence on the setting of c, something that is generally ignored in signal detection theory (see, e.g., Gravetter & Lockhead, 1973, and Torgerson, 1958, for models discussing the influence of criterion noise in classification tasks). For example, a "present" response could require that the decision signal exceed the Pth percentile of the response distribution at the decision stage in the absence of input (i.e., purely noise-driven activity in the decision stage, here assumed to be normally distributed). Assuming an unbiased (mean = 0) but noisy response originating from the decision stage, the criterion should be set at  
\begin{equation}\tag{11}c = {\sigma _{\rm{d}}}{\Phi ^{ - 1}}(P).\end{equation}
 
In this case, c can be interpreted as reflecting noise in a post-sensory decision stage, which contrasts with the setting of c in signal detection theory (Green & Swets, 1974; Macmillan & Creelman, 2004). If σd increases, so does c, which results in an upward shift in the detection threshold sth, as can be observed in Figure 2. Conversely, because SR can only occur when c > s, systems with low decision noise (or alternatively, very liberal systems with a low P) are unlikely to show SR. The condition for SR is that  
\begin{equation}\tag{12}{\sigma _{\rm{d}}} \gt {s \over {{\Phi ^{ - 1}}(P)}}.\end{equation}
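The following sketch illustrates Equations 11 and 12 (assuming SciPy; the values σd = 0.6, P = 0.95, and s = 0.8 are hypothetical choices, not taken from the text):

```python
from scipy.stats import norm

def criterion_from_decision_noise(sigma_d, P):
    """Equation 11: criterion at the Pth percentile of decision-stage noise (mean 0)."""
    return sigma_d * norm.ppf(P)

def sr_possible(s, sigma_d, P):
    """Equation 12: SR requires c > s, i.e., sigma_d > s / Phi^{-1}(P)."""
    return sigma_d > s / norm.ppf(P)

print(criterion_from_decision_noise(0.6, 0.95))  # c ≈ 0.99
print(sr_possible(0.8, 0.6, 0.95))               # True: the criterion exceeds the signal
```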
 
The influence of internal noise, and a comparison to the equivalent noise paradigm
In our model, the noise σ is the total amount of noise in the sensory-perceptual system. One can subdivide σ into two independent components: (a) noise that is external to the system, σext, which, for example, is present in the stimulus; and (b) noise that is internally generated, σint. Their variances add. In the equivalent noise paradigm, such as the linear amplifier model (Lu & Dosher, 2008), this is often written as:  
\begin{equation}\tag{13}\sigma = \sqrt {\sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2} .\end{equation}
 
When inserting this into Equation 8a, one can show the effect of adding a constant amount of σint on detection thresholds (Figure 2, purple line). Increases in σint result in a leftward shift of the TvN curve relative to the reference curve. The increase in σint simultaneously results in lower thresholds (better performance) at low σext, while it increases thresholds at high σext. The former result is due to σint itself causing SR relative to the condition where σint = 0. In effect, it is moving the SR dip to the left. With even higher levels of internal noise, the curve is shifted so far to the left that the upward arm of the curve at low noise levels disappears. Even though σint still increases performance relative to the reference curve in this case, and thus shows SR, it will not be recognized as such because it does not present itself as a dip in the TvN curve. I come back to this in the discussion. Increasing σint even further will remove SR completely. 
A comparison to the equivalent noise paradigm
Instead of assuming a fixed criterion c, as in the derivations above, the equivalent noise paradigm assumes that the decision criterion is set optimally; that is, c = s/2. Inserting this into Equation 6 and deriving the detection threshold as a function of performance level (d′) and lapse rate (λ), we obtain:  
\begin{equation}\tag{14}s_{{\rm{th}}}^{{\rm{EN}}} = 2\sqrt 2 \;{\rm{er}}{{\rm{f}}^{ - 1}}\left( {{{{\rm{erf}}({{d^{\prime} } \over {2\sqrt 2 }})} \over {1 - 2\lambda }}} \right)\;\sqrt {\sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2} .\end{equation}
 
Setting λ = 0 (one of the assumptions often made) results in the standard dependence of the detection threshold on internal and external noise  
\begin{equation}\tag{15}s_{{\rm{th}}}^{{\rm{EN}}} = d^{\prime} \sqrt {\sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2} \end{equation}
(when d′ is set to 1; in fact, setting λ = 0 in Equation 8a results in this same solution, independent of the setting of c). This dependence is plotted in Figure 2 (dashed curve). Because c < s, no SR is possible in this description of human performance.  
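Equation 14 (and its λ = 0 special case, Equation 15) can be sketched as follows (assuming SciPy; σint = 0.25 matches the equivalent noise curve in Figure 2, and σext = 0.5 is an arbitrary test value):

```python
import numpy as np
from scipy.special import erf, erfinv

def en_threshold(sigma_ext, sigma_int=0.25, d_prime=1.0, lapse=0.0):
    """Equation 14; reduces to Equation 15 (s_th = d' * total noise) when lapse = 0."""
    total = np.sqrt(sigma_int**2 + sigma_ext**2)
    return 2 * np.sqrt(2) * erfinv(erf(d_prime / (2 * np.sqrt(2))) / (1 - 2 * lapse)) * total

print(round(float(en_threshold(0.5)), 3))             # 0.559 = 1.0 * sqrt(0.25**2 + 0.5**2)
print(round(float(en_threshold(0.5, lapse=0.04)), 3)) # slightly higher: lapses scale the curve up
```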
The relationship between λ and efficiency η
Because measured thresholds in psychophysical experiments are often higher than the optimal thresholds, the EN paradigm often includes an “efficiency” parameter η when fitting the curve to experimental data:  
\begin{equation}\tag{16}{s_{{\rm{th}}}} = {{d^{\prime} } \over \eta }\sqrt {\sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2} ,\end{equation}
where 0 < η < 1, which scales the curve up (i.e., higher detection thresholds).  
When looking at Equation 14, one can see that λ provides a scaling factor in our model, similar to η in equivalent noise paradigms. This relationship makes intuitive sense, as an increased number of lapses decreases efficiency:  
\begin{equation}\tag{17}s_{{\rm{th}}}^{{\rm{EN}}} = {\rm{AE}}\;\sqrt {\sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2},\end{equation}
where AE is a constant related to the calculation efficiency (cf. Pelli, 1981), here abbreviated as AE (attentional efficiency) to capture the fact that it is determined by the attentional lapse rate λ:  
\begin{equation}\tag{18}{\rm{AE}} = 2\sqrt 2 \;{\rm{er}}{{\rm{f}}^{ - 1}}\left( {{{{\rm{erf}}({{d^{\prime} } \over {2\sqrt 2 }})} \over {1 - 2\lambda }}} \right).\end{equation}
 
When equating AE to d′/η (see Equation 16) to derive the relationship between λ and η, one finds that  
\begin{equation}\tag{19}\eta = {{d^{\prime} } \over {2\sqrt 2 \;{\rm{er}}{{\rm{f}}^{ - 1}}\left( {{{{\rm{erf}}({{d^{\prime} } \over {2\sqrt 2 }})} \over {1 - 2\lambda }}} \right)}},\end{equation}
and  
\begin{equation}\tag{20}\lambda = - {{{\rm{erf}}({{d^{\prime} } \over {2\sqrt 2 }}) - {\rm{erf}}({{d^{\prime} } \over {2\sqrt 2 \eta }})} \over {2\;{\rm{erf}}({{d^{\prime} } \over {2\sqrt 2 \eta }})}} = {{\Phi ({{d^{\prime} } \over 2}) - \Phi ({{d^{\prime} } \over {2\eta }})} \over {1 - 2\Phi ({{d^{\prime} } \over {2\eta }})}}.\end{equation}
 
This relationship is plotted in Figure 4 for different values of d′. These results show that lower efficiency can be captured by increased lapse rates. 
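Equations 19 and 20 are inverses of each other, as the following round-trip sketch shows (assuming SciPy; λ = 0.04 and d′ = 1 are illustrative values):

```python
import numpy as np
from scipy.special import erf, erfinv

def eta_from_lapse(lapse, d_prime=1.0):
    """Equation 19: efficiency implied by a given lapse rate."""
    return d_prime / (2 * np.sqrt(2) * erfinv(erf(d_prime / (2 * np.sqrt(2))) / (1 - 2 * lapse)))

def lapse_from_eta(eta, d_prime=1.0):
    """Equation 20: lapse rate implied by a given efficiency."""
    a = erf(d_prime / (2 * np.sqrt(2)))
    b = erf(d_prime / (2 * np.sqrt(2) * eta))
    return -(a - b) / (2 * b)

eta = eta_from_lapse(0.04)
print(round(float(eta), 3), round(float(lapse_from_eta(eta)), 3))  # ≈ 0.912 and 0.04 (round trip)
```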
Figure 4
The relationship between efficiency parameter η in equivalent noise paradigms, and lapse rate λ.
Overall, the comparison between our model and the equivalent noise paradigm (specifically, the linear amplifier model) suggests that η can be expressed in terms of λ, and thus that lapse rate λ may explain at least part of the suboptimal efficiency (i.e., η < 1) that is often reported in psychophysical experiments. 
The pooling parameter
In some psychophysical experiments, detection thresholds are actually lower than predicted, which would imply an efficiency greater than 1. Generally, however, such results are interpreted differently, with the total noise calculated as:  
\begin{equation}\tag{21}\sigma = \sqrt {{{\sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2} \over n}} ,\end{equation}
where n > 1 represents a pooling parameter, which replaces η (note that this n is distinct from the mean noise strength in Equations 1 through 5). Equation 21, in essence, describes an important property of the central limit theorem: n quantifies the number of samples that are combined to estimate a mean; for example, the number of individual moving dots that are combined to estimate a global pattern motion (e.g., Dakin, Mareschal, & Bex, 2005).  
Detection thresholds can be obtained from Equation 8a or Equation 9 by replacing σ with \(\sqrt {(\sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2)/n} \), while setting c = s/2 and λ = 0:  
\begin{equation}\tag{22}{s_{{\rm{th}}}} = d^{\prime} \sqrt {{{\sigma _{{\rm{int}}}^2 + \sigma _{{\rm{ext}}}^2} \over n}} .\end{equation}
 
Equation 22 and Equation 16 are very closely related, as they describe the same relationship when n is equal to η². 
I set λ = 0 here because otherwise both η (or λ) and n would need to be determined, and as both have the same scaling effect, the system would be underdetermined. However, in experimental settings, unless participants are experienced and extremely motivated, it is unlikely that λ = 0, and thus a correct value of the pooling parameter n can only be obtained if λ is also determined independently. One way of doing this is to run trials without any external noise and with a high stimulus strength (i.e., a very easy detection task), as sketched below. The proportion of incorrect trials in this condition reflects the lapse rate, because internal noise would not be large enough to influence performance on these trials. 
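A minimal sketch of this independent lapse-rate estimate (the trial counts below are hypothetical):

```python
import numpy as np

def estimate_lapse(correct_on_easy_trials):
    """Estimate the lapse rate from catch trials with a strong stimulus and no external noise:
    errors on such trials are attributed to lapses rather than to internal noise."""
    correct = np.asarray(correct_on_easy_trials, dtype=float)
    return 1.0 - correct.mean()

# e.g., 3 errors in 100 easy trials -> lambda ≈ 0.03
easy_trials = np.r_[np.ones(97), np.zeros(3)]
print(estimate_lapse(easy_trials))
```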
Overestimation of internal noise in the EN paradigm
When comparing the EN results in Figure 2 to those of the high-σint condition, one can observe that the suboptimal positioning of the criterion c in the latter case causes the expected increase in detection threshold at low external noise values (left side of the plot). An important consequence is that the TvN curves are quite different for these two models even though the internal noise is the same. Consequently, fitting the equivalent noise function to experimental data that contain SR will give incorrect (over)estimates of internal noise. 
One can approximate the overestimation of internal noise by equating the EN model (Equation 14) and the SR model (Equation 8a) at σext = 0. Then, taking the noise in the SR equation as the actual internal noise (σint) and that in the EN equation as the estimated internal noise (\({\hat \sigma _{{\rm{int}}}}\)), we obtain:  
\begin{equation}\tag{23}{\hat \sigma _{{\rm{int}}}} = s_{{\rm{th}}}^{{\rm{SR}}}/{\rm{AE}},\end{equation}
which is plotted in Figure 5. The estimated internal noise is close to the actual internal noise when the actual internal noise is large. When the actual internal noise is low, however, it is overestimated; when λ is relatively low, the estimate is not even monotonically related to the actual internal noise. Therefore, the EN paradigm overestimates low internal noise. The precise amount of overestimation depends on various parameters (e.g., λ, c, d′), on the distribution of external noise values tested in experimental settings, and on how much the dip caused by SR influences the EN model fit.  
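Equation 23 can be evaluated directly; the sketch below (assuming SciPy, and using the reference parameters d′ = 1, c = 1, λ = 0.01) reproduces the pattern in Figure 5: a large overestimate at small σint and a near-veridical estimate at large σint.

```python
import numpy as np
from scipy.special import erf, erfinv

def estimated_internal_noise(sigma_int, d_prime=1.0, c=1.0, lapse=0.01):
    """Equation 23: internal noise that an EN fit would report when the data follow the SR model."""
    # SR threshold at zero external noise (Equations 8a/8b with sigma = sigma_int, n = 0)
    A = erf(d_prime / np.sqrt(2)
            - erfinv((1 - 2 * lapse) * erf(c / (np.sqrt(2) * sigma_int)))) / (1 - 2 * lapse)
    s_th_sr = c + np.sqrt(2) * erfinv(A) * sigma_int
    # attentional efficiency AE (Equation 18)
    ae = 2 * np.sqrt(2) * erfinv(erf(d_prime / (2 * np.sqrt(2))) / (1 - 2 * lapse))
    return s_th_sr / ae

for sigma_int in [0.1, 0.25, 0.5, 1.0]:
    print(sigma_int, round(float(estimated_internal_noise(sigma_int)), 3))
```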
Figure 5
(a) The dependence of \({\hat \sigma _{{\rm{int}}}}\) on σint, when SR data are fit with the EN equation. Internal noise is overestimated, especially at small levels of σint.
Stochastic resonance in terms of accuracy
I have focused so far on the dependence of SR on d′, because accuracy data have previously been reported for a similar model without λ (Gong et al., 2002). A direct comparison between the current model and that of Gong et al. (2002), in terms of both d′ and accuracy, is made in Figure 6. As one can see, the largest difference is in terms of d′, not accuracy. In fact, the level of above-chance accuracy in the current model is 1 − 2λ times that of the model with λ = 0 (Gong et al., 2002). This fixed relationship means that the analyses by Gong et al. (2002), which rested on finding the point where the derivative of accuracy with respect to σ is zero, remain valid in the current model. 
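The (1 − 2λ) scaling of above-chance accuracy follows directly from Equations 3 through 5, as the following sketch verifies numerically (assuming SciPy; c = 1 and λ = 0.04 match Figure 6, while s = 0.8 is an arbitrary subthreshold value):

```python
from scipy.stats import norm

def accuracy(sigma, s=0.8, n=0.0, c=1.0, lapse=0.0):
    """Proportion correct (Equation 3) using the lapse-adjusted rates of Equations 4 and 5."""
    hit = lapse + (1 - 2 * lapse) * norm.sf(c, loc=s, scale=sigma)
    fa = lapse + (1 - 2 * lapse) * norm.sf(c, loc=n, scale=sigma)
    return 0.5 * (hit + (1 - fa))

for sigma in [0.2, 0.5, 1.0]:
    above_chance_with_lapse = accuracy(sigma, lapse=0.04) - 0.5
    scaled_no_lapse = (1 - 2 * 0.04) * (accuracy(sigma, lapse=0.0) - 0.5)
    print(round(above_chance_with_lapse, 4), round(scaled_no_lapse, 4))  # the two columns are identical
```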
Figure 6
(a) The dependence of d′ on noise σ. The warm colors are for the current model; the cold colors are for the model of Gong et al. (2002). The models are very different at low σ, but converge at high σ. (b) The dependence of accuracy [P(correct)] on σ. The current model is a scaled version of the model by Gong et al. (2002). Parameters were c = 1, λ = 0.04.
Fitting the model to experimental data
To show the added value of the current approach, I fitted the SR model and the equivalent noise model to two previously published datasets. The SR model was constructed by inserting Equation 13 into Equation 8a, thereby allowing both internal and external noise to be modeled. The EN model was taken as Equation 22. I fitted the models to the average data from Figure 4 in Lu, Chu, and Dosher (2006), which were extracted from the printed paper using the online tool WebPlotDigitizer, v. 4.2. I picked the data for the 70.7% correct conditions, because d′ in this case is approximately 1. The fits for the SR model and the equivalent noise model are presented in blue and red, respectively, in Figure 7. These fits clearly show that the SR model captures the SR behavior at low noise levels better than the EN model. Comparing the fits, however, shows that at high noise levels the EN model fits better. This is because the EN model allows for pooling, which scales down the position of the curve and allows the model to fit the rising part of the data better. The SR model can be extended in a similar way (Figure 7, beige curve), which then fits the rising arm much better. I did not perform extensive model comparisons, but AIC measures indicated that, overall, the EN model fitted the data better than the SR model, whereas the SR′ model fitted the data equally well on a session-by-session basis. 
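The author's actual fitting procedures are available via the OSF link given at the end of this section; purely as an illustration, the following sketch fits the SR model's TvN curve (Equation 8a with the total noise of Equation 13) to synthetic threshold data using SciPy's curve_fit. All data values and parameter bounds here are hypothetical and are not taken from either dataset.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf, erfinv

def sr_tvn(sigma_ext, sigma_int, c, lapse, d_prime=1.0):
    """SR-model TvN curve: Equation 8a with total noise from Equation 13 (n = 0)."""
    sigma = np.sqrt(sigma_int**2 + sigma_ext**2)
    A = erf(d_prime / np.sqrt(2)
            - erfinv((1 - 2 * lapse) * erf(c / (np.sqrt(2) * sigma)))) / (1 - 2 * lapse)
    return c + np.sqrt(2) * erfinv(A) * sigma

# Synthetic data generated from the model itself, with a little measurement noise added.
rng = np.random.default_rng(0)
noise_levels = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
thresholds = sr_tvn(noise_levels, sigma_int=0.15, c=1.0, lapse=0.03)
thresholds = thresholds * (1 + 0.03 * rng.standard_normal(noise_levels.size))

# Recover sigma_int, c, and lapse from the thresholds.
params, _ = curve_fit(sr_tvn, noise_levels, thresholds,
                      p0=[0.1, 0.8, 0.02],
                      bounds=([1e-3, 0.1, 1e-3], [2.0, 5.0, 0.1]))
print(dict(zip(["sigma_int", "c", "lapse"], np.round(params, 3))))
```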
Figure 7
Model fits to data from (a) Lu et al. (2006) and (b) Vilidaite and Baker (2017). In (a), the different panels show the average data for different sessions (Sessions 1 and 2, Sessions 3 and 4, Sessions 5 and 6, Sessions 7 and 8, Sessions 9 and 10). The data were fit with the SR model, the EN model, and a modified version of the SR model (SR′) that included a pooling parameter and had the lapse rate λ fixed at 0.04. In (b), the different panels show data for S1 to S5; the same three models were fit as in (a), except that for the SR′ model the lapse rate was taken from another experiment (double-pass data), as the proportion of trials answered incorrectly when the stimulus was of maximum strength and external noise was absent.
I also fitted the data presented by Vilidaite and Baker (2017). Because the raw data were available (https://dx.doi.org/10.6084/m9.figshare.3824250), I calculated detection thresholds at a d′ of 1 for all participants and conditions. These were then fitted with the same three models (EN, SR, and SR′), with the only difference being that the lapse rate values for the SR′ model were estimated from the independent double-pass experiment: lapse rate was taken as the proportion of trials answered incorrectly when the stimulus was of maximum strength and external noise was absent. This approach to estimating lapse rate presents a way to reduce the number of free parameters in the model. Overall, the same conclusions apply to this second dataset: the EN model fitted the data better than the SR model, but the SR′ model fitted the data equally well on a subject-by-subject basis. 
Compared to the EN model, the estimated internal noise levels in the SR and SR′ models are about an order of magnitude smaller (10% vs. 1%) in the Lu et al. (2006) dataset, and a factor of 2 smaller in the Vilidaite and Baker (2017) dataset. The noise levels estimated by the EN and SR models do not appear to correlate within each dataset, although this could be because there are only a limited number of observations. The estimated lapse rates for the SR model are reasonable, ranging between 0.15% and 3.2% in the first dataset and between 0.01% and 0.07% in the second. I have not made further attempts to fit these data with more complex models, as the aim here was merely to show that the SR model presents a viable alternative to the EN model and thus is a potentially useful extension of the signal-detection framework. The model can be expanded, and potentially improved, by including, for example, multiplicative noise, induced noise, or other factors (see, e.g., the perceptual-template model; Lu et al., 2006). The fitting procedures used in this report are available on the Open Science Framework website (https://osf.io/q3x2k/). 
Discussion
I have presented a model that explains SR within a signal detection framework by adding the human characteristic of lapse rate. The inclusion of lapse rate allows stochastic resonance to be calculated in terms of the performance metric d′, in addition to accuracy. The model produces experimentally observed threshold-versus-noise functions when threshold signal strength is plotted against noise (TvN curves). I compared the current model of SR to the equivalent noise paradigm and showed that the lapse rate may explain the suboptimal "efficiency" that is often found in experimental paradigms, as it scales the TvN curve up. I also argued that fitting data with the equivalent noise approach when SR is present can lead to overestimates of the level of internal noise. Indeed, the model fits showed a twofold to roughly tenfold reduction of internal noise estimates for the SR model compared to the equivalent noise paradigm. 
In our model, the lapse rate has a strong influence on SR. With a zero lapse rate, the model reduces to standard signal detection theory and the equivalent noise model. With non-zero lapse rates, SR can occur. With a typically employed performance level of d′ = 1, the maximum lapse rate is about 0.16 before SR is lost. This is relatively high, and unlikely to be reached in most psychophysical experiments, suggesting that SR could be observed in many experiments. However, lapse rates increase when people perform dual tasks (Buckley, Helton, Innes, Dalrymple-Alford, & Jones, 2016) and when they are tired (Anderson et al., 2012), and thus even with a relatively lenient performance level of d′ = 1, SR may not be observed. When more stringent performance levels are employed, such as d′ = 2 or d′ = 3, the maximum lapse rate rapidly declines to ∼0.023 and ∼0.0013, respectively, which are at or below typically observed lapse rates (Wichmann & Hill, 2001), resulting in weak or absent SR. 
Overall, our modeling showed that including non-zero lapse rates (and suboptimal placement of the criterion) in an extension of signal detection theory leads to predictions that fit experimental data better when stochastic resonance occurs. It also fits data that show no signs of stochastic resonance well (e.g., Figure 7b, panel 3), by assuming a zero lapse rate, a high internal noise, or a combination of the two. Whether a researcher decides to fit such data with an equivalent noise approach or with the current stochastic resonance model depends on theoretical considerations (whether one accepts the underlying assumptions of signal detection theory, and the additions to it) and on practical considerations (e.g., whether there are enough data to warrant the additional degrees of freedom of the SR model). I would argue that, in general, it is at least important to consider the existence of stochastic resonance, and to consider fitting the data with the current model to investigate whether the conclusions still hold when reasonable assumptions, such as non-zero attentional lapses and suboptimal placement of criteria, are accepted. 
This report also shows that the position of the decision criterion has a large influence on the signal detection thresholds. When the criterion is increased, detection thresholds also increase. I propose that the criterion level may depend on the noise in the decision stage, as well as on how liberal or conservative the decision stage is. SR is only possible in humans that have a noisy decision stage (large decision noise), or those that are quite conservative (i.e., need a strong signal before they say “present”). In this report, I have not considered the various ways in which a decision could be made more conservative/liberal, but these mechanisms include nonsymmetric payoff matrices, and unequal proportions of present and absent trials. 
The current framework also allows one to model the influence of internal noise (σint). When internal noise is small it allows for SR, but when internal noise is large, SR disappears. This effect potentially explains why SR is not observed more often. Low amounts of σint cause SR in their own right, just as external noise does. In Figure 2, this can be observed as a decrease in thresholds at low external noise (on the left of the plot) compared to the reference curve. In psychophysical settings, however, this would not be interpreted as σint-induced SR (but see Aihara, Kitajo, Nozaki, & Yamamoto, 2008), because in most experiments internal noise is a fixed value that is not manipulated (and thus a reference curve is lacking). 
One could potentially design experiments to manipulate internal noise. For example, it has been suggested that inattention increases internal noise (Rahnev et al., 2011; Lu & Dosher, 1998), which could counterintuitively cause a decrease in detection threshold through SR in inattention conditions. Alternatively, one could use an individual-differences approach to investigate whether internal noise causes SR. With this approach, one would predict that individuals with moderate levels of internal noise have lower (detection) thresholds than individuals with either low or high levels of internal noise (Aihara et al., 2008). In this context, it is interesting that SR has been proposed as a potential explanation for the increased perceptual functioning of people with autism spectrum disorder (ASD) on some perceptual tasks (Simmons et al., 2009). Specifically, it is argued that people with ASD have increased levels of internal noise (Dinstein et al., 2012; Milne, 2011; Simmons et al., 2009), and thus could outperform typically developing (TD) individuals, who have low internal noise, on some tasks due to SR. Incidentally, the TvN curves in Figure 2 can further explain the general finding that interindividual variation in task performance is larger in the ASD group than in the TD group: with low noise, the TD group would fall on a relatively flat part of the curve, whereas with intermediate noise, the ASD group would fall in an area of the curve where small changes in internal or external noise can lead to large changes in threshold measurements. 
Overall, these speculations suggest that SR may have a more important role in human performance than often realized. Even in the literature, unidentified SR signatures are present in some data (e.g., Dakin et al., 2005; Lu et al., 2006; Mareschal, Bex, & Dakin, 2008), which were missed because the data were fitted with an equivalent noise approach (which does not allow for SR). Combined with the data that already showed SR (Collins et al., 1996; Goris et al., 2008; Moss et al., 2004; Simonotto et al., 1997; Ward et al., 2002), these data suggest that beneficial effects of noise (i.e., SR) may be more common than acknowledged, even in human performance. Our model may help determine which factors play important roles in determining when (and in whom) SR occurs. 
Acknowledgments
The author thanks Dr. Dror Cohen for suggestions on a previous version of the manuscript. 
Commercial relationships: none. 
Corresponding author: Jeroen J. A. van Boxtel. 
Address: School of Psychology, Faculty of Health, University of Canberra, Bruce, Australia. 
References
Aihara, T., Kitajo, K., Nozaki, D., & Yamamoto, Y. (2008). Internal noise determines external stochastic resonance in visual perception. Vision Research, 48 (14), 1569–1573.
Anderson, C., Sullivan, J. P., Flynn-Evans, E. E., Cade, B. E., Czeisler, C. A., & Lockley, S. W. (2012). Deterioration of neurobehavioral performance in resident physicians during repeated exposure to extended duration work shifts. Sleep, 35 (8), 1137–1146.
Buckley, R. J., Helton, W. S., Innes, C. R. H., Dalrymple-Alford, J. C., & Jones, R. D. (2016). Attention lapses and behavioural microsleeps during tracking, psychomotor vigilance, and dual tasks. Conscious Cognition, 45, 174–183.
Collins, J. J., Imhoff, T. T., & Grigg, P. (1996, October 31). Noise-enhanced tactile sensation. Nature, 383 (6603), 770.
Dakin, S. C., Mareschal, I., & Bex, P. J. (2005). Local and global limitations on direction integration assessed using equivalent noise analysis. Vision Research, 45 (24), 3027–3049.
Dinstein, I., Heeger, D. J., Lorenzi, L., Minshew, N. J., Malach, R., & Behrmann, M. (2012). Unreliable evoked responses in autism. Neuron, 75 (6), 981–991.
Gong, Y., Matthews, N., & Qian, N. (2002). Model for stochastic-resonance–type behavior in sensory perception. Physical Review. E, Statistical, Nonlinear, and Soft Matter Physics, 65 (3 Pt 1): 031904.
Goris, R. L. T., Wagemans, J., & Wichmann, F. A. (2008). Modelling contrast discrimination data suggest both the pedestal effect and stochastic resonance to be caused by the same mechanism. Journal of Vision, 8 (15): 17, 1–21, https://doi.org/10.1167/8.15.17. [PubMed] [Article]
Gravetter, F. & Lockhead, G. (1973). Criterial range as a frame of reference for stimulus judgment. Psychological Review, 80 (3), 203–216.
Green, D. & Swets, J. (1974). Signal detection theory and psychophysics (rev. ed.). Huntington, NY: R. F. Krieger.
Lu, Z.-L., Chu, W., & Dosher, B. A. (2006). Perceptual learning of motion direction discrimination in fovea: Separable mechanisms. Vision Research, 46 (15), 2315–2327.
Lu, Z.-L., & Dosher, B. A. (1998). External noise distinguishes attention mechanisms. Vision Research, 38 (9), 1183–1198.
Lu, Z.-L., & Dosher, B. A. (2008). Characterizing observers using external noise and observer models: Assessing internal representations with external noise. Psychological Review, 115 (1), 44–82.
Macmillan, N. A., & Creelman, C. D. (2004). Detection theory: A user's guide. Psychology Press.
Mareschal, I., Bex, P. J., & Dakin, S. C. (2008). Local motion processing limits fine direction discrimination in the periphery. Vision Research, 48 (16), 1719–1725.
McDonnell, M. D., & Abbott, D. (2009). What is stochastic resonance? Definitions, misconceptions, debates, and its relevance to biology. PLoS Computational Biology, 5 (5): e1000348.
McDonnell, M. D., & Ward, L. M. (2011). The benefits of noise in neural systems: Bridging theory and experiment. Nature Reviews Neuroscience, 12 (7), 415–425.
Milne, E. (2011). Increased intra-participant variability in children with autistic spectrum disorders: Evidence from single-trial analysis of evoked EEG. Frontiers in Psychology, 2: 51.
Moss, F., Ward, L. M., & Sannita, W. G. (2004). Stochastic resonance and sensory information processing: A tutorial and review of application. Clinical Neurophysiology, 115 (2), 267–281.
North, D. (1942). The absolute sensitivity of radio receivers. RCA Review, 6 (3), 332–343.
Pelli, D. (1981). Effects of visual noise (Unpublished doctoral thesis). Cambridge University.
Rahnev, D., & Denison, R. N. (2018). Suboptimality in perceptual decision making. Behavioral and Brain Sciences, 41, 1–66.
Rahnev, D., Maniscalco, B., Graves, T., Huang, E., de Lange, F. P., & Lau, H. (2011). Attention induces conservative subjective biases in visual perception. Nature Neuroscience, 14 (12), 1513–1515.
Shannon, C. E. (2001). A mathematical theory of communication. ACM SIGMOBILE Mobile Computing and Communications Review, 5 (1), 3–55.
Simmons, D. R., Robertson, A. E., McKay, L. S., Toal, E., McAleer, P., & Pollick, F. E. (2009). Vision in autism spectrum disorders. Vision Research, 49 (22), 2705–2739.
Simonotto, E., Riani, M., Seife, C., Roberts, M., Twitty, J., & Moss, F. (1997). Visual perception of stochastic resonance. Physical Review Letters, 78 (6), 1186–1189.
Solomon, J. A. (2009). The history of dipper functions. Attention, Perception, & Psychophysics, 71 (3), 435–443.
Torgerson, W. S. (1958). Theory and methods of scaling. Oxford, UK: Wiley.
Vilidaite, G., & Baker, D. H. (2017). Individual differences in internal noise are consistent across two measurement techniques. Vision Research, 141, 30–39.
Von Neumann, J. (1956). Probabilistic logics and the synthesis of reliable organisms from unreliable components. Automata Studies, 34, 43–98.
Ward, L. M., & Kitajo, K. (2005). Attention excludes noise. Does it exclude stochastic resonance? AIP Conference Proceedings, 800, 245–252.
Ward, L. M., Neiman, A., & Moss, F. (2002). Stochastic resonance in psychophysics and in animal behavior. Biological Cybernetics, 87 (2), 91–101.
Wichmann, F. A., & Hill, N. J. (2001). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63 (8), 1293–1313.