**Many studies have investigated how multiple stimuli combine to reach threshold. There are broadly speaking two ways this can occur: additive summation (AS) where inputs from the different stimuli add together in a single mechanism, or probability summation (PS) where different stimuli are detected independently by separate mechanisms. PS is traditionally modeled under high threshold theory (HTT); however, tests have shown that HTT is incorrect and that signal detection theory (SDT) is the better framework for modeling summation. Modeling the equivalent of PS under SDT is, however, relatively complicated, leading many investigators to use Monte Carlo simulations for the predictions. We derive formulas that employ numerical integration to predict the proportion correct for detecting multiple stimuli assuming PS under SDT, for the situations in which stimuli are either equal or unequal in strength. Both formulas are general purpose, calculating performance for forced-choice tasks with M alternatives, n stimuli, in Q monitored mechanisms, each subject to a non-linear transducer with exponent τ. We show how the probability (and additive) summation formulas can be used to simulate psychometric functions, which when fitted with Weibull functions make signature predictions for how thresholds and psychometric function slopes vary as a function of τ, n, and Q. We also show how one can fit the formulas directly to real psychometric functions using data from a binocular summation experiment, and show how one can obtain estimates of τ and test whether binocular summation conforms more to PS or AS. The methods described here can be readily applied using software functions newly added to the Palamedes toolbox.**

*does* affect thresholds, at least for contrast grating detection (Meese & Summers, 2012), unfortunately for HTT.

*Q* is the total number of monitored channels/mechanisms on each trial;

*n* is the number of those mechanisms that are activated by the target stimuli; and

*M* is the number of intervals that are presented on each trial (e.g., an *M* of 2 gives a two-interval forced-choice task like that shown in Figure 2).

*n*, *Q*, and *M*, and are often daunting to the non-mathematician. Moreover, the predictions made by the existing theoretical papers on PS under SDT are not always presented as sufficiently different from those formulated under HTT–PS for authors to reconsider their choice of model. One aim of this paper is to reiterate what some investigators have already pointed out (e.g., Meese & Summers, 2012; Tyler & Chen, 2000), namely that there are significant differences between the two PS models both in terms of threshold predictions and, as importantly, predictions for the slopes of psychometric functions. An additional limitation of previous theoretical expositions is that they do not always incorporate a term for a non-linear transducer function, whereas many studies point to an accelerating transducer at threshold (Heeger, 1991; Legge & Foley, 1980; Meese & Summers, 2009, 2012; Tanner & Swets, 1954). Here we use *τ* as the exponent on stimulus intensity to embody a non-linear transducer, with *τ* > 1 for an accelerating transducer.

*have* used an SDT–PS model tend to use Monte Carlo simulations rather than an analytic solution (e.g., Meese & Summers, 2012). Although the Monte Carlo method is simple to implement and will converge to an accurate prediction given sufficient processing time, it is markedly less efficient than an equation, even if the equation requires numerical integration for its solution. Execution speed is not an issue when a single calculation is required, but for the modeling in this communication, which involves many thousands of calculations in order to fit multiple psychometric functions and determine bootstrap errors of the fitted parameters and model goodness-of-fit, Monte Carlo simulations are prohibitively slow. An equation also offers clearer insight into the workings of a system, as its mathematical properties are stated explicitly rather than emerging from simulated behavior.

*n*, *τ*, *Q*, and *M*. To our knowledge these PS equations have not been provided before. We show how the equations can be used to simulate psychometric functions in order to see how the fitted threshold and slope parameters vary with the form of summation and the four above parameters. We discuss how the threshold and slope parameters differ between PS and AS, and between SDT and HTT. Finally, we show how the equations can be used to fit psychometric function data from an actual experiment. For this we have conducted a binocular summation experiment, and have used the summation equations to estimate parameters such as the transducer exponent *τ* and to test whether AS or PS better accounts for the data.

*d*′, which represents the distance between the Gaussian-distributed internal noise (denoted by *N*) and signal + noise (denoted by *S*) distributions in standard deviation, or *z*, units. This is illustrated in Figure 3, along with some of the other parameters referred to in this section.

*d*′ is related to stimulus strength by:

$$d' = (gs)^{\tau} \tag{1}$$

where *s* is the strength or amplitude of the stimulus (e.g., its contrast), *g* is a scaling factor that converts stimulus space into *d*′ space and incorporates the reciprocal of the internal noise standard deviation, and *τ* is the exponent of the internal transducer. Under the assumptions of additive internal noise, AS can then be expressed by two equations. The first deals with stimulus components that are of equal strength:

$$d' = \frac{n(gs)^{\tau}}{\sqrt{Q}} \tag{2}$$

where *n* is the number of stimulus components and *Q* the number of monitored channels. As we noted earlier, *Q* can refer either to channels with different stimulus selectivities (e.g., for different orientations or spatial frequencies) or to different possible stimulus locations. The √*Q* relationship embodies the fact that when adding noise distributions, one must add their variances, not their standard deviations. If *σ* is the standard deviation of the internal noise for each monitored channel, the resulting *σ* of *Q* noise distributions is √(*Qσ*^{2}) or *σ*√*Q*; in other words, *σ* increases with *Q* by a factor of √*Q*. Since *σ* is unity (*d*′ is expressed in units of standard deviation), the expression simplifies to √*Q*. The relationship between *n* and *Q* determines whether one is dealing with a Matched or Fixed Attention Window scenario: if *Q* = *n*, it is Matched; if *Q* > *n*, it is Fixed.

*n* given the value of the exponent *τ*.
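As a numerical sketch of the transducer and additive summation rules just described (assuming the forms *d*′ = (*gs*)^{τ} for a single stimulus and *d*′ = *n*(*gs*)^{τ}/√*Q* for *n* equal components; the function names here are ours, not Palamedes routines):

```python
import math

def dprime_single(s, g, tau):
    # d' for a single stimulus: internal response (g*s)^tau
    return (g * s) ** tau

def dprime_additive(s, g, tau, n, Q):
    # Additive summation of n equal components across Q monitored
    # channels: signals add linearly, noise standard deviations add
    # in quadrature, hence the sqrt(Q) in the denominator.
    return n * (g * s) ** tau / math.sqrt(Q)

# Square-law transducer (tau = 2), Matched Attention Window (Q = n):
d1 = dprime_additive(s=0.5, g=2.0, tau=2.0, n=1, Q=1)  # (2*0.5)^2 = 1.0
d2 = dprime_additive(s=0.5, g=2.0, tau=2.0, n=2, Q=2)  # 2*1/sqrt(2) ≈ 1.414
```

Note the Matched Attention Window benefit: doubling *n* with *Q* = *n* improves *d*′ by a factor of √2 for this example, not 2, because the extra monitored channel also contributes noise.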

*Pc* as a function of stimulus strength *s*. In a forced-choice task, the optimal decision rule under the assumptions of SDT, and the rule generally assumed to be employed by observers, is the MAX (maximum) rule schematized in Figure 1, which states that the observer chooses as the target the alternative or interval that produces the biggest signal. For a single stimulus, the standard SDT formula relating *Pc* to *d*′ under the MAX decision rule is:

$$Pc = \int_{-\infty}^{\infty} \phi(t - d')\,\Phi(t)^{M-1}\,dt \tag{4}$$

(Green & Swets, 1966; Kingdom & Prins, 2010; Wickens, 2002), where *t* is the strength of a sample signal, *ϕ*(*t* − *d*′) the height of the signal distribution at *t*, and Φ(*t*) the area under the noise distribution to the left of *t*, as shown in Figure 3.
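The single-stimulus MAX-rule integral described above can be evaluated by numerical integration; a minimal sketch using SciPy (assuming the standard form Pc = ∫ ϕ(t − d′) Φ(t)^{M−1} dt):

```python
from scipy.integrate import quad
from scipy.stats import norm

def pc_max_rule(dprime, M):
    """Proportion correct for one stimulus in an M-AFC task under the
    SDT MAX rule: the target sample must exceed the M-1 noise samples."""
    integrand = lambda t: norm.pdf(t - dprime) * norm.cdf(t) ** (M - 1)
    pc, _ = quad(integrand, -10, 10)  # tails beyond +/-10 are negligible
    return pc

pc_chance = pc_max_rule(0.0, 2)   # d' = 0 gives chance (1/M) -> 0.5
pc_d1 = pc_max_rule(1.0, 2)       # d' = 1, 2AFC -> ~0.760
```

For *M* = 2 the integral has the closed form Φ(*d*′/√2), which is a useful check on the numerical result.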

*M* is the number of alternatives in the forced-choice task. A detailed exposition of this formula is provided in Kingdom and Prins (2010). To obtain equations for the psychometric function for detecting multiple stimuli under AS, we substitute Equations 2 and 3 for *d*′ into Equation 4. This gives equations for *Pc* as a function of *s*, given parameters *g*, *τ*, *M*, *Q*, and *n*. If we denote *AS*_{SDT} for the equal and *AS*_{SDTuneq} for the unequal component stimuli situations, the two psychometric function equations can be denoted respectively by:

$$Pc = \int_{-\infty}^{\infty} \phi\!\left(t - \frac{n(gs)^{\tau}}{\sqrt{Q}}\right)\Phi(t)^{M-1}\,dt \tag{5}$$

$$Pc = \int_{-\infty}^{\infty} \phi\!\left(t - \frac{\sum_{i=1}^{n}(g_{i}s_{i})^{\tau_{i}}}{\sqrt{Q}}\right)\Phi(t)^{M-1}\,dt \tag{6}$$

where in Equation 6, *s*_{1}, *s*_{2} … *s*_{n} are the set of different stimuli, *g*_{1}, *g*_{2} … *g*_{n} their associated scaling factors, and *τ*_{1}, *τ*_{2} … *τ*_{n} their associated transducer exponents. Both of the above functions can be fitted to a plot of *Pc* against *s*, with *M*, *Q*, and *n* as fixed parameters, and *g* and *τ* as free parameters to be estimated. Examples of this usage will be given later.
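A maximum-likelihood fit of *g* and *τ* to *Pc*-versus-*s* data can be sketched as follows. This is an illustrative fitter written for this exposition, not the Palamedes routine; it assumes the AS prediction for *n* equal components takes the form described above (transduced signals summed, noise added in quadrature), and fits noise-free simulated binomial data:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm
from scipy.optimize import minimize

def pc_additive(s, g, tau, M=2, Q=2, n=2):
    # Additive-summation prediction: transduced component signals add,
    # noise standard deviations add in quadrature across Q channels.
    d = n * (g * s) ** tau / np.sqrt(Q)
    f = lambda t: norm.pdf(t - d) * norm.cdf(t) ** (M - 1)
    return quad(f, -10, 10)[0]

def neg_log_likelihood(params, s_levels, n_correct, n_trials):
    g, tau = params
    if g <= 0 or tau <= 0:          # keep the search in a sensible region
        return 1e9
    nll = 0.0
    for s, k, N in zip(s_levels, n_correct, n_trials):
        p = min(max(pc_additive(s, g, tau), 1e-9), 1 - 1e-9)
        nll -= k * np.log(p) + (N - k) * np.log(1 - p)
    return nll

# Binomial data simulated (noise-free, for brevity) from g = 2, tau = 2:
s_levels = [0.2, 0.35, 0.5, 0.65, 0.8]
n_trials = [400] * 5
n_correct = [round(pc_additive(s, 2.0, 2.0) * N)
             for s, N in zip(s_levels, n_trials)]

fit = minimize(neg_log_likelihood, x0=[1.0, 1.5],
               args=(s_levels, n_correct, n_trials), method="Nelder-Mead")
g_hat, tau_hat = fit.x   # should recover values near (2, 2)
```

With real data the same machinery applies, with the number of correct responses at each level taken from the experiment rather than simulated.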

*s* from *Pc*, and the resulting function can be denoted by:

*t* is sample stimulus strength; *ϕ*(*t*) and *ϕ*(*t* − *d*′) are the heights of the noise and signal distributions at *t*; and Φ(*t*) and Φ(*t* − *d*′) are the areas under the noise and signal distributions to the left of *t*. For the unequal component signal strength situation the equation is:

*Pc* to *s*, require in addition an iterative search procedure.

*n*, *Q*, and *τ*, for both PS and AS. Since summation studies typically fit their data with a Weibull psychometric function (e.g., Graham, 1989), for the sake of consistency we have also fitted the simulated data with the Weibull. The Weibull function is defined as:

$$Pc = 1 - (1 - \gamma)\exp\left[-(s/\alpha)^{\beta}\right]$$

where *γ* is the guessing rate (typically 1/*M*), *α* the threshold at the 0.816 proportion correct level, and *β* the slope. We also demonstrate the usage of the equations and associated Palamedes routines for fitting psychophysical data from an actual summation experiment, and show how to determine whether PS or AS gives the better account of the data.
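The 0.816 proportion correct level quoted above can be checked numerically: at *s* = *α* the exponential term equals e^{−1} regardless of *β*, so with *γ* = 0.5 the Weibull passes through 1 − 0.5/e ≈ 0.816 (a minimal sketch assuming the standard form just given):

```python
import math

def weibull(s, alpha, beta, gamma):
    # Weibull psychometric function: guessing rate gamma,
    # threshold alpha, slope beta.
    return 1.0 - (1.0 - gamma) * math.exp(-((s / alpha) ** beta))

# For a 2AFC task (gamma = 0.5), Pc at s = alpha is 1 - 0.5/e ~ 0.816,
# whatever the slope beta.
pc_at_threshold = weibull(s=1.0, alpha=1.0, beta=3.0, gamma=0.5)
```

This is why *α* fitted at a fixed guessing rate always corresponds to the same performance level, making thresholds comparable across conditions.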

^{2}. To facilitate fixation and fusion, the stimuli were surrounded by a circular black ring 1 px wide and 4.8° in diameter. Stimuli were presented for 150 ms with a raised cosine temporal envelope.

*L*), right eye (*R*), and to both eyes (*Bin*); (b) +45° to *L*, *R*, and *Bin*; and (c) −45° to *L*, +45° to *R*, −45° and +45° to *Bin*. The method of constant stimuli was employed. For each subcondition seven logarithmically spaced contrasts (Michelson) were selected based on pilot studies in order to span the range 50%–100% correct. In two of the *Bin* conditions additional contrasts were employed to obtain a good span of the psychometric function.

*L*, *R*, and *Bin* conditions and contrasts were randomly interleaved, such that the observer did not know on each trial whether the stimulus was to the left eye, right eye, or both eyes. The total number of trials per session was 210 (3 subconditions × 7 contrasts × 10 repeats of each subcondition/contrast).

*L*, *R*, and *Bin*) were fitted with both PS and AS models using the PAL_SDT_Summ_MultiplePFML_Fit routines in Palamedes. The routines estimate four parameters: *g*_{L}, *g*_{R}, *τ*_{L}, and *τ*_{R}, which are respectively the scaling factors (*g*) and transducer exponents (*τ*) for the left and right eyes. Standard errors on the fitted parameters were obtained by bootstrap analysis with 200 simulations using PAL_SDT_Summ_MultiplePFML_BootstrapParametric, and goodness-of-fit for each model was measured using the likelihood-ratio test implemented in PAL_SDT_Summ_MultiplePFML_GoodnessOfFit, also with 200 simulations (see Kingdom & Prins, 2010). A separate analysis was conducted in which the exponent on the transducer *τ* was constrained to be equal in both eyes, and for this purpose customized versions of the above routines were employed.

*Pc* against *s* were generated using the AS and PS equations given above, each fitted with a Weibull function in order to obtain the threshold *α* and slope *β*. For a standard 2AFC (*M* = 2) task, we consider how *α* and *β* vary with *n*, *Q*, and *τ*.

*s* values, using Equation 10 for PS and Equation 5 for AS. All eight psychometric functions have input parameters *M* = 2 (and hence a guessing rate *γ* = 0.5 for the Weibull), *Q* = 4, and *τ* = 2 (i.e., a square-law transducer). The variable input parameter is *n*: 1, 2, 3, and 4. By holding *Q* constant at 4, one simulates the Fixed Attention Window scenario, which in practice would necessitate interleaving the component stimuli such that the observer would be unable to match the attentional window to only the target-stimulus mechanisms. The *s* values on the abscissae have been spaced logarithmically in order to reveal any differences in the slopes of the psychometric functions as a function of *n*.

*n* increases, there is a reduction in threshold *α* for both the PS and AS simulations. The slopes *β* decline with *n* for PS, but not for AS. The decline in slope with *n* for PS is due to a decrease in uncertainty (Pelli, 1985): as *n* increases, a greater proportion of the *Q* monitored mechanisms become task-relevant, so fewer task-irrelevant mechanisms contribute only noise to performance. The decline in *β* with *n* is a signature property of PS under SDT for the Fixed Attention Window scenario. With AS under SDT, *α* varies with *n*, *Q*, and *τ* in a straightforward manner according to the formulas in the right-hand column of Table 1. Under AS, *β* is invariant to both *n* and *Q*, but approximately proportional to *τ*.

*α* and *β* vary with *n* and *Q* under PS, again for a square-law transducer (*τ* = 2). The slopes of the *α*-versus-*n* and *β*-versus-*n* plots given for each value of *Q* on the graphs have been calculated from the straight-line fits to each log–log plot. The *α*-versus-*n* slopes on the left range from −0.3 for *Q* = 2 to −0.21 for *Q* = 64. The *β*-versus-*n* slopes (i.e., the "slope of the slope": how the slope of the psychometric function changes with increasing *n*) on the right vary only very slightly around an average of about −0.21. If the different component stimuli are blocked and the observer is assumed to monitor only the signals from relevant mechanisms (*n* = *Q*), which we have termed the Matched Attention Window scenario, the predictions are the dashed lines in Figure 5. Figures related to Figure 5 can be found in Meese and Summers (2012).

*τ* on the *α*-versus-*n* and *β*-versus-*n* slopes is shown in Figure 6. The *α*-versus-*n* slopes are approximately inversely proportional to *τ*, ranging from about −0.6 to −0.15 for a four-fold increase in *τ* from unity. The *β*-versus-*n* slopes vary little as a function of *τ* (or *Q* for that matter), being clustered around −0.21.

*β* with summation (Mayer & Tyler, 1986; Nachmias, 1981). The HTT–PS model uses *β* to predict how *α* should change with summation. Under the commonly used Minkowski approximation (accurate for systems with fewer than 10^{4} mechanisms), the slope of the function relating log *α* to log *n* is −1/*β* (Quick, 1974; Robson & Graham, 1981). The AS models in Table 1 also predict no change in *β* with summation. For these models, however, the exponent *τ* provides sufficient flexibility that any *α*-versus-*n* slope can be achieved (the exponent also determines *β*).

*L*, *R*, and *Bin*, were simultaneously fit with either the PS or the AS model. The fixed parameters in both models were *M* (number of forced-choice alternatives), which was set to 2; *Q* (number of monitored channels), set to 2 (two eyes); and *n* (number of stimuli), set to 1 for the *L* and *R* psychometric functions and 2 for the *Bin* psychometric function. Note that setting *Q* to 2 for all conditions follows from the fact that the *L*, *R*, and *Bin* conditions (for each orientation combination) were interleaved, not blocked, thus conforming to the Fixed Attention Window scenario. The fitted parameters were *g* (stimulus gain) and *τ* (transducer exponent) for each eye, resulting in four estimates: *g*_{L}, *g*_{R}, *τ*_{L}, and *τ*_{R}. The data and model fits are shown in Figure 7, and Table 2 shows the parameter estimates together with bootstrap errors. The *p*-values in the plots are goodness-of-fit values calculated using the likelihood-ratio test (Kingdom & Prins, 2010). As can be seen, many of the models can be rejected using the *p* < 0.05 criterion. It should be noted, however, that most models are likely to be rejected by this criterion given a sufficient number of trials, since no model is perfect (Burnham & Anderson, 2002; Prins, personal communication, January 12, 2014). An alternative to the *p*-value for comparing the models is Akaike's Information Criterion (AIC; Akaike, 1974), and the AS–PS AIC differences are given in Table 2. A negative AIC difference implies that the AS model is better, a positive AIC difference that the PS model is better.

*τ* constrained to be the same in both eyes, a reasonable assumption given that the physiology of the two eyes' pathways is presumably very similar (we are grateful to Tim Meese for suggesting this model variant).

*τ* in Table 3 average to 2.5 for G. S. and 1.8 for F. K. and are close to the square-law transducer for contrast transduction found in previous studies (Heeger, 1991; Legge & Foley, 1980; Meese et al., 2006; Meese & Summers, 2009, 2012; Stromeyer & Klein, 1974).

*SR*s), and Minkowski summation (*m*) measures obtained from fitting each psychometric function separately with a Weibull. The Minkowski expression for summation is:

$$S_{cmb} = \left(\sum_{i=1}^{n} S_{i}^{\,m}\right)^{1/m} \tag{14}$$

where *S*_{i} is sensitivity to the *i*th stimulus component, *S*_{cmb} sensitivity to the combination stimulus, *n* the number of stimuli, and *m* the Minkowski exponent that expresses the inverse of the degree of summation (note that *m* is not the same as *M*, the number of alternatives/intervals in the forced-choice task). If we replace sensitivity in Equation 14 with the reciprocal of the Weibull threshold *α*, Minkowski *m* for the binocular summation experiment can be expressed in the form:

$$\left(\frac{1}{\alpha_{Bin}}\right)^{m} = \left(\frac{1}{\alpha_{L}}\right)^{m} + \left(\frac{1}{\alpha_{R}}\right)^{m} \tag{15}$$

where *α*_{Bin}, *α*_{L}, and *α*_{R} are the *Bin*, *L*, and *R* thresholds, respectively. Using iterative search one can find the value of *m* that satisfies this equation. The *SR* is the ratio of monocular to binocular thresholds, and expresses directly how much better two eyes are compared to one. Rather than average the (log) values of *SR* obtained from the left- and right-eye monocular/binocular threshold ratios, we can calculate a single *SR* from *m* (we are grateful to Tim Meese for suggesting this method) using the relation:

$$SR = 2^{1/m} \tag{16}$$
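Solving for Minkowski *m* and the corresponding summation ratio requires only a one-dimensional root search; a sketch assuming the threshold form (1/α_{Bin})^{m} = (1/α_{L})^{m} + (1/α_{R})^{m} and, for *n* = 2, SR = 2^{1/m}:

```python
from scipy.optimize import brentq

def minkowski_m(alpha_bin, alpha_l, alpha_r):
    """Find the Minkowski exponent m satisfying
    (1/alpha_bin)^m = (1/alpha_l)^m + (1/alpha_r)^m."""
    f = lambda m: ((1 / alpha_l) ** m + (1 / alpha_r) ** m
                   - (1 / alpha_bin) ** m)
    return brentq(f, 0.5, 20.0)

def summation_ratio(m):
    # Single SR for n = 2 derived from m.
    return 2.0 ** (1.0 / m)

# Quadratic (m = 2) summation: binocular threshold is 1/sqrt(2) of the
# equal monocular thresholds, giving SR = sqrt(2).
m = minkowski_m(alpha_bin=2 ** -0.5, alpha_l=1.0, alpha_r=1.0)  # -> 2.0
sr = summation_ratio(m)  # -> ~1.414
```

The bracketing interval [0.5, 20] assumes *m* lies in the range typically reported for summation experiments.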

*SR*s for *n* = 2, as well as Minkowski *m* values, are given in Table 4. These values are in keeping with those reported in the aforementioned binocular summation studies, though it is worth noting that higher binocular *SR*s have been observed for some types of cross-oriented stimuli, for example low-spatial-frequency luminance (Meese & Baker, 2011) and chromatic (Gheiratmand, Meese, & Mullen, 2013) gratings.

*IEEE Transactions on Automatic Control*, 19(6), 716–723.

*Journal of Vision*, 14(1):30, 1–21, http://www.journalofvision.org/content/14/1/30, doi:10.1167/14.1.30.

*Vision Research*, 48(21), 2336–2344.

*Perception & Psychophysics*, 30(3), 266–276.

*Model selection and multimodel inference* (2nd ed.): Models versus full reality. New York: Springer-Verlag.

*Journal of Vision*, 10(3):20, 1–15, http://www.journalofvision.org/content/10/3/20, doi:10.1167/10.3.20.

*Journal of Vision*, 12(12):16, 1–17, http://www.journalofvision.org/content/12/12/16, doi:10.1167/12.12.16.

*Journal of Vision*, 13(1):2, 1–13, http://www.journalofvision.org/content/13/1/2, doi:10.1167/13.1.2.

*Visual pattern analyzers*. Oxford, UK: Oxford University Press.

*Vision Research*, 11(3), 251–259.

*Vision Research*, 27, 1997–2007.

*Signal detection theory and psychophysics*. New York: John Wiley & Sons.

*Computational models of visual processing* (pp. 119–133). Cambridge, MA: MIT Press.

*Vision Research*, 14(6), 365–368.

*Psychophysics: A practical introduction*. London: Academic Press.

*Journal of the Optical Society of America A*, 30, 300–315.

*Journal of the Optical Society of America A*, 70, 1458–1471.

*Vision Research*, 43(5), 519–530.

*Journal of the Optical Society of America A*, 3(8), 1166–1172.

*Journal of Vision*, 10(8):14, 1–21, http://www.journalofvision.org/content/10/8/14, doi:10.1167/10.8.14.

*i-Perception*, 2, 159–182.

*Journal of Vision*, 6(11):7, 1224–1243, http://www.journalofvision.org/content/6/11/7, doi:10.1167/6.11.7.

*Journal of Vision*, 9(4):7, 1–16, http://www.journalofvision.org/content/9/4/7, doi:10.1167/9.4.7.

*Journal of Vision*, 12(11):9, 1–28, http://www.journalofvision.org/content/12/11/9, doi:10.1167/12.11.9.

*Vision Research*, 40, 2101–2113.

*Vision Research*, 21, 215–223.

*Journal of the Optical Society of America A*, 2, 1508–1532.

*Nature*, 152, 698–699.

*Palamedes: Matlab routines for analyzing psychophysical data*. Retrieved from http://www.palamedestoolbox.org.

*Kybernetik*, 16(2), 65–67.

*Journal of the Optical Society of America A*, 68, 116–121 (1978).

*Vision Research*, 21(3), 409–418.

*Vision Research*, 33, 2773–2788.

*Journal of the Optical Society of America A*, 61(9), 1176–1186.

*Vision Research*, 62, 44–56.

*Journal of Experimental Psychology: Human Perception & Performance*, 40(5), 2091–2100.

*Journal of the Optical Society of America A*, 20, 2197–2215.

*Vision Research*, 14, 1409–1420.

*Journal of Vision*, 13(14):12, 1–16, http://www.journalofvision.org/content/13/14/12, doi:10.1167/13.14.12.

*Psychological Review*, 61(6), 401–409.

*Proceedings of the Royal Society B: Biological Sciences*, 278(1710), 1365–1372.

*Vision Research*, 40, 3121–3144.

*Elementary signal detection theory*. Oxford, UK: Oxford University Press.

*z* values and probabilities. Figure A1 (top) shows a standard normal probability distribution in which the abscissa is given in units of standard deviation, or *z*, units. The ordinate in the graph is probability density, denoted by *ϕ*. Probability density values are relative likelihoods, specifically derivatives, or rates of change, of probabilities. In order to convert intervals between *z* values into probabilities, one has to integrate the area under the curve between those *z* values. If one integrates the curve from −∞ up to some value of *z*, the result is Φ, termed the cumulative normal. Because the total area under the standard normal distribution is by definition unity, the cumulative normal distribution ranges from 0 to 1. The cumulative normal gives the probability that a random variable from a standardized normal distribution is less than or equal to *z*.
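The ϕ-to-Φ relationship can be illustrated with standard-normal routines (a sketch using SciPy's norm object, one of several ways to evaluate these functions):

```python
from scipy.stats import norm

phi_0 = norm.pdf(0.0)    # height of the standard normal at z = 0, ~0.3989
Phi_0 = norm.cdf(0.0)    # area to the left of z = 0: 0.5
Phi_1 = norm.cdf(1.0)    # ~0.8413

# The probability of falling in an interval is the difference of two
# cumulative values, e.g. between z = -1 and z = +1:
p_interval = norm.cdf(1.0) - norm.cdf(-1.0)  # ~0.6827
```

Note that the density value at a point is not itself a probability; only differences of Φ between *z* values are.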

*d*′

*N* shows the distribution of noisy internal responses to a single blank interval where no target stimulus is present. The one labeled *S* shows the distribution of noisy internal responses to the interval containing the target. Representing the sensory magnitudes of *N* and *S* as probability distributions means that on any trial the actual sensory magnitudes will be random samples from those distributions. The relative probabilities of particular samples are given by the heights of the distributions at the sample points.

*d*′ as follows:

$$Pc = \int_{-\infty}^{\infty} \phi(t - d')\,\Phi(t)^{M-1}\,dt$$

where *M* is the number of alternatives/intervals from which the target has to be chosen (Green & Swets, 1966; Wickens, 2002). An exposition of the derivation of this equation can be found in Kingdom and Prins (2010).

*S*_{1} and *S*_{2}. Since we are dealing with PS, we assume that the two stimuli are detected by independent mechanisms, and that both mechanisms are monitored to maximize the chance of detecting the target. The number of monitored mechanisms is symbolized by *Q*, and the number of those that contain signal by *n*. The observer monitors only the relevant mechanisms, so *n* = *Q*.

*M*-AFC task: Select the interval/alternative with the biggest signal (i.e., the MAX rule). However, because the observer is monitoring two mechanisms, a correct decision will be made if either *S*_{1} or *S*_{2} produces the biggest signal. In order to calculate the expected *Pc* for this situation, we must first calculate the probability that *S*_{1} will produce the biggest signal, second the probability that *S*_{2} will produce the biggest signal, and then add the two probabilities together. Note that for either one of the two stimuli to produce the biggest signal, a sample from it must be bigger than *both* noise signals from the null interval *and* the signal from the other stimulus in the target interval.

*t* from *S*_{1} will be greater than a noise signal is the probability that the noise signal will be less than *t*, which from Figure 3 is Φ(*t*). Thus the probability that *t* from *S*_{1} will be greater than both noise signals is Φ(*t*) × Φ(*t*) = Φ(*t*)^{2}. By the same argument, the probability that a sample *t* from *S*_{1} will be greater than the signal from the other stimulus *S*_{2} is Φ(*t* − *d*′). To obtain the probability that the sample *t* from *S*_{1} will be greater than both noise signals and the other stimulus signal, we multiply these two probabilities together: Φ(*t*)^{2}Φ(*t* − *d*′). And to obtain the probability *P* that a random sample *t* from *S*_{1} will produce the biggest signal, we integrate this product across all values of *t*, taking into account the relative probability of obtaining *t*, which is given by its height in the stimulus distribution: *ϕ*(*t* − *d*′). The result is:

$$P = \int_{-\infty}^{\infty} \phi(t - d')\,\Phi(t)^{2}\,\Phi(t - d')\,dt \tag{A5}$$

*M* and *n*. In general the number of noise signals in the non-target interval(s) is *n*(*M* − 1), and the number of other stimulus signals in the target interval is *n* − 1. Incorporating these values into Equation A5, we obtain an equation that gives proportion correct *Pc*:

$$Pc = n\int_{-\infty}^{\infty} \phi(t - d')\,\Phi(t)^{n(M-1)}\,\Phi(t - d')^{\,n-1}\,dt \tag{A6}$$
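The Matched Attention Window equation just derived can be evaluated numerically; at *n* = 1 it should collapse to the standard single-stimulus *M*-AFC formula, which is a useful consistency check (a sketch; the generalized integrand follows our reading of the derivation above):

```python
from scipy.integrate import quad
from scipy.stats import norm

def pc_matched(dprime, n, M):
    # Pc = n * Integral of phi(t-d') Phi(t)^(n(M-1)) Phi(t-d')^(n-1) dt
    f = lambda t: (n * norm.pdf(t - dprime)
                   * norm.cdf(t) ** (n * (M - 1))
                   * norm.cdf(t - dprime) ** (n - 1))
    return quad(f, -10, 10)[0]

def pc_single(dprime, M):
    # Standard single-stimulus M-AFC MAX-rule formula.
    f = lambda t: norm.pdf(t - dprime) * norm.cdf(t) ** (M - 1)
    return quad(f, -10, 10)[0]

check_n1 = abs(pc_matched(1.5, 1, 2) - pc_single(1.5, 2))  # ~0
pc_two = pc_matched(1.0, 2, 2)  # two chances to win: beats one stimulus
```

With *n* = 2 and the same per-mechanism *d*′, *Pc* exceeds the single-stimulus value: this is the probability summation benefit.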

*n* = *Q*. Now consider the Fixed Attention Window scenario in Figure 2, in which some of the monitored mechanisms in the target interval contain only internal noise (*n* < *Q*). In this case it is possible that the irrelevant noise-alone mechanisms in the target interval might produce the biggest signal, resulting in a correct decision under the MAX rule. There are *Q* − *n* noise signals in the target interval and *QM* − *n* − 1 other noise signals with which each target noise signal must be compared. And there are *n* stimulus signals with which each noise signal in the target interval must be compared. If we follow the same logic that led us to Equation A6, the result is the second part of the equation below, which has been added to Equation A6 (with the exponent *n*(*M* − 1) in Equation A6 changed to *QM* − *n*):

$$Pc = n\int_{-\infty}^{\infty} \phi(t - d')\,\Phi(t)^{QM-n}\,\Phi(t - d')^{\,n-1}\,dt + (Q - n)\int_{-\infty}^{\infty} \phi(t)\,\Phi(t)^{QM-n-1}\,\Phi(t - d')^{\,n}\,dt \tag{A7}$$

*n* = *Q*. We designate Equation A7 as the general equation for computing PS under SDT for both Matched and Fixed Attention Window scenarios, when all *n* stimuli produce the same *d*′. This equation is invertible, because only a single *d*′ value is involved; however, there is no simple solution to the inversion, so it has to be implemented by an iterative search procedure.

*S*_{1} and *S*_{2} have different *d*′ values: Call these *d*′_{1} and *d*′_{2}. Take first again the Matched Attention Window scenario. Following the same argument as above, we begin with the probability that a sample *t* from *S*_{1} will be bigger than the two noise signals in the null interval. The probability that sample *t* will be bigger than the one other stimulus signal is Φ(*t* − *d*′_{2}). Integrating across all *t* samples of *S*_{1} and then adding in the corresponding integral for the probability that *S*_{2} will provide the biggest signal, we obtain:

$$Pc = \int_{-\infty}^{\infty} \phi(t - d'_{1})\,\Phi(t)^{2}\,\Phi(t - d'_{2})\,dt + \int_{-\infty}^{\infty} \phi(t - d'_{2})\,\Phi(t)^{2}\,\Phi(t - d'_{1})\,dt \tag{A8}$$

*M*-AFC task this extends to:

$$Pc = \int_{-\infty}^{\infty} \phi(t - d'_{1})\,\Phi(t)^{2(M-1)}\,\Phi(t - d'_{2})\,dt + \int_{-\infty}^{\infty} \phi(t - d'_{2})\,\Phi(t)^{2(M-1)}\,\Phi(t - d'_{1})\,dt \tag{A9}$$

which is equation B10 in Shimozaki et al. (2003). Extending the same logic to the three-stimulus case, with *d*′s *d*′_{1}, *d*′_{2}, and *d*′_{3}, we obtain:

$$Pc = \int_{-\infty}^{\infty} \phi(t - d'_{1})\,\Phi(t)^{3(M-1)}\,\Phi(t - d'_{2})\,\Phi(t - d'_{3})\,dt + \int_{-\infty}^{\infty} \phi(t - d'_{2})\,\Phi(t)^{3(M-1)}\,\Phi(t - d'_{1})\,\Phi(t - d'_{3})\,dt + \int_{-\infty}^{\infty} \phi(t - d'_{3})\,\Phi(t)^{3(M-1)}\,\Phi(t - d'_{1})\,\Phi(t - d'_{2})\,dt \tag{A10}$$

*d*′ values that must be compared to each of the others under consideration. However, if we replace the right-hand part of each integral, containing the terms with different *d*′ values, by the product notation, and then use the sum notation to add together the different integrals, we can generalize Equation A10 to *n* signals to obtain:

$$Pc = \sum_{i=1}^{n}\int_{-\infty}^{\infty} \phi(t - d'_{i})\,\Phi(t)^{n(M-1)}\prod_{j\neq i}\Phi(t - d'_{j})\,dt \tag{A11}$$

*Q*-monitored mechanisms we apply the same logic as was applied to Equations A5 and A6. The result is:

$$Pc = \sum_{i=1}^{n}\int_{-\infty}^{\infty} \phi(t - d'_{i})\,\Phi(t)^{QM-n}\prod_{j\neq i}\Phi(t - d'_{j})\,dt + (Q - n)\int_{-\infty}^{\infty} \phi(t)\,\Phi(t)^{QM-n-1}\prod_{j=1}^{n}\Phi(t - d'_{j})\,dt \tag{A12}$$

Equation A12 gives *Pc* for *n* independently detected stimuli with internal stimulus strengths *d*′_{1}, *d*′_{2}, *d*′_{3} … *d*′_{n}, for an *M*-AFC task with *Q* monitored mechanisms, according to the MAX decision rule under the assumptions of SDT.
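The general unequal-*d*′ expression can be implemented directly, following the Σ/Π form described above; with all *d*′ values set to zero it should return chance performance, and it should be symmetric under reordering of the *d*′ values (an illustrative sketch, not the Palamedes implementation):

```python
from scipy.integrate import quad
from scipy.stats import norm

def prod_cdf(t, dprimes, skip):
    # Product over j != skip of Phi(t - d'_j); skip=None keeps all terms.
    p = 1.0
    for j, dj in enumerate(dprimes):
        if j != skip:
            p *= norm.cdf(t - dj)
    return p

def pc_ps_general(dprimes, Q, M):
    """PS under SDT for stimuli with (possibly unequal) d' values,
    Q monitored mechanisms, M-AFC task, MAX decision rule."""
    n = len(dprimes)
    total = 0.0
    for i, di in enumerate(dprimes):
        # Probability that stimulus i yields the largest sample.
        f = lambda t: (norm.pdf(t - di) * norm.cdf(t) ** (Q * M - n)
                       * prod_cdf(t, dprimes, skip=i))
        total += quad(f, -10, 10)[0]
    # Noise-only monitored mechanisms in the target interval can also
    # carry the largest sample when Q > n.
    g = lambda t: ((Q - n) * norm.pdf(t) * norm.cdf(t) ** (Q * M - n - 1)
                   * prod_cdf(t, dprimes, skip=None))
    return total + quad(g, -10, 10)[0]

pc_unequal = pc_ps_general([0.5, 1.5], Q=2, M=2)
pc_equal = pc_ps_general([1.0, 1.0], Q=2, M=2)
```

Because no closed-form inverse exists here either, converting a target *Pc* back to stimulus strengths would again require an iterative search, exactly as noted for the equal-*d*′ case.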