Uncertainty models generate modulations within second-order kernels. We show this for the diagonal (variance) of the second-order kernels, i.e. diag(h₂) ≠ 0, and for the simplest version of an uncertainty model, where the response o to stimulus sᵢ of dimensionality d is o = max(sᵢ). Each value of sᵢ is drawn from a standard normal distribution. We first consider the simplest linear case o = 〈sᵢ〉, where 〈 〉 denotes the mean across the vector, for which both o(sᵢ[0]) and o(sᵢ[1]) are Gaussian distributed. We approximate a 2AFC experiment as yes-no with unbiased criterion (threshold point) k (the validity of this approximation is confirmed by simulations, see Extra Figure 12). This truncates the distribution of o(sᵢ[0]) into two portions, one for correct rejections and one for false alarms. The resulting distributions match those obtained from the truncation of o(sᵢ[1]), hits and misses respectively, except for a shift and sign-inversion of intensity values (Green & Swets, 1966).
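This setup can be made concrete with a short simulation sketch of the yes-no approximation for the linear observer; the dimensionality, target amplitude, target placement on one pixel, and the mid-point criterion used below are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Sketch of the yes-no approximation for the linear observer o = <s>.
# Dimensionality d, target amplitude, target placement on pixel 0 and
# the mid-point criterion k are illustrative assumptions.
rng = np.random.default_rng(0)
d, amplitude, n_trials = 4, 1.0, 1_000_000

s0 = rng.standard_normal((n_trials, d))        # target-absent stimuli
s1 = rng.standard_normal((n_trials, d))        # target-present stimuli
s1[:, 0] += amplitude                          # target adds intensity to pixel 0
o0, o1 = s0.mean(axis=1), s1.mean(axis=1)      # linear responses, both Gaussian
k = 0.5 * (o0.mean() + o1.mean())              # unbiased criterion (mid-point)

# Truncation at k splits o(s[0]) into correct rejections / false alarms
# and o(s[1]) into misses / hits; with an unbiased criterion the two
# truncations mirror each other (a shift plus sign-inversion).
print("P(false alarm):", (o0 > k).mean(), " P(miss):", (o1 <= k).mean())
print("E[o | false alarm]:", o0[o0 > k].mean(),
      " 2k - E[o | miss]:", 2 * k - o1[o1 <= k].mean())
```

Both printed pairs should agree to within sampling error, illustrating the claimed match between the two truncated distributions.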
n[0,1] (noise fields on correct rejections) and n[1,1] (hits) therefore have equal expected pixelwise variances σ[0,1]² and σ[1,1]², and so do n[0,0] (false alarms) and n[1,0] (misses). It follows that diag(ĥ₂) = σ[1,1]² + σ[0,0]² − σ[0,1]² − σ[1,0]² = 0, offering a simplified demonstration of the result that a linear filter with unbiased criterion returns a featureless second-order kernel (see Neri, 2004 and Extra Figure 1). We now examine the distributions of n[0,1] and n[0,0] for the simplified uncertainty model. When d = 1 this model is equivalent to the linear model and diag(ĥ₂) = 0. When d > 1, each pixel of
n[0,1] is distributed according to a truncated (at k) normal distribution, so σ[0,1]² < 1. Each pixel of n[0,0] is distributed with expected mean μ[0,0] and expected variance σ[0,0]² according to a mixture of two distributions: the distribution of the maximum of d normally distributed variables, which approximately follows a Gumbel distribution (Kotz & Nadarajah, 2000), with mean μM and variance σM²; and a softly truncated normal distribution softT (where the truncation point t is not fixed but is determined by the distribution of the maximum), with mean μT and variance σT², for which we know that σ[0,1]² < σT² < 1 (because t > k) and μT < 0 (because t > 0). We can write
μ[0,0] = pμT + qμM and σ[0,0]² = pσT² + qσM² + p(μT − μ[0,0])² + q(μM − μ[0,0])², where p = (d − 1)/d and q = 1/d. We want to show that
σ[0,0]² > σ[0,1]², which can be rewritten as σ[0,0]² = pσT² + qσM² + p(μT − μM)²q > σ[0,1]². We can replace σT² with σ[0,1]² because σ[0,1]² < σT², as well as replace μT with 0 because μT < 0 and μM > 0; both replacements can only decrease the left-hand side, so the original inequality follows if the reduced one holds. The inequality is now σM² + μM²(1 − q) > σ[0,1]². Because σ[0,1]² < 1, it is sufficient to show that σM² + μM²(d − 1)/d > 1.
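This inequality can be probed numerically by estimating the mean and variance of the maximum of d standard normal variables by simulation; the sample size and seed below are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of the inequality sigma_M^2 + mu_M^2 (d-1)/d > 1,
# where mu_M and sigma_M^2 are the mean and variance of the maximum of
# d standard normal variables. Sample size and seed are arbitrary.
rng = np.random.default_rng(0)
n = 1_000_000

for d in (3, 4, 5):
    m = rng.standard_normal((n, d)).max(axis=1)
    mu_M, var_M = m.mean(), m.var()
    lhs = var_M + mu_M**2 * (d - 1) / d
    print(f"d={d}: mu_M={mu_M:.4f}  sigma_M^2={var_M:.4f}  "
          f"lhs={lhs:.4f}  > 1: {lhs > 1}")

# closed form for d = 3, for comparison with the estimates above
print("d=3 closed form:", 3 / (2 * np.sqrt(np.pi)),
      (4 * np.pi - 9 + 2 * np.sqrt(3)) / (4 * np.pi))
```

The Monte Carlo estimates for d = 3 should match the closed-form values to within sampling error, and the left-hand side should exceed 1 for each d.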
μM² and σM² are expressible in closed form for d up to 5; for d = 3 (μM = 3/(2√π); σM² = (4π − 9 + 2√3)/(4π)) and d = 4 (μM = 3/(2√π)·[1 + (2/π)sin⁻¹(1/3)]; σM² = 1 + √3/π − μM²) the inequality is true, so σ[0,0]² > σ[0,1]². For d = 5 (μM = 5/(4√π)·[1 + (6/π)sin⁻¹(1/3)]) the more stringent inequality μM²(d − 1)/d > 1 is true. Because
μM increases monotonically with d, it remains true for d > 5. We note that when d → ∞ we have μ[0,0] → μT and σ[0,0]² → σT², i.e. the pixelwise distribution of n[0,0] approaches that of softT. For d → ∞ we also have that μM, and consequently k, grow increasingly large, so that both softT and n[0,1] approach the standard normal distribution, and we therefore expect σ[0,0]² ≈ σ[0,1]². We then have that σ[0,0]² − σ[0,1]² is a non-monotonic function of d, whereby it equals 0 for both d = 1 and d → ∞ and is positive otherwise (although we have not directly demonstrated this for d = 2). A similar result can be demonstrated for σ[1,1]² − σ[1,0]² by restricting the analysis to pixels that do not contain the target, for which very similar logic applies. In conclusion, for pixels that do not contain the target we have σ[0,0]² − σ[0,1]² > 0 and σ[1,1]² − σ[1,0]² > 0 (when d > 2), so that diag(ĥ₂) > 0. The case d = 2 is confirmed by simulations (see Extra Figure 12).