Uncertainty models generate modulations within second-order kernels. We show this for the diagonal (variance) of the second-order kernels, i.e. diag(ĥ_{2}) ≠ 0, and for the simplest version of an uncertainty model, where the response o to stimulus i of dimensionality d is o = max(i). Each value of i is drawn from a standard normal distribution. We first consider the simplest linear case o = 〈i〉, where 〈 〉 denotes the mean across the vector, for which both o(i_{[0]}) and o(i_{[1]}) are Gaussian distributed. We approximate a 2AFC experiment as yes-no with unbiased criterion (threshold point) k (the validity of this approximation is confirmed by simulations; see Extra Figure 12). This truncates the distribution of o(i_{[0]}) into two portions, one for correct rejections and one for false alarms. The resulting distributions match those obtained from the truncation of o(i_{[1]}), hits and misses respectively, except for a shift and sign inversion of intensity values (Green & Swets, 1966).
n_{[0,1]} (noise fields on correct rejections) and n_{[1,1]} (hits) therefore have equal expected pixelwise variances σ_{[0,1]}^{2} and σ_{[1,1]}^{2}, as do n_{[0,0]} (false alarms) and n_{[1,0]} (misses). It follows that diag(ĥ_{2}) = σ_{[1,1]}^{2} + σ_{[0,0]}^{2} − σ_{[0,1]}^{2} − σ_{[1,0]}^{2} = 0, offering a simplified demonstration of the result that a linear filter with unbiased criterion returns a featureless second-order kernel (see Neri, 2004, and Extra Figure 1). We now examine the distributions of
n_{[0,1]} and n_{[0,0]} for the simplified uncertainty model. When d = 1 this model is equivalent to the linear model and diag(ĥ_{2}) = 0. When d > 1, each pixel of n_{[0,1]} is distributed according to a normal distribution truncated at k, so σ_{[0,1]}^{2} < 1. Each pixel of n_{[0,0]} is distributed with expected mean μ_{[0,0]} and expected variance σ_{[0,0]}^{2} according to a mixture of two distributions: the distribution of the maximum of d normally distributed variables, which follows a Gumbel distribution (Kotz & Nadarajah, 2000) with mean μ_{M} and variance σ_{M}^{2}; and a softly truncated normal distribution softT (where the truncation point t is not fixed but determined by the distribution of the maximum) with mean μ_{T} and variance σ_{T}^{2}, for which we know that σ_{[0,1]}^{2} < σ_{T}^{2} < 1 (because t > k) and μ_{T} < 0 (because t > 0). We can write μ_{[0,0]} = pμ_{T} + qμ_{M} and σ_{[0,0]}^{2} = pσ_{T}^{2} + qσ_{M}^{2} + p(μ_{T} − μ_{[0,0]})^{2} + q(μ_{M} − μ_{[0,0]})^{2}, where p = (d − 1)/d and q = 1/d (by symmetry, q is the probability that a given pixel attains the maximum). We want to show that
σ_{[0,0]}^{2} > σ_{[0,1]}^{2}, which, substituting μ_{[0,0]} = pμ_{T} + qμ_{M} into the mixture variance, can be rewritten as σ_{[0,0]}^{2} = pσ_{T}^{2} + qσ_{M}^{2} + (μ_{T} − μ_{M})^{2}(1 − q)q > σ_{[0,1]}^{2}. We can replace σ_{T}^{2} with σ_{[0,1]}^{2} because σ_{[0,1]}^{2} < σ_{T}^{2}, and replace μ_{T} with 0 because μ_{T} < 0 and μ_{M} > 0; both substitutions can only decrease the left-hand side, so it suffices to prove the resulting inequality, which (after subtracting pσ_{[0,1]}^{2} from both sides and dividing by q) reads σ_{M}^{2} + μ_{M}^{2}(1 − q) > σ_{[0,1]}^{2}. Because σ_{[0,1]}^{2} < 1, it suffices to show that σ_{M}^{2} + μ_{M}^{2}(d − 1)/d > 1.
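The target inequality σ_{M}^{2} + μ_{M}^{2}(d − 1)/d > 1 can be probed numerically: the sketch below estimates the moments of the maximum of d standard normals by Monte Carlo and compares them with the closed-form expressions quoted in the text (sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_moments(d, n=1_000_000):
    """Monte Carlo mean and variance of the maximum of d standard normals."""
    m = rng.standard_normal((n, d)).max(axis=1)
    return m.mean(), m.var()

# Closed forms quoted in the text for the expected maximum and its variance.
mu3 = 3 / (2 * np.sqrt(np.pi))
var3 = (4 * np.pi - 9 + 2 * np.sqrt(3)) / (4 * np.pi)
mu4 = mu3 * (1 + (2 / np.pi) * np.arcsin(1 / 3))
var4 = 1 + np.sqrt(3) / np.pi - mu4 ** 2
mu5 = 5 / (4 * np.sqrt(np.pi)) * (1 + (6 / np.pi) * np.arcsin(1 / 3))

for d, mu, var in [(3, mu3, var3), (4, mu4, var4)]:
    m_hat, v_hat = max_moments(d)
    # closed form vs Monte Carlo for σ_M² + μ_M²(d − 1)/d; both exceed 1
    print(d, round(var + mu ** 2 * (d - 1) / d, 3),
          round(v_hat + m_hat ** 2 * (d - 1) / d, 3))

# d = 5: the more stringent inequality μ_M²(d − 1)/d > 1
print(5, round(mu5 ** 2 * 4 / 5, 3))
```

Note that the same quantity evaluated at d = 2 (μ_{M} = 1/√π, σ_{M}^{2} = 1 − 1/π) falls below 1, consistent with the special treatment of d = 2 later in the text.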
μ_{M}^{2} and σ_{M}^{2} are expressible in closed form for d up to 5; for d = 3 (μ_{M} = 3/(2√π); σ_{M}^{2} = (4π − 9 + 2√3)/(4π)) and d = 4 (μ_{M} = 3/(2√π)[1 + (2/π)sin^{−1}(1/3)]; σ_{M}^{2} = 1 + √3/π − μ_{M}^{2}) the inequality is true, so σ_{[0,0]}^{2} > σ_{[0,1]}^{2}. For d = 5 (μ_{M} = 5/(4√π)[1 + (6/π)sin^{−1}(1/3)]) the more stringent inequality μ_{M}^{2}(d − 1)/d > 1 is true. Because μ_{M} increases monotonically with d, it remains true for d > 5. We note that when d → ∞ we have μ_{[0,0]} → μ_{T} and σ_{[0,0]}^{2} → σ_{T}^{2}, i.e. the pixelwise distribution of n_{[0,0]} approaches that of softT. For d → ∞ we also have that μ_{M}, and consequently k, become increasingly large, so that both softT and n_{[0,1]} approach the standard normal distribution, and we therefore expect σ_{[0,0]}^{2} ≈ σ_{[0,1]}^{2}. We then have that σ_{[0,0]}^{2} − σ_{[0,1]}^{2} is a non-monotonic function of d: it equals 0 both for d = 1 and for d → ∞, and is positive otherwise (although we have not directly demonstrated this for d = 2). A similar result can be demonstrated for σ_{[1,1]}^{2} − σ_{[1,0]}^{2} by restricting the analysis to pixels that do not contain the target, for which very similar logic applies. In conclusion, for pixels that do not contain the target we have σ_{[0,0]}^{2} − σ_{[0,1]}^{2} > 0 and σ_{[1,1]}^{2} − σ_{[1,0]}^{2} > 0 (when d > 2), so that diag(ĥ_{2}) > 0. The case d = 2 is confirmed by simulations (see Extra Figure 12).
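The overall conclusion can be reproduced with a short end-to-end simulation of the max-rule observer. The sketch below assumes the target adds a fixed contrast to one stimulus element (a detail not specified in this passage); d, the signal strength, and the trial count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, signal, n = 4, 1.5, 400_000   # illustrative parameters

absent = rng.standard_normal((n, d))
present = rng.standard_normal((n, d))
present[:, 0] += signal          # assumed: target adds contrast to pixel 0

o0, o1 = absent.max(axis=1), present.max(axis=1)   # max-rule responses

# Unbiased criterion: bisect for equal false-alarm and miss rates.
lo, hi = float(o0.min()), float(o1.max())
for _ in range(60):
    k = (lo + hi) / 2
    if (o0 > k).mean() > (o1 <= k).mean():
        lo = k                   # false alarms still outnumber misses: raise k
    else:
        hi = k

pix = 1                              # a pixel that never contains the target
v01 = absent[o0 <= k, pix].var()     # correct rejections
v00 = absent[o0 > k, pix].var()      # false alarms
v10 = present[o1 <= k, pix].var()    # misses
v11 = present[o1 > k, pix].var()     # hits

print(round(v00 - v01, 3), round(v11 - v10, 3))   # both positive
print(round(v11 + v00 - v01 - v10, 3))            # diagonal term, positive
```

At this d the two variance differences, and hence the diagonal term σ_{[1,1]}^{2} + σ_{[0,0]}^{2} − σ_{[0,1]}^{2} − σ_{[1,0]}^{2}, come out clearly positive, in line with the argument above.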