We used Mathematica's implementation of
Brent's (2002) principal-axis method to find maxima (to 2 digits of accuracy) of the function mapping parameter values to log likelihood. The full signal-detection model has 12 free parameters: four
\((m_{\theta} , b_{\theta} , m_{\phi},\hbox{ and }b_{\phi})\) for the discriminant lines, plus one
\((\delta^{\prime})\) for the lapse rate, plus one
\((\rho)\) for the channel covariance, plus two
\(( p \hbox{ and }p^{\prime})\) for the power-function transducers, plus four channel gains
\((\alpha, \beta , \alpha^{\prime} ,\hbox{ and }\beta^{\prime})\). In addition to this full model, we fit a version constrained to exclude overlap between channel sensitivities (called “leakage” by
Raphael & Morgan, 2016; and
Morgan, 2017). Specifically, each channel was prohibited from responding to more than one dimension of modulation, i.e.
\(\beta = 0\hbox{ and }\alpha^{\prime} = 0\). This constraint significantly reduced the model's maximum likelihood
\([\chi^2(2) > 6, p < 0.05]\) only for JAS's data with the unscrambled chessboards (see
Figure 9). We also fit a version constrained to exclude any correlation between channel outputs (by forcing
\(\rho = 0 \)). This constraint did not significantly reduce the model's maximum likelihood for any of the data sets [in all cases,
\(\chi^2(1) < 2.2, p > 0.18\); see
Figure 9]. Finally, we fit a version constrained to exclude both overlap and correlation. This constraint did significantly reduce the model's maximum likelihood for each of the data sets [in all cases,
\(\chi^2(3) > 8, p < 0.05\); see
Figure 9].
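The nested-model comparisons above follow the standard likelihood-ratio recipe: twice the drop in maximum log likelihood is referred to a chi-square distribution whose degrees of freedom equal the number of constrained parameters (2 for the no-leakage model, 1 for the no-correlation model, 3 for both constraints). A minimal sketch, using hypothetical log-likelihood values rather than the fitted ones:

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_constrained, df):
    """Likelihood-ratio test for a nested model: twice the drop in
    maximum log likelihood is compared against a chi-square
    distribution with df equal to the number of constrained parameters."""
    stat = 2.0 * (loglik_full - loglik_constrained)
    p = chi2.sf(stat, df)
    return stat, p

# Hypothetical maximum log likelihoods (NOT the paper's values):
ll_full = -250.0
ll_no_leakage = -253.4   # beta = alpha' = 0      -> 2 fewer parameters
ll_no_rho = -250.9       # rho = 0                -> 1 fewer parameter
ll_both = -254.8         # all three constraints  -> 3 fewer parameters

for name, ll, df in [("no leakage", ll_no_leakage, 2),
                     ("no correlation", ll_no_rho, 1),
                     ("neither", ll_both, 3)]:
    stat, p = lr_test(ll_full, ll, df)
    print(f"{name}: chi2({df}) = {stat:.2f}, p = {p:.3f}")
```

With these illustrative numbers, only constraints whose statistic exceeds the critical chi-square value for the matching degrees of freedom would be rejected at the 0.05 level.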
Psychometric functions illustrating fits of the full model appear in
Figure 8. Perhaps the most salient feature of this figure is the downward trend of some amber curves, illustrating
P(Identification|∼Detection). High Threshold Theory predicts that this conditional probability should be independent of modulation depth. By contrast, in the absence of attentional lapses and finger errors (i.e. when δ′ = 0), Signal Detection Theory predicts that this conditional probability should mirror
P(Identification|Detection) as modulation depth increases. Some of the amber curves have a kink on the right side, where the curve suddenly shoots back up toward a probability of 0.5. This is due to non-zero lapse rates, which provide the only explanation in the model for failures to detect massively suprathreshold modulations.
A visual comparison of the amber curves with the amber points suggests little compelling evidence for
P(Identification|∼Detection) dropping to zero. With few exceptions, the amber symbols tend to congregate around 0.5, consistent with High Threshold Theory. However, we cannot draw any firm conclusions in this regard. For each of the conditions summarized by one panel in
Figure 8, the adaptive staircases produced just 16 (out of a total of 189) above-threshold trials on which MJM failed to detect the modulation. One fairly strong conclusion that can be drawn from these results is this: despite their potential value for deciding between Signal Detection and High Threshold Theories, suprathreshold detection errors are too rare to pursue with any vigor.
Perhaps surprisingly, the full signal-detection model has no trouble accounting for MJM's decline in
P(Identification|Detection) with increasingly large modulations of stimulus contrast in scrambled chessboards (as illustrated by the red curve in
Figure 8d). Examine
Figure 7b to see how this arises. Notice that the “X” channel has non-zero gain for both blur modulations and contrast modulations. (Contrast signals “leak” into the channel that responds to blur modulations.) Consequently, the pie charts are not confined to the vertical axis. Unlike the ellipse in
Figure 7a, which was centered on the origin, the ellipse in
Figure 7b is centered on the coordinates (−2.91, −0.93), which correspond to the expected channel outputs for a first-interval contrast modulation having a depth that is 10 dB greater than MJM's detection threshold. On trials such as these,
P(Identification|Detection) can be visualized as the ratio between two areas: the intersection between the ellipse and the bottom quadrant and the intersection between the ellipse and the union of the bottom and left quadrants. This ratio is 0.47.
P(Identification|∼Detection) varies with the ratio between two different areas: the intersection between the ellipse and the top quadrant and the intersection between the ellipse and the union of the top and right quadrants. This ratio is 0.87.
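These area ratios can be approximated by Monte Carlo integration over the bivariate normal distribution that the ellipse summarizes. The sketch below is illustrative only: it assumes unit channel variances, a hypothetical correlation of 0.3, and discriminant lines along the diagonals (so the response regions are the diagonal quadrants of the channel-output plane), since the fitted slopes and intercepts are not restated here. Only the center (−2.91, −0.93) comes from the text, so the printed ratios will not match 0.47 and 0.87.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumptions (NOT the paper's fitted values): unit channel variances,
# a hypothetical correlation rho, and discriminant lines y = x and
# y = -x, so the four response regions are the diagonal quadrants.
mean = np.array([-2.91, -0.93])  # expected channel outputs (from the text)
rho = 0.3                        # hypothetical channel correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

x, y = rng.multivariate_normal(mean, cov, size=1_000_000).T
bottom = y < -np.abs(x)
top = y > np.abs(x)
left = x < -np.abs(y)
right = x > np.abs(y)

# P(Identification | Detection): mass in the bottom quadrant relative
# to mass in the union of the bottom and left quadrants.
p_id_given_det = bottom.sum() / (bottom | left).sum()

# P(Identification | ~Detection): mass in the top quadrant relative
# to mass in the union of the top and right quadrants.
p_id_given_not_det = top.sum() / (top | right).sum()

print(p_id_given_det, p_id_given_not_det)
```

Replacing the diagonal quadrants with the fitted discriminant lines, and the assumed variances and correlation with the fitted values, would recover the ratios quoted above.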
Figure 9 summarizes how well the various models fit each set of data.