Abstract
Signal-detection theory (SDT) is one of the most popular frameworks for analyzing data from studies of human behavior, including investigations of confidence. SDT-based analyses of confidence deliver both a standard estimate of sensitivity (d') and a second estimate based only on high-confidence decisions: meta-d'. The extent to which meta-d' estimates fall short of d' estimates is regarded as a measure of metacognitive inefficiency, quantifying the contamination of confidence by additional noise. These analyses rely on a key but questionable assumption: that repeated exposures to an input will evoke a normally shaped distribution of perceptual experiences (the normality assumption). Via modelling and analyses inspired by an experiment, we show that when distributions of experiences do not conform to the normality assumption (because of skew or excess kurtosis), meta-d' can be systematically underestimated relative to d', even though both statistics are informed by a common set of data subject to a common source of noise. This discrepancy results from extrapolating the shape of an entire distribution of experiences from the subset of trials that resulted in high-confidence target categorizations. These trials reside in the right tail of the distribution of experiences, where any departure from a normal shape has a disproportionate effect relative to the distribution as a whole. Our data highlight that SDT-based analyses of confidence do not provide a ground-truth measure of human metacognitive inefficiency.
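As an illustrative aside (not the paper's own analysis), the core mechanism can be sketched numerically. If an equal-variance normal model is used to infer d' from hit and false-alarm rates that were actually generated by skew-normal evidence distributions, the implied d' depends on where the decision criterion sits; a far-right (high-confidence) criterion can imply a lower d' than a central one. The skew parameter, separation, and criterion placements below are hypothetical choices made purely for illustration.

```python
# Illustrative sketch only (hypothetical parameters, not the paper's model):
# with skewed evidence distributions, the d' implied by a normal-theory
# analysis varies with criterion placement, so a high-confidence
# (right-tail) criterion can imply a lower d' than a central one.
from scipy.stats import norm, skewnorm

SKEW = 4.0  # hypothetical skew of both evidence distributions
MU = 1.5    # hypothetical separation between noise and signal distributions

def implied_dprime(criterion: float) -> float:
    """d' that an equal-variance normal model would infer from the hit and
    false-alarm rates produced by skew-normal evidence distributions."""
    hit_rate = skewnorm.sf(criterion, SKEW, loc=MU)  # P(signal evidence > c)
    fa_rate = skewnorm.sf(criterion, SKEW, loc=0.0)  # P(noise evidence > c)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

for label, c in [("central (type-1) criterion", MU / 2),
                 ("high-confidence criterion", MU / 2 + 1.5)]:
    print(f"{label}: implied d' = {implied_dprime(c):.2f}")
```

Were the evidence distributions truly normal, both criteria would imply the same d'; the divergence in this sketch is driven entirely by the shape of the right tail, mirroring the point above about extrapolating a whole distribution from high-confidence trials.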