Abstract
Humans evaluate the certainty of their perceptual estimates. Such self-monitoring enables us to detect errors in perceptual estimates even without external feedback and to maintain a sense of confidence that corresponds to objective performance. Human perceptual estimates are subject to two types of error: the inherent noisiness of sensory signals induces random variability in estimates, and the interaction between prior knowledge and sensory noise leads to systematic biases. However, it remains unclear whether humans can monitor these two distinct kinds of error (bias and variability) in their own estimates and adjust their confidence accordingly. Here, we examined how behavioral bias and variability are reflected in the self-evaluation of perceptual estimates. Subjects estimated the location of a hidden target from a briefly presented (75 ms) dot cloud (σ = 2.25, 4.5, or 9°) centered on the target location and reported their confidence in each estimate. As expected, subjects' estimation reports were not only variable across trials but also biased toward the mean of the prior distribution. Crucially, subjects' confidence reports remained constant across target locations, even though estimation performance worsened as the target moved farther from the prior mean. In other words, subjects could have substantially improved their metacognitive accuracy by taking the biases in their estimates into account, but they did not. We observed consistent results when subjects reported their confidence through subjective rating or post-decision wagering, with or without trial-by-trial feedback, respectively. Lastly, we varied the prior uncertainty of the target location and confirmed that the observed results do not arise simply because subjects based their confidence on the visual appearance of the stimuli. Our findings indicate that metacognitive evaluations of perceptual estimates reflect behavioral variability, but not bias.
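For intuition about why the prior–noise interaction produces a location-dependent bias, consider a standard Gaussian ideal-observer account (a minimal sketch under that assumption; the abstract itself does not specify the observer model or these symbols). With a prior $\mathcal{N}(\mu_{p}, \sigma_{p}^{2})$ over target locations and a sensory measurement $x$ corrupted by noise $\sigma_{s}^{2}$, the posterior-mean estimate is a precision-weighted average:
\[
\hat{s} \;=\; \frac{\sigma_{s}^{2}}{\sigma_{s}^{2} + \sigma_{p}^{2}}\,\mu_{p} \;+\; \frac{\sigma_{p}^{2}}{\sigma_{s}^{2} + \sigma_{p}^{2}}\,x ,
\qquad
\mathbb{E}\!\left[\hat{s} - s \mid s\right] \;=\; \frac{\sigma_{s}^{2}}{\sigma_{s}^{2} + \sigma_{p}^{2}}\,(\mu_{p} - s).
\]
Under this sketch, the expected bias grows in proportion to the target's distance from the prior mean, so estimation error increases for targets far from $\mu_{p}$, which is precisely the component of error that subjects' confidence reports failed to track.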