Abstract
Virtually any decision people make comes with a feeling of confidence about its correctness. What cortical computations might underlie this sense of confidence? Recent Bayesian theories propose that confidence is computed (in part) from the degree of uncertainty in sensory information. However, direct neural evidence for this hypothesis is currently lacking. Here, we test this hypothesis in human cortex using a combination of psychophysics, fMRI, and computational modeling. Participants viewed gratings of random orientation (0-179 degrees), while their brain activity was measured with fMRI. Critically, no physical noise was added to the stimuli, as such noise could otherwise act as an external cue to confidence. After the grating disappeared from the screen, observers reported the orientation of the grating as well as their level of confidence in this perceptual judgment. The uncertainty associated with stimulus representations in human visual cortex (V1-V3) was quantified using a probabilistic decoding approach (van Bergen, Ma, Pratte & Jehee, 2015; van Bergen & Jehee, 2018). We used this decoded uncertainty to compare the human data to simulated data from a Bayesian observer and from two alternative models implementing heuristic strategies for computing confidence. As predicted by the Bayesian model, we found that reported confidence tracks the degree of uncertainty contained in visual cortical activity. More specifically, when the cortical representation of a stimulus was more precise, observers reported higher levels of confidence. We moreover discovered that activity in the insular, anterior cingulate, and prefrontal cortices reflected both this sensory uncertainty and reported confidence, in ways predicted by the Bayesian observer model. Altogether, these results support recent normative theories and suggest that probabilistic sensory information guides the computation of one’s sense of confidence.
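
A minimal formal sketch of the Bayesian link between sensory uncertainty and confidence that the abstract describes (illustrative notation only; the paper's full observer model may be specified differently): if the observer holds a posterior $p(s \mid m)$ over orientation $s$ given its internal measurement $m$, one common formalization takes confidence to be the posterior probability that the reported orientation $\hat{s}$ lies within some criterion distance $\delta$ of the true orientation,

$$\text{confidence} \;=\; \int_{\hat{s}-\delta}^{\hat{s}+\delta} p(s \mid m)\, \mathrm{d}s \;=\; \operatorname{erf}\!\left(\frac{\delta}{\sqrt{2}\,\sigma}\right) \quad \text{for a Gaussian posterior } p(s \mid m) = \mathcal{N}\!\left(s;\, \hat{s},\, \sigma^{2}\right),$$

so confidence increases monotonically as the posterior width $\sigma$ (here standing in for the uncertainty decoded from visual cortical activity) decreases. This is the qualitative relationship the study tests: more precise cortical stimulus representations should be accompanied by higher reported confidence.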