Abstract
Perceptual decisions are made on the basis of complex and often ambiguous sensory evidence. Whilst the outcome of sensory evidence accumulation may reflect the mere crossing of a decision criterion, it is well known that human observers have access to a far richer representation of their sensory environment. Indeed, it is this rich representation that allows observers to assess the accuracy of their perceptual decisions, as in confidence judgements, and to offer a second-choice judgement when deciding between more than two options. In this experiment, we examine human observers’ ability to use this graded, multidimensional representation of sensory evidence to make second-choice judgements, and how this second-choice process relates to their metacognitive ability. On each trial we presented observers with a series of oriented Gabors, drawn from one of three categories, defined by circular Gaussian distributions centred on −60°, 0° and 60° relative to vertical. Observers’ task was to decide which distribution the Gabors were drawn from, by accumulating the evidence for each category over the stimuli presented to them. After making their first choice, observers then provided a second choice: which category is the next most likely? Over pairs of trials, observers also chose the trial on which they were more confident that their first choice was correct. A computational model was fit to each observer’s first-choice decisions to estimate their internal sensory noise. An ideal observer limited only by this sensory noise was then defined to predict performance in the second-choice and confidence judgements. Relative to the ideal observer, observers underperformed in the second-choice task but overperformed in the metacognitive decision. This suggests that these two ways in which observers access the sensory evidence are dissociable, and thus that these decisions target different aspects of the sensory evidence representation, utilizing distinct computations.
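The sketch below illustrates the kind of evidence-accumulation and ideal-observer computation described above: Gabor orientations are drawn from one of three circular Gaussian (von Mises) category distributions, corrupted by internal sensory noise, and the observer accumulates log-likelihoods over the sequence, taking the best category as the first choice and the runner-up as the second choice. All numerical parameters (concentration, noise level, number of Gabors per trial) are illustrative assumptions, not values from the study, and the likelihood uses the category distribution directly rather than its convolution with the sensory-noise distribution.

```python
import numpy as np

# Hypothetical parameters -- illustrative only, not taken from the experiment.
CATEGORY_MEANS = np.array([-60.0, 0.0, 60.0])  # category centres, degrees from vertical
KAPPA_CATEGORY = 5.0    # assumed concentration of each category distribution
SIGMA_SENSORY = 10.0    # assumed internal sensory noise (degrees)
N_GABORS = 10           # assumed number of Gabors per trial

rng = np.random.default_rng(0)

def vonmises_logpdf_orientation(theta_deg, mu_deg, kappa):
    """Log-density of a von Mises on the 180-deg orientation circle
    (angles are doubled so that theta and theta + 180 are equivalent)."""
    delta = np.deg2rad(2.0 * (theta_deg - mu_deg))
    return kappa * np.cos(delta) - np.log(2.0 * np.pi * np.i0(kappa))

def simulate_trial(true_category):
    """Draw Gabor orientations from the true category distribution and
    corrupt them with Gaussian sensory noise to give noisy percepts."""
    doubled = rng.vonmises(np.deg2rad(2.0 * CATEGORY_MEANS[true_category]),
                           KAPPA_CATEGORY, size=N_GABORS)
    orientations = np.rad2deg(doubled) / 2.0
    return orientations + rng.normal(0.0, SIGMA_SENSORY, size=N_GABORS)

def ideal_observer(percepts):
    """Accumulate log-likelihood for each category over the stimulus sequence.
    First choice = best category; second choice = runner-up; the margin
    between the top two can serve as a confidence signal for trial pairs."""
    loglik = np.array([
        vonmises_logpdf_orientation(percepts, mu, KAPPA_CATEGORY).sum()
        for mu in CATEGORY_MEANS
    ])
    order = np.argsort(loglik)[::-1]
    confidence = loglik[order[0]] - loglik[order[1]]
    return order[0], order[1], confidence

first, second, confidence = ideal_observer(simulate_trial(true_category=1))
print(first, second, confidence)
```

In this simplified scheme, comparing the `confidence` margin across two trials would give the ideal observer's answer to the paired confidence judgement, against which human metacognitive performance can be benchmarked.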
Acknowledgement: CNRS, INSERM