Simon Barthelmé, Pascal Mamassian; Learning confidence in a visual task. Journal of Vision 2008;8(6):975. doi: 10.1167/8.6.975.
Life's most crucial decisions must often be made in the face of severe uncertainty. One ecologically relevant form of uncertainty comes from the intrinsic limits on the accuracy and reliability of biological visual systems. A moving shadow can be that of a cloud or that of a hawk: a wise hare will take its visual uncertainty into account when choosing whether to flee or stay. For the evaluation of visual uncertainty to be maximally useful, it must show both sensitivity and calibration. Variations in uncertainty must be detectable, and the subjective probabilities assigned to hypotheses must match relative frequencies in the environment. The traditional way of approaching this problem in psychophysics is by way of confidence ratings: sensitivity and calibration require that observers' confidence ratings match their observed probability of being correct. However, it is hard to draw definite conclusions from confidence ratings, notably because of large interindividual differences. Here we introduce a task which limits the impact of differences in strategy across observers. We induced uncertainty by having observers make orientation judgments in white noise. On every trial, observers had the option either to make a judgment on a stimulus, or to skip it in favour of an as-yet-unseen stimulus. If the observer chose to respond to the first stimulus, the trial ended. If they chose to skip it, another stimulus was presented, to which they had to respond. Maximising performance in this task requires both sensitivity and calibration, and no symbolic probability judgment is required. Results show that observers can make use of the "skip" response to improve performance. Furthermore, observers learned to calibrate their expected probability of success in this task in the absence of trial-by-trial feedback, indicating that the evaluation of uncertainty can benefit from unsupervised learning.
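The logic of the skip task can be illustrated with a small signal-detection simulation. This is an illustrative sketch under standard Gaussian-evidence assumptions, not the authors' model: the names `dprime`, `threshold`, and `accuracy`, the equal-variance evidence model, and the specific parameter values are all assumptions introduced here. An observer draws noisy evidence about the first stimulus, converts it into a posterior probability of being correct, and skips to a fresh stimulus whenever that probability falls below a confidence criterion.

```python
import math
import random

def accuracy(dprime, threshold, n=200_000, seed=0):
    """Monte-Carlo accuracy in a skip task for an equal-variance
    Gaussian observer. The true orientation on each trial is +1 or -1;
    internal evidence is x ~ N(s * dprime/2, 1)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        s1 = rng.choice([-1, 1])
        x1 = s1 * dprime / 2 + rng.gauss(0, 1)
        # Posterior probability of being correct if we answer sign(x1):
        # p = 1 / (1 + exp(-dprime * |x1|)) for this evidence model.
        p1 = 1 / (1 + math.exp(-dprime * abs(x1)))
        if p1 >= threshold:
            # Confident enough: answer the first stimulus.
            correct += (x1 > 0) == (s1 > 0)
        else:
            # Skip: a second stimulus is shown and must be answered.
            s2 = rng.choice([-1, 1])
            x2 = s2 * dprime / 2 + rng.gauss(0, 1)
            correct += (x2 > 0) == (s2 > 0)
    return correct / n

# An observer who never skips vs. one with a calibrated criterion set
# near the baseline accuracy (here d' = 1.5, so baseline ~ 0.77).
never_skip = accuracy(1.5, 0.0)
calibrated = accuracy(1.5, 0.77)
```

A well-calibrated observer skips exactly when the expected accuracy on the current stimulus is below what a fresh stimulus would offer on average, so the calibrated criterion yields higher overall accuracy than always answering the first stimulus; a criterion that is too lax or too strict erodes that benefit, which is why maximising performance here requires both sensitivity and calibration.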