Abstract
Classical work on decision-making has demonstrated systematic deviations from normative behavior. Recent experiments on movement planning, however, have shown that subjects can be indistinguishable from optimality in their visuo-motor decisions (e.g., Trommershaeuser et al., Spatial Vision 2003). Unlike the classical case, the uncertainty in the visuo-motor case is internal to the subjects; it arises from their motor variability—variability in executing a motor action.
We asked whether observers can also combine the intrinsic variability in their perceptual representation with an externally-specified reward-and-penalty structure to make optimal decisions. We extended a paradigm used to study the visual extrapolation of static contour geometry (Singh & Fulvio, PNAS 2005) to examine observers' decisions under risk when extrapolating curved motion trajectories.
Methods: Observers viewed a dot moving along a parabolic trajectory disappear behind the straight edge of a half-disk occluder. Their task was to “catch” the dot on the opposite, curved side by adjusting the angular position of an arc, or “mitt” (length = 20°). In the risky conditions, a double-mitt was used, comprising a green reward region and a red penalty region with partial overlap. Observers' gains/losses were determined by the part of the double-mitt that “caught” the dot. The variables manipulated were: trajectory curvature (0.1185, 0.237 deg−1), penalty value (−200, −500), and mitt overlap (−0.5, −0.25, +0.25, +0.5).
Results: Observers' performance in the baseline condition was used to estimate each individual's bias and variability. Based on these estimates, predictions of optimal shift and optimal score were computed via maximization of expected gain. We found that observed shifts were well predicted by optimal shifts. Moreover, observer efficiency (observed score / optimal score) was high (80%–114%). The results indicate that observers are implicitly aware of the intrinsic variability in their perceptual representation, and can combine it with an externally-specified reward-and-penalty structure to make near-optimal decisions.