Abstract
In planning speeded reaching movements where each possible outcome is associated with a reward or penalty, the participant is in effect searching a large collection of possible movement plans for the one that maximizes expected reward. It is plausible that, in searching among movements, participants take advantage of the highly ordered structure of the lotteries: small changes in a movement plan in space and time produce small changes in expected value. We developed a structured lottery task involving choice among eleven ordered lotteries, each representing an explicit tradeoff between probability of reward and amount of possible reward, with each successive lottery promising a higher probability of winning a smaller amount. In a first experiment we measured the performance of 120 participants, each of whom was assigned to one of four conditions, each condition a different set of ordered lotteries. Each participant made exactly one choice, precluding any contribution of learning or “hill climbing”. We compared human decision performance to ideal performance maximizing expected gain. Overall, participants in different conditions altered their strategy appropriately, and their expected winnings were 83% to 90% of the maximum possible. However, across all conditions, participants on average chose lotteries with a higher probability of reward but lower potential reward than optimal. The failure was small in monetary terms but highly patterned. In a second experiment, we examined whether an additional 120 participants treated winning itself as an additive intrinsic reward that tipped the tradeoff toward increasing probability of winning at the expense of amount of reward. We rejected this possibility. We conjecture that participants’ failure to maximize expected value is primarily due to a distortion in their use of probability information.
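The selection problem described above, and the additive intrinsic-reward account tested in the second experiment, can be sketched numerically. The probabilities, prize amounts, and intrinsic-reward value below are illustrative assumptions, not the paper's actual stimuli; the sketch only shows how an expected-value maximizer chooses among ordered lotteries, and how an added reward for winning per se would shift the choice toward higher-probability, lower-prize lotteries.

```python
# Illustrative sketch (hypothetical values, not the experimental stimuli):
# eleven ordered lotteries in which probability of winning increases while
# the prize decreases. An ideal expected-value maximizer picks
# argmax_i p[i] * g[i].
probs  = [0.05 * (i + 1) for i in range(11)]   # 0.05, 0.10, ..., 0.55
prizes = [125 - 10 * i for i in range(11)]     # 125, 115, ..., 25

evs = [p * g for p, g in zip(probs, prizes)]
best = max(range(len(evs)), key=evs.__getitem__)

# Experiment 2's hypothesis: winning itself carries an additive intrinsic
# reward w, so participants would maximize p[i] * (g[i] + w) instead.
w = 50  # hypothetical intrinsic value of winning
evs_w = [p * (g + w) for p, g in zip(probs, prizes)]
best_w = max(range(len(evs_w)), key=evs_w.__getitem__)

# The intrinsic-reward term shifts the maximizer toward lotteries with
# higher probability of winning but smaller prizes.
print(best, best_w)
```

With these assumed values the pure expected-value maximizer picks lottery 6 (0-indexed), while the intrinsic-reward maximizer picks lottery 8, i.e. a higher-probability, lower-prize option, which is the qualitative pattern the second experiment tested and rejected.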
Acknowledgements: LTM was supported by a Guggenheim Fellowship; TMG was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of the Interior (DOI) contract D10PC20023.