Abstract
When we move our hands, both visual and proprioceptive inputs provide information about the motion. In a two-part study, we show that subjects integrate these two modalities in a statistically optimal (Bayesian) fashion. In the first part, we measured the reliability of subjects' estimates of movement direction from proprioceptive or visual motion information alone. In the proprioceptive condition, a robot arm moved a manipulandum held by the subject 15 cm back and forth along linear trajectories whose directions were sampled uniformly from −25 to −35 and 55 to 65 degrees relative to the midsagittal plane. Subjects judged whether the motion direction presented in a second interval was clockwise relative to that of the first interval. In the visual condition, subjects made similar judgments of spatiotemporally correlated noise patterns that moved with the same velocity profiles as the robot in the proprioceptive condition. We manipulated the reliability of the visual information by using two different signal-to-noise ratios (SNRs) in the visual stimuli. In the second part, the robot moved subjects' hands behind a mirror while they viewed similar visual patterns, spatially co-aligned with the manipulandum they held. The visual motion either matched that of the robot or deviated from it in direction by ±10 degrees. Subjects adjusted a dial to indicate their perceived motion direction. Subjects completed two counterbalanced sessions of proprioceptive and visual discrimination trials and two sessions of the visual-proprioceptive adjustment task. The reliability of each modality was estimated from psychometric functions fitted to the proprioception-only and vision-only discrimination data. Relative cue weights were estimated by regressing subjects' direction judgments in the two adjustment sessions against the directions suggested by each cue. Subjects gave slightly more weight to vision in the high visual SNR condition but relied less on the visual cue in the low SNR condition, as predicted by the threshold data.
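
A minimal sketch of the analysis logic described above, assuming hypothetical threshold values and synthetic adjustment-task data (all numbers and variable names below are illustrative, not the study's measurements): the unimodal thresholds give each cue's variance, the reliability-weighted combination rule gives the predicted visual weight, and a linear regression of reported direction on the two cue directions gives the empirical weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: cue variances from the unimodal discrimination thresholds ---
# Hypothetical JNDs (deg) read off fitted psychometric functions.
jnd_vision_high = 4.0   # illustrative value, not the study's measurement
jnd_vision_low = 9.0    # illustrative value
jnd_proprio = 6.0       # illustrative value

var_v_high, var_v_low, var_p = jnd_vision_high**2, jnd_vision_low**2, jnd_proprio**2

# --- Step 2: Bayes-optimal (reliability-weighted) prediction of the visual weight ---
# w_v = (1/var_v) / (1/var_v + 1/var_p) = var_p / (var_v + var_p)
w_v_pred_high = var_p / (var_v_high + var_p)
w_v_pred_low = var_p / (var_v_low + var_p)

# --- Step 3: empirical weight from the adjustment task via linear regression ---
# Synthetic trials: proprioceptive directions plus visual conflicts of -10, 0, +10 deg.
theta_p = rng.uniform(-35.0, -25.0, size=200)
theta_v = theta_p + rng.choice([-10.0, 0.0, 10.0], size=200)
w_v_true = 0.65  # hypothetical "ground truth" used only to generate fake reports
reports = w_v_true * theta_v + (1.0 - w_v_true) * theta_p + rng.normal(0.0, 2.0, size=200)

# Regress reported direction on the two cue directions: report ~ w_v*theta_v + w_p*theta_p
X = np.column_stack([theta_v, theta_p])
(w_v_emp, w_p_emp), *_ = np.linalg.lstsq(X, reports, rcond=None)

print(f"predicted w_v, high visual SNR: {w_v_pred_high:.2f}")
print(f"predicted w_v, low visual SNR:  {w_v_pred_low:.2f}")
print(f"regressed w_v (synthetic data): {w_v_emp:.2f}, w_p: {w_p_emp:.2f}")
```

Under this reliability-weighted scheme, a lower visual SNR inflates the visual variance and therefore shifts weight toward proprioception, which is the pattern the abstract reports.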