We present experimental and computational evidence regarding the estimation of visual and proprioceptive directional information during forward, visually driven arm movements. We presented noisy directional proprioceptive and visual stimuli simultaneously and in isolation midway through a pointing movement. Directional proprioceptive stimuli were created by brief force pulses, which varied in direction and were applied to the fingertip shortly after movement onset. Subjects indicated the perceived direction of the stimulus after each trial. We measured unimodal performance in trials in which we presented only the visual or only the proprioceptive stimulus. When we presented simultaneous but conflicting bimodal information, subjects' perceived direction fell in between the visual and proprioceptive directions. The judged mean direction matched the maximum likelihood estimation (MLE) predictions but did not show the expected improvement in reliability compared to unimodal performance. We present an alternative model, probabilistic cue switching (PCS), which is consistent with our data. According to this model, subjects base their bimodal judgments on only one of the two directional cues in a given trial, with relative choice probabilities proportional to the average stimulus reliability. These results suggest that subjects based their decisions on a probability mixture of both modalities without integrating information across modalities.

*most reliable* estimate of the state of the world, i.e., the estimate for which the variance of the combination of both cues is as low as possible. In the case of independent, Gaussian-distributed sensory cues, this combination rule corresponds to a weighted linear combination of the individual cues according to their relative reliabilities, i.e., cues with higher reliability contribute more to the combined percept.

^{3} (approximately a full arm movement pivoting at the shoulder). The haptic workspace of the PHANToM was spatially aligned with a 3-dimensional visual scene, which subjects viewed through stereoscopic shutter glasses (CrystalEyes 3 liquid-crystal shutter glasses, 120 Hz) via a mirror that reflected the images from the computer screen. The shutter glasses were fixed relative to the screen location, and in addition we used a chin rest to constrain head movements during the experiment. White noise presented via headphones masked possible sounds created by the mechanics of the PHANToM. The experiment was run on a PC (2.8 GHz; 1 GB RAM) using C++ code to control the apparatus, present the stimuli, and track the finger.

_{P} and the visual estimate of the direction Ŝ_{V}. Both estimates are noisy, and the amount of noise can be quantified by measuring the variance of responses to repetitions of the same unimodal stimulus. We told our subjects that the visual cue represented the same direction as the proprioceptive cue, and all subjects believed this instruction until the end of the experiment. If both estimates are noisy estimates of the same direction, a less noisy combined estimate Ŝ_{MLE} can be computed by weighted averaging:

Ŝ_{MLE} = *w*_{V} Ŝ_{V} + *w*_{P} Ŝ_{P}, (1)

with weights *w*_{i} determined by the relative reliability

*R*, i.e., the relative inverse variance of the corresponding single-cue estimates:

*w*_{i} = *R*_{i}/(*R*_{V} + *R*_{P}) and *R*_{i} = 1/*σ*_{i}^{2}, *i* ∈ {V, P}. (2)

Here, we assume that the noise distributions are independent. Both the mean and the variance of the resulting combined estimate can then be calculated for a given mean and variance of the unimodal estimates. The mean *μ*_{MLE} of the combined estimate is always between the two individual estimates, namely

*μ*_{MLE} = *w*_{V}*μ*_{V} + *w*_{P}*μ*_{P}, (3)

and the variance *σ*_{MLE}^{2} is always less than either of the two individual variances, namely

*σ*_{MLE}^{2} = *σ*_{V}^{2}*σ*_{P}^{2}/(*σ*_{V}^{2} + *σ*_{P}^{2}). (4)

*μ*_{V}, and the judgment variability (as a measure of perceptual variability) *σ*_{V}^{2} in visual-only trials, and, respectively, *μ*_{P} and *σ*_{P}^{2} of judged directions in proprioceptive-only trials. The unimodal performance was computed for each subject individually for the proprioceptive stimulus and for both visual reliabilities (V60 vs. V90), averaged across all stimulus directions. Based on these unimodal estimates, we computed the expected bimodal bias *μ*_{MLE} (Equation 3) relative to the presented proprioceptive stimulus direction and the expected variability *σ*_{MLE}^{2} (Equation 4) of perceived angles for each cue conflict (−30°, 0°, 30°) and for both conditions of visual reliability (V60 vs. V90), individually for each subject. We then compared the computed MLE predictions, *μ*_{MLE} and *σ*_{MLE}^{2}, for all subjects and conditions with the observed data in the corresponding bimodal trials, *μ*_{VP} and *σ*_{VP}^{2}.
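These predictions are straightforward to compute from the unimodal data. A minimal sketch (our own illustration, not the authors' analysis code), using the V60 unimodal variabilities reported in the Results (*σ*_{V} = 30°, *σ*_{P} = 42°) and a hypothetical 30° cue conflict:

```python
def mle_prediction(mu_v, sigma_v, mu_p, sigma_p):
    """Predict bimodal mean and variance from unimodal estimates
    via reliability-weighted averaging (Equations 2-4)."""
    r_v, r_p = 1 / sigma_v**2, 1 / sigma_p**2  # reliabilities = inverse variances
    w_v = r_v / (r_v + r_p)                    # visual weight (Equation 2)
    w_p = 1 - w_v                              # proprioceptive weight
    mu_mle = w_v * mu_v + w_p * mu_p           # combined mean (Equation 3)
    var_mle = 1 / (r_v + r_p)                  # combined variance (Equation 4)
    return mu_mle, var_mle

# V60 unimodal variabilities from the Results; cue conflict of 30 degrees
# (visual at 30, proprioceptive at 0 -- illustrative values):
mu, var = mle_prediction(mu_v=30.0, sigma_v=30.0, mu_p=0.0, sigma_p=42.0)
```

Note that the combined variance depends only on the unimodal variances, so the predicted bimodal variability is always below the better unimodal one.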

*w*_{i} in Equation 2 denote the choice probabilities *p*(*i*) for PCS. Thus, in every single trial, the subject either perceives the stimulus in the direction of the visual estimate with probability *p*(*V*) = *w*_{V}, and will then have the visual bias *μ*_{V} and variance *σ*_{V}^{2}, or perceives the stimulus in the direction of the proprioceptive estimate with probability *p*(*P*) = *w*_{P} = 1 − *p*(*V*), and will then have the proprioceptive bias *μ*_{P} and variance *σ*_{P}^{2}. We modeled PCS by computing unimodal estimates for each subject individually, based on the data measured in the unimodal conditions. We sampled 100,000 times either from a Gaussian with *μ*_{V} and *σ*_{V}, with probability *p*(*V*) = *w*_{V}, or from a Gaussian with *μ*_{P} and *σ*_{P}, with probability *p*(*P*) = *w*_{P}. The mean and standard deviation of the resulting distribution are the predictions of the PCS model, i.e., *μ*_{PCS} and *σ*_{PCS}. We then compared the computed PCS predictions, *μ*_{PCS} and *σ*_{PCS}, for all subjects and conditions with the observed data in the corresponding bimodal trials, *μ*_{VP} and *σ*_{VP}.
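This sampling procedure can be sketched as follows; a minimal reimplementation (illustrative, not the original code), assuming the same unimodal parameters as above and a hypothetical visual choice probability *w*_{V} = 0.66:

```python
import random

def pcs_prediction(mu_v, sigma_v, mu_p, sigma_p, w_v, n=100_000, seed=0):
    """Probabilistic cue switching: each simulated trial follows either the
    visual or the proprioceptive distribution, chosen with probability w_v."""
    rng = random.Random(seed)
    samples = [
        rng.gauss(mu_v, sigma_v) if rng.random() < w_v else rng.gauss(mu_p, sigma_p)
        for _ in range(n)
    ]
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var ** 0.5  # mu_PCS, sigma_PCS

# Illustrative values: sigma_V = 30 deg, sigma_P = 42 deg, 30 deg cue conflict
mu_pcs, sigma_pcs = pcs_prediction(30.0, 30.0, 0.0, 42.0, w_v=0.66)
```

The mean of the mixture equals the same weighted average as MLE, but its spread includes both the unimodal variances and the separation between the two cue means, so *σ*_{PCS} exceeds the better unimodal variability whenever the cues conflict.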

*p* > 0.5). For cue conflicts of either 30° or −30°, the mean perceived angle differed from the force pulse direction for both visual noise levels (all *p* < 0.05). For V90, the mean perceived angles also differed significantly from the visual stimulus direction in conflict trials (both *p* < 0.01). For V60, they did not differ significantly from the visual stimulus direction but showed a strong trend toward the force pulse direction (*p* = 0.10 for −30° and *p* = 0.12 for 30°). This suggests that subjects relied on both visual and proprioceptive information in bimodal trials. Furthermore, judgments in V60 trials were closer to the visual direction than those in V90 trials. Thus, subjects seem to adjust their weights according to the visual cue reliability.

*μ*_{MLE}) compared to the observed direction judgments in the corresponding bimodal condition, *μ*_{VP}. A linear regression of the MLE prediction fitted to the observed bimodal data yielded a squared correlation of *R*^{2} = 0.90. However, the estimated slope of the regression between MLE prediction and observed data was 1.25 (confidence interval from 1.10 to 1.39), i.e., larger than 1, indicating that subjects shifted slightly more toward the visual cue than predicted by MLE.

*σ*_{VP} = 33°, unimodal variability estimates *σ*_{V} = 30° and *σ*_{P} = 42°; for V90: bimodal variability *σ*_{VP} = 56°, unimodal variability estimates *σ*_{V} = 60° and *σ*_{P} = 42°; see also Figure 2b). A linear regression of the MLE prediction fitted to the observed reliability data yielded a squared correlation of *R*^{2} = 0.72. However, the estimated slope of the regression between MLE prediction and observed data was 1.56 (95% confidence interval from 1.23 to 1.92), i.e., larger than 1, indicating that subjects' variability was higher than predicted by MLE. Taken together, the results for the mean and variability of judgments in the bimodal conditions indicate that subjects combine visual and proprioceptive information in some way and adjust their weights according to stimulus reliability. However, they do not show the increase in reliability in bimodal as compared to unimodal conditions that the MLE model predicts.

*μ*_{PCS} = *μ*_{MLE}). However, whereas under MLE every single trial is a weighted average, so that the expected deviation of a single trial from the mean direction is low, under PCS each trial follows only one of the two unimodal distributions; averaged over all trials, the predicted variability is therefore higher for PCS than for MLE (*σ*_{PCS} > *σ*_{MLE}). More precisely, whereas the MLE estimate is *better* than or *at least* as reliable as the most reliable unimodal estimate, the combined estimate for PCS is expected to be *worse than* or *at most* as reliable as the most reliable unimodal estimate.
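This ordering follows directly from the variance of a two-component mixture distribution; a short derivation in the notation used above (the mixture-variance identity is standard and is not spelled out in the text):

```latex
\sigma_{\mathrm{PCS}}^{2}
  = p(V)\,\sigma_{V}^{2} + p(P)\,\sigma_{P}^{2}
  + p(V)\,p(P)\,\bigl(\mu_{V} - \mu_{P}\bigr)^{2}
```

The first two terms form a weighted average of the unimodal variances, which can never fall below the smaller of the two, and the conflict term *p*(*V*)*p*(*P*)(*μ*_{V} − *μ*_{P})^{2} is non-negative; hence *σ*_{PCS}^{2} ≥ min(*σ*_{V}^{2}, *σ*_{P}^{2}), whereas Equation 4 gives *σ*_{MLE}^{2} ≤ min(*σ*_{V}^{2}, *σ*_{P}^{2}).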

*σ*_{PCS} in the bimodal condition to the observed bimodal variabilities *σ*_{VP} in Figure 6c. In contrast to the variability predictions made by the MLE model (Figure 6b), our fit of the PCS model cannot be rejected in the bimodal conditions. A linear regression of the PCS prediction fitted to the observed reliability data yielded a squared correlation of *R*^{2} = 0.72, and the estimated slope of the regression between PCS prediction and observed data was 1.14 (95% confidence interval from 0.89 to 1.39), i.e., the confidence interval includes 1. Though there is still unexplained variance in the data, the PCS predictions fit the data clearly better than the MLE predictions.

*μ* and *σ* of judged directions for each force pulse direction, and pooled across directions:

| Subject | | 0° | 45° | 90° | 135° | 180° | 225° | 270° | 315° | Pooled across directions |
|---|---|---|---|---|---|---|---|---|---|---|
| CF | *μ* | 13.7 | 10.8 | 12.4 | 31.5 | 15.8 | 3.5 | −1.9 | 33.1 | 14.4 |
| | *σ* | 25.3 | 38.7 | 40.5 | 30.3 | 24.9 | 60.0 | 58.4 | 50.9 | 43.5 |
| PS | *μ* | 4.1 | 6.7 | 10.2 | 4.5 | −6.1 | −24.1 | −17.6 | −16.8 | −5.6 |
| | *σ* | 14.2 | 24.6 | 12.9 | 18.7 | 10.9 | 13.1 | 13.9 | 32.2 | 22.6 |
| BM | *μ* | −9.4 | −4.2 | −1.9 | 4.3 | 6.4 | −21 | −6.7 | −13.3 | −5.9 |
| | *σ* | 47 | 48.9 | 54.3 | 64.2 | 28.3 | 35.3 | 48.7 | 64.8 | 50.4 |
| JR | *μ* | 0.8 | 22.3 | 22.4 | 33.8 | −2.4 | −16.3 | −25.5 | −6.3 | 2.3 |
| | *σ* | 52.7 | 76.2 | 47.3 | 36.1 | 40.4 | 43 | 31.4 | 61.3 | 54.1 |
| JF | *μ* | 7.8 | 3.7 | 5.3 | 34.5 | −0.1 | −19.4 | 2.1 | 30 | 7.4 |
| | *σ* | 37 | 42.8 | 26.1 | 20.6 | 17.8 | 15.2 | 41.4 | 36.8 | 37.3 |
| VK | *μ* | −2.9 | −10.9 | 2.4 | 8.9 | −14.4 | −26 | 13.7 | 11.7 | −0.8 |
| | *σ* | 30.3 | 43.1 | 43.1 | 21.1 | 28.4 | 67.7 | 54.7 | 46 | 46.4 |

*R*-squared of *R*^{2} = 0.6. The estimated slope of the regression between MLE prediction and observed data across subjects and conditions was 1.07 (95% confidence interval from 0.97 to 1.17), i.e., overall the MLE prediction neither over- nor underestimated the shift in perceived mean direction. The comparison of predicted and observed variabilities shows that the variability in the bimodal conditions does not exhibit the decrease predicted by MLE (Figure 7b, *R*^{2} = 0.49, slope 1.31, 95% confidence interval from 1.16 to 1.47) but closely matches the PCS predictions (Figure 7c, *R*^{2} = 0.49, slope 0.93, 95% confidence interval from 0.81 to 1.04), though again the amount of unexplained variance or noise is large and similar for both models.
