In the present study, we investigated whether the perception of heading during linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments than for unisensory judgments. Nine participants were exposed to visual, inertial, or visual–inertial motion conditions in a moving-base simulator capable of accelerating along a horizontal linear track with variable heading. Visual random-dot motion stimuli were projected on a display with a 40° horizontal × 32° vertical field of view (FoV). All motion profiles consisted of a raised cosine bell in velocity. Stimulus heading was varied between 0° and 20°. After each stimulus, participants indicated whether the perceived self-motion was straight ahead or not. We fitted cumulative normal distribution functions to the data as a psychometric model and compared this model to a nested model in which the slope of the multisensory condition was constrained by the MLI hypothesis. Based on likelihood ratio tests, the MLI model had to be rejected. It appears that the imprecise inertial estimate was weighted more heavily, relative to the precise visual estimate, than MLI predicts. Possibly, this can be attributed to the low realism of the visual stimulus. The present results concur with other reports of overweighting of inertial cues in synthetic environments.

Each motion profile lasted 9.3 s (see additional material: HeadingI.wmv). Since the vestibular system is responsive to acceleration and not to constant velocity, a constantly changing velocity profile was used to ensure vestibular reactivity.
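The raised-cosine velocity bell described above can be sketched as follows. This is a minimal illustration, not the authors' code; the peak velocity `v_peak` is a placeholder, since the actual peak velocity is not given in this excerpt:

```python
import math

def raised_cosine_velocity(t, duration=9.3, v_peak=1.0):
    """Raised-cosine velocity bell: v(t) = v_peak/2 * (1 - cos(2*pi*t/T)).

    Velocity is zero at t = 0 and t = T and peaks at T/2, so the
    acceleration changes continuously throughout the profile, keeping
    the vestibular system stimulated.  v_peak is an assumed placeholder.
    """
    if not 0.0 <= t <= duration:
        return 0.0
    return 0.5 * v_peak * (1.0 - math.cos(2.0 * math.pi * t / duration))
```

Because the profile has no constant-velocity plateau, the acceleration (the derivative of this bell) is nonzero almost everywhere, which is the property the paragraph above motivates.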

Heading angles *α* presented in each condition (V = visual, I = inertial, C = combined); x marks the six angles tested in that condition.

| Condition | 0° | 1° | 3° | 5° | 7° | 10° | 15° | 20° |
|---|---|---|---|---|---|---|---|---|
| V | x | x | x | x | x | x | | |
| I | x | x | x | x | x | x | | |
| C | x | x | x | x | x | x | | |

We assume that each observer forms a noisy internal representation *X** of their heading angle *α*, with normally distributed noise, *X** ∼ *N*(*α*, *σ*), where the standard deviation *σ* reflects the size of the noise in the random variables. We expect that participants experience moving "not straight ahead" and respond accordingly (i.e., binary response; *X* = 1) when a certain internal threshold is exceeded: *X** > *τ* (Long, 1997, Section 3.2). When the heading angle increases, the probability of responding "not straight ahead" also increases.
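Under this threshold model, the response probability is P(*X* = 1) = P(*X** > *τ*) = Φ((*α* − *τ*)/*σ*), a cumulative normal in *α*. A minimal sketch of that relation (the function name is illustrative, not from the original):

```python
import math

def p_not_straight(alpha, tau, sigma):
    """Probability of a "not straight ahead" response under the threshold
    model: X* ~ N(alpha, sigma) and the response is 1 when X* > tau,
    so P(X = 1) = Phi((alpha - tau) / sigma)."""
    z = (alpha - tau) / sigma
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

At *α* = *τ* the probability is exactly 0.5, and it increases monotonically with heading angle, as stated above.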

MLI predicts how the standard deviation of the combined condition (*σ*_{c}) depends on the standard deviation parameters for the unisensory inertial (*σ*_{i}) and visual (*σ*_{v}) conditions. The value of an environmental property, such as an assessment of heading, can be represented by an "internal" random variable *X**. When we have two assessments, *X*_{i}* and *X*_{v}*, of a single environmental property, as is the case with multiple senses in the combined condition, the value of that property can be estimated by a convex (i.e., coefficients sum to one) combination of the unisensory representations

*X*_{c}* = *w*_{i} *X*_{i}* + *w*_{v} *X*_{v}*,  (Equation 1)

where the *w*s are weights. Assuming unbiased unisensory noisy representations of the true heading angle *α*, the linear combination is also an unbiased noisy representation of *α*. Since a linear combination of normal variates is itself normally distributed, the noise in the combined estimate *X*_{c}* is also normally distributed. Assuming that the random noises are stochastically independent, the variance *σ*_{c}^{2} of *X*_{c}* is

*σ*_{c}^{2} = *w*_{i}^{2} *σ*_{i}^{2} + *w*_{v}^{2} *σ*_{v}^{2}  (Equation 2)

for any weights (*w*_{i} and *w*_{v}). Since we assumed normally distributed noise, the likelihood *L*_{j} for the internal representation *X*_{j}* for sensory condition *j* is

*L*_{j}(*α*) = (2π*σ*_{j}^{2})^{−1/2} exp(−(*X*_{j}* − *α*)^{2} / (2*σ*_{j}^{2})).  (Equation 3)

The joint likelihood of (*X*_{i}*, *X*_{v}*) is given by the product of the likelihoods of the unisensory variables *X*_{j}* because we assume that the noises are independent across senses. Treating the *σ*_{j} as known, the maximum of this function yields the ML estimate of the true angle *α* in terms of these parameters. It can be derived mathematically that the value at which the maximum is attained indeed takes the linear form (Equation 1), where

*w*_{i} = *σ*_{v}^{2} / (*σ*_{i}^{2} + *σ*_{v}^{2}),  *w*_{v} = *σ*_{i}^{2} / (*σ*_{i}^{2} + *σ*_{v}^{2}).  (Equation 4)

The variance *σ*_{c}^{2} of *X*_{c}* corresponding to these weights is

*σ*_{c}^{2} = *σ*_{i}^{2} *σ*_{v}^{2} / (*σ*_{i}^{2} + *σ*_{v}^{2}).  (Equation 5)

Equivalently, the weights *w*_{i} and *w*_{v} are chosen so that the variance *σ*_{c}^{2} is minimal across all convex combinations.
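The optimal weights and the resulting combined standard deviation can be checked numerically. A small sketch (the function name is ours, not from the original):

```python
import math

def mli_combination(sigma_i, sigma_v):
    """MLI-optimal convex weights (Equation 4) and the combined
    standard deviation they imply (Equation 5)."""
    var_i, var_v = sigma_i ** 2, sigma_v ** 2
    w_i = var_v / (var_i + var_v)  # the less noisy cue gets more weight
    w_v = var_i / (var_i + var_v)
    sigma_c = math.sqrt(var_i * var_v / (var_i + var_v))
    return w_i, w_v, sigma_c
```

For example, with *σ*_{i} = 4 and *σ*_{v} = 3 this yields *w*_{i} = 0.36, *w*_{v} = 0.64, and *σ*_{c} = 2.4; note that *σ*_{c} is smaller than either unisensory *σ*, which is the variance-reduction prediction of MLI tested in this study.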

We modeled the probability of a "not straight ahead" response *X*_{jα} as depending on the condition *j* = (i, v, c) and angle *α*, and their interaction. Here i, v, and c represent the inertial, visual, and combined condition, respectively. More specifically, we estimated the parameters (*τ*_{j}, *σ*_{j}) of the psychometric curve for condition *j* by maximum likelihood estimation (MLE, not to be confused with MLI), assuming that all observations of a participant were stochastically independent since no feedback on performance was provided. We found considerable inter-subject differences in psychometric curve parameters. Since the number of participants was too small to warrant a random-effect specification, and we had large numbers of observations per participant, we fitted the model with six parameters (three *τ*s and three *σ*s) separately for each participant. A Pearson's *χ*^{2} test showed satisfactory goodness of fit of the psychometric curves, so we then fitted the model (Equation 6) with the MLI-induced constraint (Equation 5). This restricted model has five parameters (three *τ*s and two *σ*s), as the slope in the combined condition was predicted by MLI from the parameters of the unisensory conditions. Since the *τ* parameter is free to vary for each condition, a comparison of the unrestricted and restricted models is only affected by the slopes (*σ*) of the fitted functions. Comparing the fit of the models with and without this constraint, using likelihood ratio tests with one degree of freedom, allowed us to test whether our data support the MLI hypothesis.
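The model comparison reduces to a one-degree-of-freedom likelihood ratio test on the two log likelihoods. A sketch, using the *χ*^{2}(1) survival function written via the complementary error function (the function name is illustrative):

```python
import math

def lr_test_1df(loglik_unrestricted, loglik_restricted):
    """Likelihood ratio test with one degree of freedom: the restricted
    model replaces the free combined-condition sigma with the MLI
    prediction, costing exactly one parameter."""
    lr = 2.0 * (loglik_unrestricted - loglik_restricted)
    # For X ~ chi-square(1): P(X > x) = erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(lr / 2.0))
    return lr, p
```

Applied to the rounded log likelihoods reported for participant 1 in Table 2 (−85.69 and −86.02), this gives LR ≈ 0.66 and *p* ≈ 0.42, matching the tabulated values up to rounding.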

The standard deviations *σ* of the underlying Gaussians for the unisensory and multisensory conditions are often derived from the slope of fitted cumulative Gaussians: for a cumulative normal with parameters (*τ*, *σ*), the slope at the threshold *τ* equals 1/(*σ*√(2π)), so *σ* is inversely proportional to the fitted slope.

| Participant | Log likelihood (unrestricted) | Goodness of fit: Pearson *χ*^{2} | Goodness of fit: *p* | Log likelihood (MLI-constrained) | LR | *p* (LR test for MLI) |
|---|---|---|---|---|---|---|
| 1 | −85.69 | 10.84 | 0.542 | −86.02 | 0.65 | 0.419 |
| 2 | −69.87 | 21.77 | 0.040 | −74.89 | 10.03 | 0.002 |
| 3 | −69.31 | 44.31 | 0.000 | −75.14 | 11.67 | 0.001 |
| 4 | −88.61 | 17.11 | 0.146 | −93.04 | 8.85 | 0.003 |
| 5 | −82.62 | 7.68 | 0.810 | −86.05 | 6.87 | 0.009 |
| 6 | −67.18 | 12.97 | 0.371 | −79.74 | 25.13 | 0.000 |
| 7 | −51.75 | 12.80 | 0.384 | −71.71 | 39.92 | 0.000 |
| 8 | −75.76 | 13.57 | 0.330 | −84.24 | 16.97 | 0.000 |
| 9 | −71.53 | 9.38 | 0.671 | −89.58 | 36.10 | 0.000 |

Goodness of fit refers to a Pearson *χ*^{2} test with 180 observations equally distributed over 18 heading angle × condition combinations. In general, the model fits the data well, and hence we may proceed below to test whether the six parameters of this model satisfy the MLI constraint. The violation for participant 3 is caused by a poor fit of the model in the combined condition. The results reported below do not differ substantially whether we include or exclude participant 3.

The psychometric curve parameters (*τ*, *σ*) differed significantly between participants (*χ*^{2}(48) = 168.92, *p* < 0.001). Differences between the curves were also assessed for each participant individually using Wald *χ*^{2} tests (Table 3).

… (*SD* = 0.68), 7.20° (*SD* = 3.94), and 3.56° (*SD* = 1.04), respectively. The model likelihoods and likelihood ratios, as well as a measure of the model goodness of fit, are given in Table 2. Associated observed and predicted standard deviations are presented in Figure 3.

Summed over all nine participants, the likelihood ratio statistics rejected the MLI model (*χ*^{2}(9) = 156.19, *p* < 0.001). For all but one participant, we have strong evidence that their heading perceptions conflict with MLI. To allow comparison with previous studies on heading perception, we also calculated the group's average 75% detection thresholds for the three conditions, using the estimated model parameters and an inverse CDF function. These values amounted to 4.6° (*SD* = 1.3), 16.1° (*SD* = 5.2), and 9.4° (*SD* = 1.94) for the visual, inertial, and combined visual–inertial conditions, respectively. Note that these thresholds indicate the shift of a subjective judgment from "straight ahead" to "not straight ahead," and should be seen as a "Point of Subjective Straight Heading."
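The 75% detection threshold follows directly from the fitted psychometric parameters via the inverse normal CDF. A sketch using the standard library (the function name and the example parameter values are illustrative, not the fitted values from this study):

```python
from statistics import NormalDist

def detection_threshold(tau, sigma, p=0.75):
    """Heading angle at which the probability of a "not straight ahead"
    response reaches p, for a cumulative normal psychometric curve with
    parameters (tau, sigma): alpha = tau + sigma * Phi^{-1}(p)."""
    return NormalDist(mu=tau, sigma=sigma).inv_cdf(p)
```

At *p* = 0.5 this simply returns *τ*; for *p* = 0.75 it adds roughly 0.674·*σ*, so conditions with shallower slopes (larger *σ*, as in the inertial condition) yield correspondingly higher thresholds.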

*Perception*, 28, 299–306.

*Neuroscience Letters*, 380, 155–160.

*Handbook of sensory physiology. Vestibular system. Psychophysics, applied aspects and general interpretations* (pp. 155–190). Berlin, Germany/New York: Springer-Verlag.

*Microgravity Science and Technology*, 21, 281–286.

*Biological Cybernetics*, 86, 191–207.

*Journal of Vision*, 6(5):2, 554–664, http://www.journalofvision.org/content/6/5/2, doi:10.1167/6.5.2.

*Perception & Psychophysics*, 53, 325–337.

*Nature*, 451, 429–433.

*Journal of Neuroscience*, 29, 15601–15612.

*Perception of the visual world*. Boston: Houghton Mifflin.

*Journal of Vestibular Research*, 14, 375–385.

*Journal of Aircraft*, 38, 600–606.

*Nature Neuroscience*, 11, 1201–1210.

*Nature Neuroscience*, 10, 1038–1047.

*Handbook of sensory physiology* (vol. 6, pp. 3–154). Berlin, Germany: Springer-Verlag.

*Experimental Brain Research*, 135, 12–21.

*Experimental Brain Research*, 179, 595–606.

*Neuroscience Research Bulletin*, 18, 457–651.

*Human visual orientation*. New York: Wiley.

*Journal of Vestibular Research*, 7, 311–345.

*Perception*, 23, 753–762.

*Perception*, 18, 657–665.

*Experimental Brain Research*, 174, 528–543.

*PLoS ONE*, 2, e943.

*Biological Cybernetics*, 96, 389–404.

*Regression models for categorical and limited dependent variables*. Thousand Oaks, CA: Sage.

*Experimental Brain Research*, 179, 263–290.

*Cognitive Brain Research*, 5, 87–96.

*Neural Computation*, 19, 3335–3355.

*Neuroreport*, 16, 1923–1927.

*Experimental Brain Research*, 104, 502–510.

*Journal of Vision*, 10(2):20, 1–18, http://www.journalofvision.org/content/10/2/20, doi:10.1167/10.2.20.

*Computers & Graphics*, 33, 47–58.

*Journal of Neurophysiology*, 101, 1321–1333.

*Nature*, 336, 162–163.

*Perception & Psychophysics*, 51, 443–454.

*Journal of Experimental Psychology: Human Perception and Performance*, 14, 646–660.

*Acta Oto-Laryngologica*, 76, 24–31.

*American Institute of Aeronautics and Astronautics*, 4334, 512–520.

*Biological Cybernetics*, 86, 209–230.