Abstract
Detecting sensory conflict implies a comparison process akin to cross-modal discrimination, which can be modelled as a subtraction of sensory estimates resulting in a difference distribution. This distribution has a mean equal to the difference of the compared signal means and a variance equal to the sum of the compared signal variances. The mean provides an estimate of the amount of conflict, and the variance determines a limit on cross-modal discrimination (and conflict detection). Here we show that visual-vestibular discrimination performance approaches this limit when cues are presented sequentially. However, when cues are presented simultaneously, cross-modal discrimination is impaired. Experiments were conducted using a virtual reality set-up consisting of a hexapod motion platform and a stereo visual display. The stereo visual stimulus consisted of red spheres (diameter = 0.6 cm; density = 0.004 spheres/cm³; clipping planes: near = 25 cm, far = 65 cm) and a head-fixed fixation point. Yaw rotations had a raised-cosine velocity profile of constant duration (0.8 s) and a displacement of 4 deg for the reference movement (peak velocity = 10 deg/s). Two single-cue conditions (2IFC), which measured visual and vestibular self-motion variance by means of an adaptive staircase procedure, provided the basis for predicting the variances in two additional conditions in which visual stimuli had to be compared with vestibular stimuli either sequentially (2IFC) or simultaneously. In the simultaneous condition, subjects had to indicate whether the visual scene moved with or against their own physical motion in world coordinates. We hypothesize that impaired discrimination during simultaneous presentation occurs because temporal co-occurrence increases the estimated probability that the signals have a common origin, which leads to optimal cue integration at the cost of impaired conflict detection. We present a probabilistic model of these processes.
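For concreteness, the comparison process described above can be written as a minimal Gaussian sketch (notation introduced here for illustration, assuming independent, unbiased sensory estimates; the symbols are not from the original abstract). If the visual and vestibular estimates are $S_{\mathrm{vis}} \sim \mathcal{N}(\mu_{\mathrm{vis}}, \sigma_{\mathrm{vis}}^{2})$ and $S_{\mathrm{vest}} \sim \mathcal{N}(\mu_{\mathrm{vest}}, \sigma_{\mathrm{vest}}^{2})$, their difference is

$$D = S_{\mathrm{vis}} - S_{\mathrm{vest}} \sim \mathcal{N}\!\left(\mu_{\mathrm{vis}} - \mu_{\mathrm{vest}},\; \sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2}\right),$$

so the expected conflict is $\mu_{\mathrm{vis}} - \mu_{\mathrm{vest}}$ and cross-modal discrimination is limited by $\sigma_{D} = \sqrt{\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2}}$, the quantity predicted from the variances measured in the two single-cue conditions.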
Meeting abstract presented at VSS 2016