Abstract
Past research shows that the perceived onset of vestibular cues to self-motion is delayed compared to other senses. However, most of this research has been conducted with eyes closed, omitting visual information, which is also an important self-motion cue. We previously found that the perceived onset of active head movement paired with sound does not change when visual cues to self-motion are available (Chung & Barnett-Cowan, Exp Brain Res 235: 3069–3079). Here we extend this work by investigating whether the perceived timing of passive self-motion paired with sound changes when visual cues to self-motion are available. Participants performed a temporal order judgement task between a passive whole-body rotation and an auditory tone presented at various stimulus onset asynchronies (−600 to 600 ms). Rotations were presented on a motion platform following a raised-cosine trajectory (0.5 Hz and 1 Hz; 20 deg/s peak velocity). A virtual forest environment was created in Unreal Engine (version 4.6) and presented using the Oculus Rift CV1 head-mounted display (HMD). As a secondary goal of the study, the rotational gain of the visual scene relative to the rotation of the HMD was manipulated (+0.5, +1, +2, −1). Preliminary results from six participants replicate previous reports that a vestibular stimulus must occur before an auditory stimulus in order to be perceived as occurring simultaneously, with a greater delay found when passively rotated at 0.5 Hz compared to 1 Hz. We found a significant main effect of the visual gain manipulation at 0.5 Hz but not at 1 Hz, and a significant interaction between movement frequency and the rotational gain of the visual scene. While the results suggest that the presence of visual feedback may have a modulating effect on the perceived timing of passive whole-body rotation, the presence of visual cues to self-motion does not reduce the perceived delay for the onset of self-motion.
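For readers unfamiliar with the motion profile named above, a raised-cosine trajectory of the kind commonly used in vestibular psychophysics can be sketched as follows; this is an illustrative assumption about the profile's form, not a specification taken from the study, with frequency $f$ (0.5 or 1 Hz here) and peak velocity $v_{\text{peak}}$ (20 deg/s here):
\[
v(t) = \frac{v_{\text{peak}}}{2}\left(1 - \cos(2\pi f t)\right), \qquad 0 \le t \le \frac{1}{f},
\]
so that velocity rises smoothly from zero, peaks at $v_{\text{peak}}$ at the midpoint $t = 1/(2f)$, and returns to zero over one cycle (a 2 s rotation at 0.5 Hz, 1 s at 1 Hz).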
Acknowledgement: We gratefully acknowledge the support of NVIDIA Corporation through the donation of the Titan Xp GPU used for this research, as well as support from the Ontario Research Fund and the Canada Foundation for Innovation (#32618), NSERC (RGPIN-05435-2014), and Oculus Research grants to Michael Barnett-Cowan. William Chung was supported by the Queen Elizabeth II Graduate Scholarship in Science and Technology.