Abstract
While both biomechanical and moving auditory cues have been shown to elicit self-motion illusions (“circular vection”), their combined influence had not previously been investigated. Here, we tested biomechanical cues (participants were seated stationary above a platform rotating at 60°/s and stepped along with it) and auditory cues (binaural recordings of two sound sources rotating at 60°/s), both in isolation and in combination. All participants reported biomechanical vection, with a mean onset latency of 33.5 s. Interestingly, even though the auditory cues by themselves proved insufficient to induce vection in all but one participant, adding the rotating sounds significantly enhanced biomechanical vection on all dependent measures: vection onset times decreased by 35%, vection intensity increased by 32%, and participants reported a stronger sensation of actually rotating in the lab (28% increase). In fact, participants were able to update their orientation in the lab in all but the pure auditory condition, suggesting that their mental spatial representation was directly affected by the biomechanical and auditory cues, although perceived self-rotation velocities were typically below the stimulus velocities. Beyond their theoretical relevance, these findings have important implications for applications such as entertainment and motion simulation: while spatialized sound by itself seems insufficient to induce compelling self-motion illusions, it can clearly support and facilitate biomechanical vection, and it has previously been shown to facilitate visually induced circular vection as well (Riecke et al., 2005, 2008), thus complementing information from other modalities.
Furthermore, high-fidelity, headphone-based sound simulation is not only reliable and affordable, but also offers a degree of realism that visual simulations cannot yet achieve: while even the best existing visual display setups will hardly be confused with “seeing the real thing”, headphone-based auralization can be virtually indistinguishable from listening to the real sound and can thus provide a true “virtual reality”.
Support: NIMH Grant 2-R01-MH57868, NSF Grant 0705863, Vanderbilt University, Max Planck Society, Simon Fraser University.