Vision Sciences Society Annual Meeting Abstract  |   August 2023
Visual vs. Auditory Landmark for Vestibular Self-motion Perception
Author Affiliations & Notes
  • Silvia Zanchi
    Unit of Visually Impaired People, Italian Institute of Technology, Genoa, Italy
    DIBRIS Department, University of Genoa, Italy
    Robotics Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
    Department of Psychological Sciences, Birkbeck, University of London, London, UK
  • Luigi Felice Cuturi
    Unit of Visually Impaired People, Italian Institute of Technology, Genoa, Italy
    Department of Cognitive, Psychological, Pedagogical Sciences and of Cultural Studies, University of Messina, Messina, Italy
  • Giulio Sandini
    Robotics Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
  • Monica Gori
    Unit of Visually Impaired People, Italian Institute of Technology, Genoa, Italy
  • Elisa Raffaella Ferrè
    Department of Psychological Sciences, Birkbeck, University of London, London, UK
  • Footnotes
    Acknowledgements  This work was supported by a Bial Foundation grant (041/2020) to E.R.F. and by the MYSpace project (principal investigator: M.G.) from the European Research Council (Grant 948349). S.Z. was also supported by a UK Experimental Psychology Grant.
Journal of Vision August 2023, Vol.23, 4821. doi:https://doi.org/10.1167/jov.23.9.4821
Abstract

Spatial navigation requires us to precisely perceive our position and the spatial relationships between ourselves and objects in the environment. As we move through the environment, multiple cues convey congruent spatial information: we rely both on inertial vestibular self-motion signals and on visual and auditory landmarks. Here we directly investigate the perceptual interaction between inertial cues and environmental landmarks. Twenty-six healthy participants sat on a chair in a darkened room, leaning on a chin rest. On each trial, to test self-motion detection, we delivered a Galvanic Vestibular Stimulation (GVS) pulse or a sham stimulation pulse (0.7 mA amplitude, 250 ms duration). Critically, GVS activates the peripheral vestibular organs, i.e., the otolith and semicircular canal afferents, eliciting a self-motion sensation (a roll-tilt sensation). However, the chosen stimulation parameters induce a relatively weak virtual sensation of roll rotation. To test whether self-motion sensitivity could be aided by an environmental cue, participants performed the detection task with or without an external visual landmark (a red LED light) or auditory landmark (pink noise emitted by a loudspeaker), each placed in front of them, in different blocks of trials. Participants’ ability to detect the virtual, vestibular-induced self-motion sensation with and without a landmark was measured using a signal detection approach: we computed d′ (d-prime) as a measure of participants’ sensitivity and the criterion as an index of their response bias. Results showed that sensitivity to detect self-motion was higher in the presence of the visual landmark, but not in the presence of the auditory one; the response bias remained unaffected. This finding shows that visual signals from the environment provide relevant information that enhances our ability to perceive inertial self-motion cues, suggesting a specific interaction between the visual and vestibular systems in self-motion perception.
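
For readers unfamiliar with the signal detection measures mentioned above, the sketch below illustrates how d′ and criterion are conventionally computed in a yes/no detection task under the equal-variance Gaussian model. It is a generic illustration, not the authors' analysis pipeline; the function name, the example trial counts, and the use of a log-linear correction are assumptions made for the example.

```python
# Minimal sketch (not the authors' analysis code): standard equal-variance
# signal detection computations of d' (sensitivity) and criterion (response
# bias) from trial counts, with a log-linear correction for extreme rates.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) for a yes/no detection task."""
    # Log-linear correction avoids infinite z-scores when the hit or
    # false-alarm rate equals 0 or 1 (an illustrative choice, assumed here).
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    z_hit = norm.ppf(hit_rate)   # z-transform of hit rate
    z_fa = norm.ppf(fa_rate)     # z-transform of false-alarm rate

    d_prime = z_hit - z_fa               # sensitivity
    criterion = -0.5 * (z_hit + z_fa)    # response bias
    return d_prime, criterion

# Hypothetical counts for one landmark condition: "yes" responses on GVS
# trials count as hits; "yes" responses on sham trials count as false alarms.
print(sdt_measures(hits=40, misses=10, false_alarms=12, correct_rejections=38))
```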
