Vision Sciences Society Annual Meeting Abstract | August 2023
Making the Invisible Visible: Crossmodal Perception in Patients with Low Vision
Author Affiliations & Notes
  • Ailene Y. C. Chan
    California Institute of Technology, Division of Biology and Biological Engineering
  • Noelle R. B. Stiles
    California Institute of Technology, Division of Biology and Biological Engineering
    University of Southern California, Department of Ophthalmology
  • Armand R. Tanguay, Jr.
    California Institute of Technology, Division of Biology and Biological Engineering
    University of Southern California, Departments of Electrical Engineering, Chemical Engineering and Materials Science, Biomedical Engineering, Ophthalmology, and Physics and Astronomy; Neuroscience Graduate Program
  • Shinsuke Shimojo
    California Institute of Technology, Division of Biology and Biological Engineering
  • Footnotes
    Acknowledgements: Croucher Scholarships for Doctoral Study, Dominic Orr Graduate Fellowship in BBE, the National Institutes of Health, and the National Eye Institute
Journal of Vision August 2023, Vol. 23, 4840. https://doi.org/10.1167/jov.23.9.4840
Abstract

Cross-modal information is often complementary, with senses across domains blending seamlessly to create a holistic perceptual experience. When one of the senses is impaired, incoming sensory information may be reweighted, modifying cross-modal connections. We investigated audio-visual interactions across visual field locations in patients with partial vision loss compared to the neurotypical population. We used the classic Double Flash Illusion together with a visual flash detection task, presented at 24 stimulus locations: eight equally spaced circumferentially at each of 5, 10, and 15 degrees from central fixation. With each eye tested separately (order counterbalanced), we obtained each participant’s visual field map (locations with >80% successful flash detection) and double flash perception map (the percentage of double flashes perceived at each location). Four low vision patients and seventeen neurotypical participants (with normal visual perception) were tested. The causes of visual impairment among the low vision patients included trauma, congenital cataracts, retinitis pigmentosa, non-arteritic anterior ischemic optic neuropathy (NAION), and microphthalmia. Intriguingly, the performance of low vision and neurotypical participants in the visual flash detection and double flash tasks was comparable in visible locations (>80% successful flash detection). However, the “invisible” locations (<50% successful flash detection), which fall on the blind spots of neurotypical participants and on areas of visual impairment in low vision participants, exhibited significantly stronger double flash perception in participants with vision loss relative to neurotypical participants. Visual flash detection performance was comparable between the groups in these locations, meaning that the two groups were “equally blind” there. The stronger double flash perception in the low vision cohort could arise from diminished visual responses, with auditory information consequently weighted more strongly. These pilot data suggest a cross-modal remodeling during and following years of partial vision loss, strengthening cross-modal integration at visual field locations with impaired vision.
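
As an illustrative aside (not part of the authors' methods), the sketch below shows one way the 24 stimulus locations described above could be laid out in code: eight equally spaced polar angles at each of the 5, 10, and 15 degree eccentricities. The viewing distance, pixel density, and degree-to-pixel conversion are assumed values for the example only.

```python
import math

# Illustrative sketch only; viewing distance, pixel density, and the
# flat-screen tangent conversion below are assumed, not reported values.
VIEW_DIST_CM = 57.0                 # assumed viewing distance
PX_PER_CM = 38.0                    # assumed display pixel density
ECCENTRICITIES_DEG = (5, 10, 15)    # eccentricities from central fixation
N_ANGLES = 8                        # equally spaced circumferential positions

def deg_to_px(deg: float) -> float:
    """Convert an eccentricity in degrees of visual angle to screen pixels."""
    return math.tan(math.radians(deg)) * VIEW_DIST_CM * PX_PER_CM

def stimulus_locations():
    """Return 24 (x, y) pixel offsets from fixation: 8 angles x 3 eccentricities."""
    locations = []
    for ecc in ECCENTRICITIES_DEG:
        r = deg_to_px(ecc)
        for k in range(N_ANGLES):
            theta = 2 * math.pi * k / N_ANGLES  # equally spaced around fixation
            locations.append((r * math.cos(theta), r * math.sin(theta)))
    return locations

if __name__ == "__main__":
    locs = stimulus_locations()
    print(len(locs), "locations; first:", tuple(round(v, 1) for v in locs[0]))
```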
