Open Access
Vision Sciences Society Annual Meeting Abstract | September 2024
Enhancing wayfinding in simulated prosthetic vision through semantic segmentation and rastering
Author Affiliations & Notes
  • Tori N. LeVier
    University of California, Santa Barbara
  • Justin Kasowski
    University of California, Santa Barbara
  • Michael Beyeler
    University of California, Santa Barbara
  • Footnotes
    Acknowledgements  This work was supported by the National Library of Medicine of the National Institutes of Health (DP2-LM014268 to MB). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Journal of Vision September 2024, Vol. 24, 1327. https://doi.org/10.1167/jov.24.10.1327

      Tori N. LeVier, Justin Kasowski, Michael Beyeler; Enhancing wayfinding in simulated prosthetic vision through semantic segmentation and rastering. Journal of Vision 2024;24(10):1327. https://doi.org/10.1167/jov.24.10.1327.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Introduction: Prosthetic vision offers the possibility of rudimentary vision restoration for blind individuals. Because current devices are limited by low resolution and perceptual distortions, simplifying the visual scene is crucial; for instance, by segmenting it into objects belonging to different semantic categories and then rendering either their outlines (“smart edge detection”) or one category at a time (“smart rastering”). Here we evaluate these scene simplification strategies for wayfinding using simulated prosthetic vision in immersive virtual reality.

Methods: We asked 24 sighted participants (14 females, 10 males; ages 18-40) to navigate a virtual town square as "virtual patients," using one of three rendering modes: naive edge detection, smart edge detection (outlining people, bikes, and buildings), or smart rastering (displaying these outlines one category at a time). Each participant had 45 seconds to traverse the town square while avoiding stationary obstacles and moving cyclists. Performance metrics included path tracking, collision count, and success rate. After each session, participants rated the difficulty of each rendering mode.

Results: Success rates improved from 39% with naive edge detection to 41% with smart rastering and 47% with smart edge detection. The smart modes reduced collisions, mainly with stationary objects (linear mixed-effects model, p < .01), but did not improve safety on the bike path. Participants rated the smart modes as easier than the naive method.

Conclusion: Smart edge detection and smart rastering improved wayfinding success rates and reduced collisions in this immersive task. However, fewer than half of the trials were successful, indicating a need for better scene simplification strategies. Future research should aim to improve judgment of moving objects' speed, direction, and approach time.
