Abstract
Introduction: Prosthetic vision offers the possibility of rudimentary vision restoration for blind individuals. Because current devices are limited by low resolution and perceptual distortions, simplifying the visual scene is crucial; for instance, by segmenting it into objects belonging to different semantic categories and then rendering either their outlines (“smart edge detection”) or one category at a time (“smart rastering”). Here we evaluate these scene simplification strategies for wayfinding using simulated prosthetic vision in immersive virtual reality.

Methods: Twenty-four sighted participants (14 females, 10 males; ages 18-40) navigated a virtual town square as "virtual patients" using one of three rendering modes: naive edge detection, smart edge detection (outlining people, bikes, and buildings), or smart rastering (displaying these outlines one category at a time). Each participant had 45 seconds to traverse the town square while avoiding stationary obstacles and moving cyclists. Performance metrics included path tracking, collision count, and success rate. After each session, participants rated the difficulty of each rendering mode.

Results: Success rates improved from 39% with naive edge detection to 41% with smart rastering and 47% with smart edge detection. The smart modes reduced collisions, mainly with stationary objects (linear mixed-effects model, p < .01), but did not enhance safety on the bike path. Participants rated the smart modes as easier to use than the naive mode.

Conclusion: Smart edge detection and smart rastering improved wayfinding success rates and reduced collisions in this immersive task. However, fewer than half of the trials were successful, indicating a need for better scene simplification strategies. Future research should aim to improve judgment of moving objects' speed, direction, and time to approach.