Abstract
We have developed a low-vision navigation aid that uses an ideal-observer algorithm to guide a user through an unfamiliar building to a goal. The algorithm uses a map of the environment and measurements of the distance from the user to the nearest wall to localize the user's state (position and orientation) and guide the user to the goal state. From these measurements, the algorithm computes the set of states in which the user could be located (a belief vector). From the belief vector, it computes the action (translation, rotation, or measurement) that minimizes the number of actions required to reach the goal state with no remaining uncertainty. To evaluate the efficacy of this algorithm, we tested subjects in randomly generated virtual-reality indoor environments under three conditions: Normal Vision (NV), Simulated Low Vision (SLV), and Simulated Low Vision with the Navigation Aid (SLV+NA). The environment contained no landmarks other than numbered signs on the walls. Subjects were placed at a random location and instructed to reach a randomly selected goal (sign) in the shortest distance possible. In the SLV condition, fog was added to the environment to simulate low vision, preventing subjects from seeing beyond the next hallway intersection. In the SLV+NA condition, subjects navigated with the fog and received auditory instructions from the navigation aid. We measured the distance traveled to reach the goal in each condition. Subjects traveled an average of 10.9 meters (SEM 1.52) in the NV condition, 20.35 meters (SEM 3.53) in the SLV condition, and 7.0 meters (SEM 0.712) in the SLV+NA condition. Performance was significantly better in the SLV+NA condition than in the SLV condition. These results suggest that a navigation aid combining simple distance measurements with an optimal way-finding algorithm may be useful for people with low vision.
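As a rough illustration of the localization step described above, the sketch below updates a belief vector over discrete (position, heading) states from a single wall-distance measurement. It is a minimal sketch, not the authors' implementation: the toy grid map, the noise-free range model, and all names (`free_states`, `range_to_wall`, `update_belief`) are illustrative assumptions.

```python
import numpy as np

# Toy occupancy grid (hypothetical): 1 = wall, 0 = free hallway cell.
GRID = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 0, 1],
    [1, 0, 1, 1, 1, 0, 1],
    [1, 0, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 1, 1, 1],
])

# Headings N, E, S, W as (row, col) steps.
HEADINGS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def free_states(grid):
    """All candidate (row, col, heading) states on free cells."""
    return [(r, c, h)
            for r in range(grid.shape[0])
            for c in range(grid.shape[1])
            for h in range(len(HEADINGS))
            if grid[r, c] == 0]

def range_to_wall(grid, state):
    """Distance (in cells) to the nearest wall along the heading."""
    r, c, h = state
    dr, dc = HEADINGS[h]
    d = 0
    while grid[r + dr, c + dc] == 0:
        r, c, d = r + dr, c + dc, d + 1
    return d

def update_belief(belief, grid, measured):
    """Keep only states whose predicted wall distance matches the
    measurement (exact-measurement assumption), then renormalize."""
    new = {s: p for s, p in belief.items()
           if range_to_wall(grid, s) == measured}
    z = sum(new.values())
    return {s: p / z for s, p in new.items()} if z else belief

# Usage: start from a uniform belief, incorporate one range measurement.
states = free_states(GRID)
belief = {s: 1 / len(states) for s in states}
belief = update_belief(belief, GRID, measured=2)
print(f"{len(belief)} states remain consistent with the measurement")
```

An action-selection layer of the kind the abstract describes would then choose among translation, rotation, and further measurements so as to minimize the number of actions needed to arrive at the goal fully localized; that planning step is beyond this sketch.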