Nick Barnes, Paulette Lieby, Hugh Dennet, Janine Walker, Chris McCarthy, Nianjun Liu, Yi Li; Investigating the role of single-viewpoint depth data in visually-guided mobility. Journal of Vision 2011;11(11):926. doi: https://doi.org/10.1167/11.11.926.
© ARVO (1962-2015); The Authors (2016-present)
Background: Depth information is critical for navigation and may be recovered visually via multiple cues. However, some cues are only accessible through deliberate action (e.g., motion parallax) rather than from a single viewpoint (e.g., disparity). At low visual resolution with a sparse representation (35 × 30 pixels over 100 degrees), cues such as disparity are of limited use. Current retinal implants for the visually impaired operate at or below this resolution. We investigated the importance of depth being available from a single viewpoint for navigation. An experiment was designed to compare a depth representation with an intensity representation (the standard representation for retinal implants) during visual navigation. In the Depth condition, brightness encoded the depth of the environment at the corresponding point in the visual field; in the Intensity condition, brightness encoded luminance.

Methods: Four normally-sighted participants navigated an indoor mobility course comprising white walls, a dark floor, and contrasting obstacles over multiple trials. Participants wore head-mounted stereo cameras to collect visual information, which was processed into phosphenized depth or intensity representations and presented via a head-mounted display. Course traversals were measured as a percentage of preferred walking speed (PWS), against a baseline traversal with high-resolution images.

Results: Both depth- and intensity-based representations were effective for visually-guided navigation: participants walked significantly faster than 40% of their PWS. Walking speed was significantly faster with Intensity than with Depth. The presence of obstacles had a differential effect on depth- and intensity-based navigation: when suspended obstacles were present, participants walked significantly slower with Intensity than in the no-obstacle environment, while no significant slowing was evident for Depth; this difference between conditions was significant.
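The two representations described in the Methods can be sketched in code. The block-averaging downsampling to a 35 × 30 phosphene grid, the value range, and the maximum depth range are assumptions for illustration; the study's actual simulation pipeline may differ.

```python
import numpy as np

def phosphenize(frame, grid=(30, 35)):
    """Downsample a full-resolution frame (values in [0, 1]) to a coarse
    phosphene grid by block averaging.

    Assumption: block averaging is one plausible way to produce the sparse
    35 x 30 representation; the actual method is not specified here."""
    h, w = frame.shape
    gh, gw = grid
    # Trim the frame so it divides evenly into gh x gw blocks.
    frame = frame[: h - h % gh, : w - w % gw]
    blocks = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
    return blocks.mean(axis=(1, 3))

def depth_to_brightness(depth_m, max_range_m=5.0):
    """Map a depth map (meters) to brightness, nearer = brighter.

    The 5 m cutoff is a hypothetical working range for an indoor course."""
    return 1.0 - np.clip(depth_m / max_range_m, 0.0, 1.0)

# Intensity condition: phosphenize the luminance image directly.
# Depth condition: phosphenize the brightness-mapped stereo depth map.
```

Either conversion yields a 30 × 35 array of brightness values, one per simulated phosphene, which would then be rendered to the head-mounted display.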
Conclusions: These results demonstrate that humans can navigate using a purely depth-based representation of the environment, and suggest that it may be advantageous to have access to depth from a single viewpoint in some situations.