Angela J. Kelley, Liancheng Yang, David Hess, Vivian Yin, Gislin Dagnelie; Comparison of presentation modes for reading and face recognition in simulated prosthetic vision. Journal of Vision 2003;3(12):58. https://doi.org/10.1167/3.12.58.
Simulations can play an important role in the development of visual prostheses. Our simulations use images viewed through small numbers (16–256) of dots 1–2 degrees in diameter, presented in a video headset with built-in eye tracking. Each dot in the raster is filled with a Gaussian intensity distribution whose peak value represents the mean image intensity across its aperture. Three presentation modes are used for reading and face recognition testing. The first mode uses a computer mouse, which enables the subject to move the pixel raster across the stationary background. In the second mode, raster movement is locked to the subject's eye position through the eye-tracking system, stabilizing the raster on the retina while allowing it to move across the stationary image. The third mode permits the subject to control the movement of the background image with the mouse, while raster movement remains locked to the eye position. We will demonstrate these modes of operation, show comparative results from reading and face recognition tests, and discuss the implications for the design of visual prostheses for the blind.
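The Gaussian-dot rendering described above can be illustrated with a minimal sketch, not the authors' code: the image is divided into a grid of apertures, and each aperture is replaced by a Gaussian blob whose peak equals the mean intensity of the pixels it covers. The function name, the grid size, and the `sigma_frac` width parameter are illustrative assumptions.

```python
import numpy as np

def gaussian_dot_raster(image, grid=16, sigma_frac=0.35):
    """Render a 2-D grayscale image through a grid x grid raster of
    Gaussian dots, as in simulated prosthetic vision.

    Each dot's peak value is the mean image intensity within its
    aperture. `sigma_frac` sets the Gaussian width as a fraction of
    the dot spacing (an illustrative choice, not from the paper).
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    dy, dx = h / grid, w / grid            # aperture size in pixels
    sigma = sigma_frac * min(dy, dx)
    ys, xs = np.mgrid[0:h, 0:w]
    for i in range(grid):
        for j in range(grid):
            y0, y1 = int(i * dy), int((i + 1) * dy)
            x0, x1 = int(j * dx), int((j + 1) * dx)
            # Peak intensity = mean of the image over this aperture
            peak = image[y0:y1, x0:x1].mean()
            cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
            out += peak * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2)
                                 / (2.0 * sigma ** 2))
    return out
```

In the mouse-controlled and eye-locked modes, this rendering would be recomputed each frame with the raster grid offset by the current mouse or gaze position.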