December 2003
Volume 3, Issue 12
OSA Fall Vision Meeting Abstract  |   December 2003
Comparison of presentation modes for reading and face recognition in simulated prosthetic vision
Author Affiliations
  • Angela J. Kelley
    Johns Hopkins University School of Medicine, Ophthalmology Department, USA
  • Liancheng Yang
    Dept of Ophthalmology, Lions Vision Center, Johns Hopkins Univ School of Medicine, USA
  • David Hess
    Johns Hopkins School of Medicine, Ophthalmology Department, USA
  • Vivian Yin
    Johns Hopkins School of Medicine, Ophthalmology Department, USA
  • Gislin Dagnelie
    Johns Hopkins Univ School of Medicine, USA
Journal of Vision December 2003, Vol.3, 58. doi:10.1167/3.12.58
Abstract

Simulations can play an important role in the development of visual prostheses. Our simulations use images viewed through small numbers (16–256) of dots 1–2 degrees in diameter, presented in a video headset with built-in eye tracking. Each dot in the raster is filled with a Gaussian intensity distribution, whose peak value represents the mean image intensity across its aperture. Three presentation modes are used for reading and face recognition testing. The first mode uses a computer mouse, which enables the subject to move the pixel raster across the stationary background. In the second mode, movement is locked to the subject's eye position through the eye-tracking system, stabilizing the raster but allowing it to move across the stationary image. The third mode permits the subject to control the movement of the background image with the mouse, while raster movement is locked to the eye position. We will demonstrate these modes of operation, show comparative results from reading and face recognition tests, and discuss the implications for the design of visual prostheses for the blind.
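The dot-raster rendering described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes a square raster on a square grayscale image, uses square cells as the dot apertures (the actual stimuli use circular dots 1–2 degrees in diameter), and the function name `gaussian_dot_raster` and the `sigma_frac` parameter are hypothetical.

```python
import numpy as np

def gaussian_dot_raster(image, n_dots, sigma_frac=0.3):
    """Render a grayscale image as a square raster of Gaussian-profile dots.

    Each dot's peak intensity is the mean image intensity over its
    aperture (here approximated by a square cell), and intensity falls
    off from the dot center with a Gaussian profile. `sigma_frac` (an
    assumed free parameter) sets the Gaussian width relative to cell size.
    """
    h, w = image.shape
    side = int(np.sqrt(n_dots))              # e.g. 16 -> 4x4, 256 -> 16x16
    cell_h, cell_w = h // side, w // side

    # One Gaussian profile, reused for every cell.
    yy, xx = np.mgrid[0:cell_h, 0:cell_w]
    cy, cx = (cell_h - 1) / 2, (cell_w - 1) / 2
    sigma = sigma_frac * min(cell_h, cell_w)
    profile = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

    out = np.zeros((h, w), dtype=float)
    for r in range(side):
        for c in range(side):
            cell = image[r * cell_h:(r + 1) * cell_h,
                         c * cell_w:(c + 1) * cell_w]
            peak = cell.mean()               # peak = mean intensity in aperture
            out[r * cell_h:(r + 1) * cell_h,
                c * cell_w:(c + 1) * cell_w] = peak * profile
    return out
```

In a closed-loop simulator like the one described, the raster's placement over the source image would additionally be driven by the mouse or the eye tracker on every frame; the sketch above covers only the per-frame dot rendering.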

Kelley, A. J., Yang, L., Hess, D., Yin, V., Dagnelie, G. (2003). Comparison of presentation modes for reading and face recognition in simulated prosthetic vision [Abstract]. Journal of Vision, 3(12): 58, 58a, http://journalofvision.org/3/12/58/, doi:10.1167/3.12.58. [CrossRef]