Piti Irawan, James A. Ferwerda, Stephen R. Marschner; Simulating low vision in high dynamic range scenes. Journal of Vision 2004;4(8):879. doi: https://doi.org/10.1167/4.8.879.
Previous work [Peli91] has demonstrated the value of using digital processing techniques to simulate the appearance of images to low-vision observers. However, unlike images, real scenes often span high dynamic ranges of intensity that can cause serious visibility problems for people with low vision, and existing image processing techniques cannot simulate these effects. In this work we present a new algorithm for simulating the appearance of high dynamic range scenes to normal and low-vision observers. As input, the algorithm takes a stream of high dynamic range images captured by a digital camera or generated by a computer graphics system. The images are processed through a computational model of vision that accounts for the changes in glare, contrast sensitivity, color appearance, acuity, and visual adaptation that occur under varying illumination conditions for normal and low-vision observers. The output is a stream of low dynamic range images that show a display observer what the scene would look like to the scene observer. To demonstrate the utility of the algorithm we generate image sequences that simulate the dramatic differences in glare susceptibility, contrast visibility, and dark and light adaptation experienced by young and old observers in high dynamic range scenes.
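The pipeline described in the abstract (HDR input, a vision model applying glare, acuity, and contrast effects, then tone mapping to a displayable range) can be sketched roughly as follows. This is a toy illustration under assumptions of my own, not the authors' actual model: the function names, parameters, box-blur point-spread functions, and log-domain contrast compression are all simplifications chosen for brevity.

```python
import numpy as np

def blur1d(a, radius, axis):
    """Separable box blur along one axis (edge-padded); a crude PSF stand-in."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    pad = [(0, 0), (0, 0)]
    pad[axis] = (radius, radius)
    padded = np.pad(a, pad, mode="edge")
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="valid"), axis, padded)

def blur(img, radius):
    """2-D box blur built from two 1-D passes."""
    if radius < 1:
        return img
    return blur1d(blur1d(img, radius, 0), radius, 1)

def simulate_low_vision(hdr, glare_strength=0.1, glare_radius=8,
                        acuity_radius=2, contrast_gain=0.5):
    """Toy low-vision simulation of an HDR luminance image.
    All parameter values are illustrative, not calibrated."""
    # 1. Veiling glare: a fraction of scene light scatters broadly in the eye.
    veiled = hdr + glare_strength * blur(hdr, glare_radius)
    # 2. Acuity loss: blur away fine spatial detail.
    blurred = blur(veiled, acuity_radius)
    # 3. Contrast sensitivity loss: compress log-luminance contrast
    #    around the mean (contrast_gain < 1 reduces visible contrast).
    log_l = np.log10(blurred + 1e-6)
    log_l = log_l.mean() + contrast_gain * (log_l - log_l.mean())
    # 4. Tone map to a displayable [0, 1] range.
    return (log_l - log_l.min()) / (log_l.max() - log_l.min() + 1e-12)
```

A frame-by-frame simulator in the spirit of the paper would apply such a model to each HDR frame, with the glare, acuity, and contrast parameters driven by an observer profile (e.g. young vs. old) and by an adaptation state that tracks the recent illumination history.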