Previous work by Peli and colleagues has demonstrated the value of using digital processing techniques to simulate the appearance of images to low vision observers. Unlike images, however, real scenes often have high dynamic ranges of intensities that can cause serious visibility problems for people with low vision. Existing image processing techniques cannot simulate these effects. In this work we present a new algorithm for simulating the appearance of high dynamic range scenes to normal and low vision observers. As input, the algorithm takes a stream of high dynamic range images captured by a digital camera or generated by a computer graphics system. The images are processed through a computational model of vision that accounts for the changes in glare, contrast sensitivity, color appearance, acuity, and visual adaptation that occur under varying illumination conditions for normal and low vision observers. The output is a stream of low dynamic range images that show a display observer what the scene would look like to the scene observer. To demonstrate the utility of the algorithm, we generate image sequences that simulate the dramatic differences in glare susceptibility, contrast visibility, and dark and light adaptation that are experienced by young and old observers in high dynamic range scenes.
This work was supported by NSF grant IIS-0113310 and the Cornell Program of Computer Graphics.