David Luebke; Computational Display for Virtual and Augmented Reality. Journal of Vision 2017;17(10):910. doi: https://doi.org/10.1167/17.10.910.
Wearable displays for virtual and augmented reality face tremendous challenges, including:

- Near-eye display: how to put a display as close to the eye as a pair of eyeglasses, where the eye cannot bring it into focus?
- Field of view: how to fill the user's entire vision with displayed content?
- Resolution: how to fill that wide field of view with enough pixels, and how to render all of those pixels? A "brute force" display would require 10,000×8,000 pixels per eye!
- Bulk: displays should be as unobtrusive as sunglasses, but optics dictate that most VR displays today are bigger than ski goggles.
- Focus cues: today's VR displays provide binocular display but only a fixed optical depth, thus missing the monocular depth cues from defocus blur and introducing vergence-accommodation conflict.

Overcoming these challenges requires understanding and innovation in vision science, optics, display technology, and computer graphics.
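The "brute force" pixel count above can be sanity-checked with simple arithmetic. A sketch, assuming illustrative numbers not stated in the abstract: roughly a 200°×160° per-eye field of view and about 50 pixels per degree, in the neighborhood of 20/20 foveal acuity:

```python
# Rough sanity check of the "brute force" resolution quoted above.
# Assumed numbers (not from the abstract): ~200 x 160 degree per-eye
# field of view, ~50 pixels per degree of visual angle.
FOV_H_DEG = 200
FOV_V_DEG = 160
PIXELS_PER_DEGREE = 50

width = FOV_H_DEG * PIXELS_PER_DEGREE   # 10,000 pixels wide
height = FOV_V_DEG * PIXELS_PER_DEGREE  # 8,000 pixels tall
print(f"{width} x {height} = {width * height / 1e6:.0f} Mpixels per eye")
# -> 10000 x 8000 = 80 Mpixels per eye
```

Rendering on the order of 80 megapixels per eye, per frame, at VR refresh rates is far beyond a uniform-resolution pipeline, which motivates the computational-display tradeoffs described next.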
I will describe several "computational display" VR/AR prototypes in which we co-design the optics, display, and rendering algorithm with the human visual system to achieve new tradeoffs. These include light field displays, which sacrifice spatial resolution to provide thin near-eye display and focus cues; pinlight displays, which use a novel and very simple optical stack to produce wide field-of-view see-through display; and a new approach to foveated rendering, which uses eye tracking and renders the peripheral image with less detail than the foveal region. I will also discuss our current efforts to "operationalize" vision science research, focusing on peripheral vision, crowding, and saccadic suppression artifacts.
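The core idea behind foveated rendering can be sketched in a few lines: given a tracked gaze point, spend full detail only near the fovea and reduce it with eccentricity. The radii and rate levels below are illustrative assumptions, not the values used in the authors' system:

```python
import math

def detail_level(pixel_deg, gaze_deg, foveal_radius=5.0, mid_radius=15.0):
    """Return a relative sampling rate (1.0 = full resolution) for a
    pixel, given pixel and gaze positions in degrees of visual angle.
    Thresholds are hypothetical, chosen only for illustration."""
    # Eccentricity: angular distance from the tracked gaze point.
    ecc = math.hypot(pixel_deg[0] - gaze_deg[0], pixel_deg[1] - gaze_deg[1])
    if ecc <= foveal_radius:
        return 1.0   # foveal region: full shading rate
    if ecc <= mid_radius:
        return 0.5   # mid-periphery: half rate
    return 0.25      # far periphery: quarter rate

# Example with gaze at the center of the view (0 deg, 0 deg):
print(detail_level((2.0, 1.0), (0.0, 0.0)))    # near fovea -> 1.0
print(detail_level((30.0, 10.0), (0.0, 0.0)))  # far periphery -> 0.25
```

In a real renderer the discrete levels would map to coarser shading or lower-resolution render targets, and the falloff would be tuned against peripheral-vision and crowding data, which is exactly where the vision-science questions above come in.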
Meeting abstract presented at VSS 2017