Abstract
1 Introduction
Latency mitigation remains a fundamental issue in head-tracked virtual reality systems. Although predictive compensation can reduce perceived latency, improper calibration can introduce other visual artifacts. The present study examines the discriminability of these prediction artifacts during active head rotation in a two-alternative forced-choice detection task.

2 Method
A virtual environment (VE) was rendered with a custom graphics engine and displayed using an HTC Vive virtual reality headset. Predictive compensation was manipulated to overshoot (display lag) or undershoot (display lead) the estimated end-to-end latency by up to 80 milliseconds, producing concomitant changes in image stability. Participants yawed their heads back and forth in time with a metronome and judged whether sequentially presented VE conditions were perceptually equal in terms of image stability. Overshoot and undershoot trials were presented in random order across two interleaved staircases following a three-down, one-up adaptive procedure.

3 Results
Psychometric estimates of the point of subjective equality (PSE) and just noticeable difference (JND) were computed from the results of five subjects. Estimates of the absolute threshold, PSE, and JND were consistent with previous findings but showed greater sensitivity for undershoot trials than for overshoot trials.

4 Conclusion
In general, participants were most sensitive to prediction artifacts that manifested as display lead. These findings are among the first to explicitly compare the effects of display lead and display lag on judgments of perceptual stability in a wide field-of-view virtual reality head-mounted display.
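To make the adaptive procedure concrete, the following is a minimal Python sketch of two interleaved three-down, one-up staircases of the kind described in the Method. The starting level (80 ms), step size, reversal criterion, and the simulated observer are illustrative assumptions, not the study's actual parameters or data.

import random


class ThreeDownOneUpStaircase:
    """One adaptive staircase: three consecutive "different" responses lower
    the stimulus level, a single "same" response raises it."""

    def __init__(self, start_level=80.0, step=8.0, max_reversals=10):
        self.level = start_level        # current prediction-error magnitude (ms); assumed start
        self.step = step                # assumed fixed step size (ms)
        self.max_reversals = max_reversals
        self.reversals = []             # stimulus levels recorded at direction reversals
        self.consecutive_detections = 0
        self.last_direction = None      # +1 = level moved up, -1 = level moved down

    def update(self, detected_difference):
        """Apply the three-down, one-up rule to one trial's response."""
        if detected_difference:
            self.consecutive_detections += 1
            if self.consecutive_detections == 3:
                self.consecutive_detections = 0
                self._move(-1)
        else:
            self.consecutive_detections = 0
            self._move(+1)

    def _move(self, direction):
        # A reversal occurs when the staircase changes direction.
        if self.last_direction is not None and direction != self.last_direction:
            self.reversals.append(self.level)
        self.last_direction = direction
        self.level = max(0.0, self.level + direction * self.step)

    @property
    def finished(self):
        return len(self.reversals) >= self.max_reversals

    def threshold_estimate(self):
        """Mean of the last six reversal levels, a common threshold summary."""
        tail = self.reversals[-6:]
        return sum(tail) / len(tail)


if __name__ == "__main__":
    # Two interleaved staircases, one per artifact direction, probed in random
    # order with a simulated observer whose detection probability grows with
    # the size of the prediction error (a toy psychometric function).
    random.seed(1)
    staircases = {"overshoot (lag)": ThreeDownOneUpStaircase(),
                  "undershoot (lead)": ThreeDownOneUpStaircase()}
    while not all(s.finished for s in staircases.values()):
        name, stair = random.choice(
            [item for item in staircases.items() if not item[1].finished])
        p_detect = min(0.95, stair.level / 100.0)
        stair.update(random.random() < p_detect)
    for name, stair in staircases.items():
        print(name, "threshold ~", round(stair.threshold_estimate(), 1), "ms")

Under a three-down, one-up rule the level converges toward the stimulus detected on roughly 79% of trials (where p^3 = 0.5), which is the sense in which averaging the final reversals approximates a detection threshold.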
Meeting abstract presented at VSS 2017