Abstract
Reduction of end-to-end latency ('motion-to-photons') is critical for convincing, high-fidelity virtual reality and for scientific uses of VR that mimic real-world interactions as closely as possible. We measured the end-to-end latency of a real-time infrared camera-based tracking system (Vicon), with rendering on a standard graphics PC and a head-mounted display (nVis SX111 HMD). A 100Hz camera captured both a tracked 'wand' and the rendered object (a sphere) on the display screen as the wand was moved from side to side. Cross-correlation of the centroid positions of the tracked wand and the rendered sphere allowed us to calculate the end-to-end latency of the system for different displays. With our HMD (an LCD display), latency was about 40ms (± 2ms), whereas for a CRT it was 30ms. Because our display was refreshed at 60Hz and rendering time was less than 16.6ms, we could wait for the latest possible Vicon tracker coordinate (available at 250Hz) before rendering the next frame and swapping buffers. This reduced latency by 9ms (to 31ms). In a psychophysical experiment, we showed that a reduction in latency of this magnitude was easily detectable. Three observers waved a wand, rendered as a multi-faceted ball, and, in a forced-choice paradigm, identified whether the latency between hand movement and rendered stimulus movement was 'high' or 'low' (50% of trials were of each type; 4 practice trials including both types preceded each run). We varied the latency difference by a combination of (i) adding artificial latency to one stimulus and (ii) minimizing the latency of the shorter-latency stimulus. Plotting d' against log latency difference and fitting a straight line showed that the threshold difference (d' = 1) was less than 4ms for all participants. This corresponds to a remarkably low Weber fraction of about 10%.
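The cross-correlation step described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the wand and sphere centroid positions have already been extracted as one-dimensional arrays sampled at the camera rate (100Hz), and it simulates a side-to-side motion with a known 40ms (4-sample) delay to show how the lag of the correlation peak recovers the latency. The function name and the simulated signals are hypothetical.

```python
import numpy as np

FS_HZ = 100  # camera frame rate assumed from the abstract (100Hz)

def estimate_latency_ms(wand: np.ndarray, sphere: np.ndarray, fs_hz: int = FS_HZ) -> float:
    """Estimate display latency (ms) as the lag at which the rendered-sphere
    centroid trace best matches the tracked-wand centroid trace."""
    w = wand - wand.mean()      # remove DC offset so the peak reflects motion
    s = sphere - sphere.mean()
    xcorr = np.correlate(s, w, mode="full")   # full cross-correlation
    lags = np.arange(-len(w) + 1, len(s))     # lag axis in samples
    best_lag = lags[np.argmax(xcorr)]         # positive => sphere lags wand
    return 1000.0 * best_lag / fs_hz

# Simulated side-to-side wand motion (0.5Hz sine, 5s) and a display trace
# delayed by 4 samples, i.e. 40ms at 100Hz.
t = np.arange(0, 5, 1 / FS_HZ)
wand = np.sin(2 * np.pi * 0.5 * t)
sphere = np.roll(wand, 4)

print(estimate_latency_ms(wand, sphere))  # → 40.0
```

With real camera data, the centroid traces would be noisy and of unequal amplitude, but the peak of the cross-correlation still gives the delay, with a resolution of one camera frame (10ms here); interpolating around the peak, or averaging over repeated sweeps, can refine the estimate below the frame interval.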
Meeting abstract presented at VSS 2017