Abstract
Virtual Reality (VR) represents a paradigm shift in terms of user experience and immersion within a virtual environment. Van Dam, Li & Ernst (2012; 2016) investigated how proprioceptive and visual information are integrated over time in a distal display, leading to a perceived lag and increased precision. We extend this approach to investigate how visual information is integrated over time in VR and whether the additional latent information, such as parallax and binocular depth cues, plays a role in participant performance. Two separate experiments were run. In both experiments participants were placed in a VR space, using an Oculus headset, in which a target moved horizontally for variable durations of time at approximately 1 m from eye level. The target could be either reliable (single dot) or unreliable (Gaussian dot cloud) in terms of its position. Participants were asked to judge whether the last target position was to the left or right of a comparison stimulus shown after target disappearance. In the first experiment the target stimulus was displayed continuously throughout its movement (Continuous Movement condition). In the second experiment the target stimulus was displayed intermittently (Intermittent Movement condition; see Van Dam, Li, & Ernst, 2016), for a duration of 200 ms every 500 ms, thus decreasing the reliability of the target velocity signal. Results show that for target judgements in the Intermittent Movement condition, participants integrated position information over time, leading to a perceived lag of the unreliable target in space. This confirms previous results by Van Dam et al. (2016). For the Continuous Movement condition there was no difference in lag between unreliable and reliable targets, indicating that participants use the velocity signal to predict target position. For the unreliable target in this condition we found a significant improvement in perceptual precision, indicating that position information is nonetheless integrated over time.
Meeting abstract presented at VSS 2018
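
The following is a minimal illustrative sketch, not the authors' model, of the reasoning in the abstract: averaging noisy position samples of a moving target over time improves precision but makes the estimate trail behind the target (a perceived lag), whereas extrapolating the previous estimate with the velocity signal before updating removes the lag. The exponential-smoothing update, target speed, noise level, and weight w are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' model): temporal integration of noisy
# position samples for a target moving at constant velocity. Position-only
# integration lags the target; velocity-aided prediction removes the lag while
# keeping the precision benefit. All parameter values are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01      # time step (s), assumed
speed = 0.2    # target speed (m/s), assumed
sigma = 0.05   # position noise SD (m): stands in for the "unreliable" dot cloud
w = 0.05       # weight given to each new noisy sample (0 < w <= 1), assumed

t = np.arange(0.0, 2.0, dt)
true_pos = speed * t
samples = true_pos + rng.normal(0.0, sigma, size=t.size)

est_static = np.empty_like(t)   # integrate positions only
est_predict = np.empty_like(t)  # extrapolate with velocity, then integrate
est_static[0] = est_predict[0] = samples[0]

for i in range(1, t.size):
    # Position-only integration: weighted average of past samples -> lags the target.
    est_static[i] = (1 - w) * est_static[i - 1] + w * samples[i]
    # Velocity-aided integration: predict the previous estimate forward first.
    predicted = est_predict[i - 1] + speed * dt
    est_predict[i] = (1 - w) * predicted + w * samples[i]

half = t.size // 2  # ignore the initial transient when summarising
print(f"final lag, position-only integration:  {true_pos[-1] - est_static[-1]:+.3f} m")
print(f"final lag, velocity-aided integration: {true_pos[-1] - est_predict[-1]:+.3f} m")
print(f"estimate SD vs. raw sample SD: "
      f"{np.std((est_predict - true_pos)[half:]):.3f} vs. {sigma:.3f} m")
```

Under these assumptions the position-only estimate settles roughly speed * dt * (1 - w) / w behind the true target position, while the velocity-aided estimate tracks the target without systematic lag and with far less variability than a single noisy sample, mirroring the lag and precision pattern reported above.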