Abstract
In head-mounted virtual reality systems in which haptic feedback is provided by matching objects of the real world with objects of the virtual world, the required accuracy of the mapping between virtual and real space depends on the accuracy of the visual-motor mapping of the user's sensorimotor system. Using a system consisting of an Oculus DK2 head-mounted display and the Leap Motion controller, through which participants can see renderings of their hands, we probed participants' tolerance to distortions of the mapping between motor space and visual space. Participants were asked to hold their open hands symmetrically in front of them such that the two thumbs were close to, but not touching, each other. We then manipulated the visual-motor mapping in two ways: either a linear, homogeneous translation of both hands, or a nonlinear transformation corresponding to a compression or expansion of the space between the two hands. Using this technique, we displaced the rendered hands in one of six directions (two each along the lateral, anterior-posterior, and vertical axes) and asked participants to indicate which one it was. The detection threshold was defined as the displacement at which participants responded correctly in 58% (1/6 + 0.5 × 5/6) of the trials. A 2 × 3 ANOVA (condition × direction) revealed a main effect of condition (F(1,54) = 75, p < 0.001). Participants were more sensitive to a relative displacement of the hands (threshold: 4 cm) than to a displacement of their absolute location in space (5.3 cm). Knowing these detection thresholds informs the design of haptic devices for mixed VR, since they determine users' tolerance for displacements between real and virtual objects. The results also suggest that the coordination of the relative positions of the hands is more accurate than that of their absolute location.
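
As a rough illustration of the 58% criterion described above, the following Python sketch computes the chance-corrected performance level for a six-alternative task and reads off a detection threshold from percent-correct data by linear interpolation. The displacement values and proportions below are hypothetical examples, not the study's data, and interpolation is only one of several ways such a threshold could be estimated.

    # Minimal sketch (not the authors' analysis code): chance-corrected criterion
    # for a 6-alternative direction-judgment task and a threshold read-off from
    # hypothetical percent-correct data.
    import numpy as np

    n_alternatives = 6
    chance = 1.0 / n_alternatives                 # guessing rate for six directions
    criterion = chance + 0.5 * (1.0 - chance)     # 1/6 + 0.5 * 5/6 ≈ 0.583, i.e. the 58% level

    # Hypothetical example data: displacement magnitudes (cm) and proportion correct.
    displacements = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
    p_correct     = np.array([0.20, 0.28, 0.40, 0.55, 0.68, 0.80, 0.88])

    # Detection threshold: the displacement at which proportion correct crosses the criterion.
    threshold = np.interp(criterion, p_correct, displacements)
    print(f"criterion = {criterion:.3f}, threshold ≈ {threshold:.2f} cm")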
Meeting abstract presented at VSS 2018