Abstract
Last year at VSS we reported that the paths of more than half of our subjects were altered when they performed an open-loop walking task toward a goal surrounded by an off-center frame. Using an immersive virtual reality (VR) environment, we presented, for 1 s, a scene of a goal surrounded by either a centered or an off-center frame. The purpose of the current study was to determine whether results obtained with a VR scene reflect real-world behavior. To that end, a real-world replica was modeled after the original virtual scene, and a similar procedure was used. Each subject (n=12) was asked to walk to the goal in 3 different worlds: the real world (RW), the virtual world (VW), and the real world viewed through cameras in the VR headset (CW). Worlds were blocked, and the sequence in which the worlds were presented was counterbalanced across subjects. Walking paths were measured, and we calculated the influence of frame position on the path as well as endpoint accuracy and precision. We performed a mixed-design ANOVA on each of the 3 measures. The results showed a significant interaction between world and its position in the sequence for all measures (p<.01). When presented first in the sequence, the VW showed path deviations due to the off-center frame of 7.6, compared with −0.7 in the RW and 0.7 in the CW. In the second and third blocks of the sequence, the performance difference between the worlds decreased to an average of 1.2. Measures of accuracy and precision showed similar interaction effects. The similarity of RW and CW performance indicates that the VW effect cannot be explained by the headset, but is instead due to the nature of the visual stimuli. These results indicate that untrained VR performance does not accurately reflect real-world performance, and that task experience can significantly reduce or eliminate the difference between the two. This underscores the importance of providing prior task experience when using VR to study real-world behavior.