Abstract
Purpose: Previous work has shown that humans continuously use visual feedback about the position and movement of the hand to control goal-directed hand movements online. In these studies, visual error signals were predominantly in the image plane and thus available in an observer's retinal image. We investigated how humans use visual feedback about finger depth, provided by binocular disparities alone, to control pointing movements.

Methods: In a calibrated virtual reality environment, subjects moved a finger from a starting point on the right-hand side of the virtual workspace to point to and touch a target ball that appeared at a random position on the left-hand side of the display, 30 cm from the starting position. A fixed platform was co-aligned with the starting position, and on each trial a target ball mounted on a robot arm was co-aligned with the visual target. Visual feedback of the unseen finger, whose position was recorded at 120 Hz with an Optotrak system, was provided in real time by a rendered finger. On one third of the trials, as the finger passed behind a virtual occluder positioned one third of the way between start and target, the position of the virtual fingertip was perturbed by 1 cm, either in depth along the line of sight or in the image plane.

Results: All subjects corrected for the perturbations in depth as well as for those in the image plane. Mean corrections compensated for 50% of the perturbation in both directions, but corrections for perturbations in depth were slower than corrections for perturbations in the image plane (reaction times of 167 vs. 117 ms).

Conclusions: Given that acuity for depth from binocular disparity is much lower than acuity for position in the image plane, an optimal estimator that integrates uncertain visual feedback over time can likely account for the apparent difference in delays.
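The conclusion appeals to an optimal estimator that integrates noisy visual feedback over time. As a minimal illustration of why a noisier feedback channel yields slower corrections, the sketch below runs a one-dimensional Kalman filter on a 1 cm step perturbation and reports how long the estimate takes to cross half the step. The noise standard deviations, process noise, and detection threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

def mean_step_response(obs_noise_sd, n_steps=60, perturbation=1.0):
    """Expected 1-D Kalman estimate after a step change in fingertip position.

    obs_noise_sd is the SD of the visual position signal (cm). Treating the
    disparity-defined depth channel as noisier than the image-plane channel
    is an assumption made for illustration, not a value from the study.
    """
    q = 1e-3                      # process-noise variance (illustrative)
    r = obs_noise_sd ** 2         # observation-noise variance
    # Start the posterior variance at its steady state, as if the filter had
    # been tracking the unperturbed finger for some time.
    p = (-q + np.sqrt(q * q + 4.0 * q * r)) / 2.0
    x_hat, trace = 0.0, []
    for _ in range(n_steps):
        p += q                    # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain; shrinks as r grows
        x_hat += k * (perturbation - x_hat)  # update toward the mean observation
        p *= 1.0 - k
        trace.append(x_hat)
    return np.array(trace)

# At 120 Hz sampling, frames-to-threshold translates into correction latency.
for label, sd in [("image plane", 0.1), ("depth (disparity)", 0.4)]:
    trace = mean_step_response(sd)
    frames = int(np.argmax(trace > 0.5)) + 1  # first frame past half the step
    print(f"{label:17s}: {frames} frames (~{1000 * frames / 120:.0f} ms) to reach 0.5 cm")
```

With these assumed parameters, the noisier depth channel crosses the threshold roughly 50 ms later than the image-plane channel, comparable in magnitude to the 167 vs. 117 ms latency difference reported above; this is only a plausibility sketch, not the study's model fit.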