Abstract
Purpose: We measured the dynamic properties of visual feedback control of hand orientation during goal-directed hand movements.

Method: Subjects placed a cylinder onto a target surface while viewing a binocular image of the surface in a 3D virtual environment. A robot arm aligned a real target surface with the virtual surface image, so that subjects actually placed the cylinder on a real surface on each trial. An optical tracking device measured the position and orientation of the cylinder throughout the movement; these data were used in real time to render the cylinder in the virtual environment. On a small proportion of trials, we added a random perturbation to the orientation of the virtual cylinder early in the movement, either in the subjects' image plane or in depth. We used the recorded kinematic data to compute the strength and timing of subjects' corrections to these perturbations. As a baseline for comparison, we measured the same responses to similar perturbations of the target surface. On all trials, the screen flashed repeatedly for 167 ms at movement initiation to mask the motion cues introduced on perturbation trials. Subjects reported that they were unaware of the perturbations.

Results: Subjects corrected for 30% of the feedback perturbations, both in the image plane and in depth, with an average delay of 250 ms. They corrected more completely for the target perturbations (75%), but with the same 250 ms delay.

Conclusions: Humans use continuous visual feedback about the orientation of the hand for online control of fast, goal-directed movements. Depth information contributes as strongly to online control as information about orientation in the image plane, and with similar effective delays. Visual information about the hand is probably less salient than information about the target because of both its greater uncertainty (the hand moves in the visual periphery) and the presence of other cues to hand orientation (proprioceptive and feedforward motor information).
Supported by NIH EY-13319
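The correction strength (gain) and timing (latency) analysis described in the Method can be illustrated with a minimal sketch. This is not the authors' analysis code; the trajectory data, the 10 ms sample interval, and the 1° divergence threshold below are all hypothetical choices for illustration only.

```python
# Hypothetical sketch of a correction-gain/latency analysis:
# compare a perturbed-trial orientation trajectory (in degrees)
# with the mean unperturbed (baseline) trajectory.

def correction_gain_and_latency(perturbed, baseline, perturbation_deg,
                                dt_ms, threshold_deg=1.0):
    """Return (gain, latency_ms) for one perturbed trial.

    gain       -- fraction of the perturbation corrected by movement end,
                  estimated from the final divergence from baseline.
    latency_ms -- time at which the trajectories first diverge by more
                  than threshold_deg, or None if they never do.
    """
    # Pointwise difference between perturbed and baseline trajectories.
    diffs = [p - b for p, b in zip(perturbed, baseline)]

    # Latency: first sample where the trajectories measurably diverge.
    latency_ms = None
    for i, d in enumerate(diffs):
        if abs(d) > threshold_deg:
            latency_ms = i * dt_ms
            break

    # Gain: final divergence from baseline, as a fraction of the
    # perturbation size (the correction opposes the perturbation).
    gain = abs(diffs[-1]) / abs(perturbation_deg)
    return gain, latency_ms

# Illustrative use with synthetic trajectories sampled every 10 ms:
# a 10-degree perturbation, with a 3-degree correction emerging at 250 ms.
baseline = [0.0] * 100
perturbed = [0.0] * 25 + [3.0] * 75
gain, latency_ms = correction_gain_and_latency(perturbed, baseline,
                                               perturbation_deg=10.0,
                                               dt_ms=10)
# gain -> 0.3 (30% correction), latency_ms -> 250
```

In practice such an analysis would average over trials and smooth the trajectories before thresholding; the sketch keeps only the core gain/latency logic.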