Abstract
Using the movement kinematics of a simple object placement task, we have estimated the time-varying weights that subjects give to binocular and monocular cues to 3D surface orientation when planning and guiding a simple goal-directed hand movement. Subjects viewed a textured disk in a 3D virtual environment with full binocular vision. They were asked to move a cylinder from a starting platform and to place it flush onto the target surface. An optical tracking system was used to measure the position and orientation of the cylinder in real time, and the moving cylinder was rendered in the virtual display in alignment with the physical cylinder being moved by the subject. On each trial, a real target surface, invisible to subjects, was positioned and oriented by a robot arm in alignment with the virtual target surface. Target surfaces at a range of slants (from 16° to 44° away from the fronto-parallel plane) were presented randomly from trial to trial. In a first experiment, binocular cues (disparity) and monocular cues (outline shape and texture) were made to suggest slightly different slants. Analysis of subjects' movements showed that they effectively weighted binocular cues much more heavily at low slants than at high slants, where they gave more weight to monocular cues. A second experiment dissociated the relative contributions of the depth cues to motor planning and to on-line control of the movements. For an average slant of 35°, when the cue conflicts were induced prior to movement onset, so that the information affected motor planning, subjects' movements were influenced more heavily by monocular information than by stereo information. When the conflicts were induced by perturbing the visual information during the movement, subjects' corrective movements were more heavily influenced by stereo information. The results suggest that on-line control of reaching movements relies more heavily on stereo depth cues than does motor planning.
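For concreteness, the effective cue weights referred to above can be expressed with a standard linear cue-combination model; this is a sketch of the presumed form of the analysis, and the symbols are illustrative rather than taken from the paper:

\[
\hat{S}(t) \;=\; w_{\mathrm{bin}}(t)\, S_{\mathrm{bin}} \;+\; w_{\mathrm{mono}}(t)\, S_{\mathrm{mono}}, \qquad w_{\mathrm{bin}}(t) + w_{\mathrm{mono}}(t) = 1,
\]

where \(S_{\mathrm{bin}}\) and \(S_{\mathrm{mono}}\) are the slants specified by the (conflicting) binocular and monocular cues, \(\hat{S}(t)\) is the slant implied by the cylinder's orientation at time \(t\) during the movement, and the time-varying weight \(w_{\mathrm{bin}}(t)\) would be recovered by regressing the kinematically implied slant against the two cue-specified slants across trials.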