Abstract
Despite evidence for limitations on memory across saccadic eye movements, a variety of recent results suggest that information about the spatial structure of a display is retained, and can facilitate visual search and saccade programming. We have demonstrated such facilitation of saccade programming from prior views in a 3-D virtual environment, in a task where observers copy a simple model. In this task, observers make large, coordinated movements of eye, head, and hand from right to left and back, across the display, in order to pick up the model components and place them in the copy. We examined the relative latency of eye, head, and hand for the movements following pickup, and following placement of a piece. We found that head and hand movements both precede the eye by 150–200 msec (head) or 200–400 msec (hand), depending on the direction of movement. This is a substantially longer lead time than is observed with single movements. Thus the advantage afforded by the use of spatial memory may be to allow early initiation of the hand and head movements, which are much slower than the eye. The importance of visual spatial memory is that it allows planning, and consequently coordination, of the movements of the different effectors. In addition, the head and hand latencies were well correlated on a movement-by-movement basis. The variance accounted for by the correlation was 0.6 in the movements following pickup, and 0.4 in those following placement. Thus there is a pronounced tendency for head and hand movements to be temporally linked. This supports the conjecture (Flanders et al., Exp Brain Res, 1999) that the head might be used as a stable reference frame for the hand when the body is moving.
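The "variance accounted for" figures (0.6 and 0.4) are squared Pearson correlation coefficients (r²) between paired head and hand latencies. A minimal sketch of that computation, using hypothetical latency values for illustration (these are not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical head and hand latencies in msec, one pair per movement.
head_latencies = [180, 150, 200, 170, 160, 190]
hand_latencies = [300, 250, 380, 290, 270, 340]

r = pearson_r(head_latencies, hand_latencies)
print(f"r^2 (variance accounted for) = {r ** 2:.2f}")
```

An r² of 0.6 thus means that 60% of the movement-to-movement variability in one effector's latency is predictable from the other's.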