When pointing, grasping, or reaching to touch objects, humans typically make saccadic eye movements to fixate the target object just prior to or early in the hand movement (Abrams, Meyer et al., 1990; Biguer, Jeannerod et al., 1982; Prablanc, Echallier et al., 1979; Prablanc, Pelisson et al., 1986). While there is some variability in the relative timing of eye and hand movements during natural movements (Abrams et al., 1990; Carnahan & Marteniuk, 1991; Pelz, Hayhoe et al., 2001), depending, for example, on specific task demands, the eye typically fixates target objects just before or slightly after the beginning of hand movements but well before their completion (Frens & Erkelens, 1991; Helsen, Elliott et al., 2000; Starkes, Helsen et al., 2002) and then maintains fixation on the target of a hand movement until the movement has been completed, even in a sequential movement task in which subjects must touch a sequence of targets (Neggers & Bekkering, 2000, 2001, 2002).
The picture that emerges of eye–hand coordination is that the CNS usually ensures that the targets of hand movements are fixated during at least the entire last half of the movements, so that reliable visual information is available for online control (Elliott, 1992). Because of the relative timing of saccade and hand movement initiation, the CNS has essentially the same information about target objects available to plan both eye and hand movements—that is, peripheral visual information and information stored in visual short-term memory (VSTM) from previous fixations, either to the target itself or to other objects involved in ongoing behavior. This suggests that the CNS may use a common spatial representation of targets to plan both eye and hand movements. Evidence for the hypothesis that a common spatial representation guides saccade and hand movement planning has been equivocal. Some studies find little correlation between eye and finger endpoints when endpoint variance is a result of simple variable error, even when the target was extinguished at movement onset (Biguer, Prablanc et al., 1984; Prablanc et al., 1979). Other studies have shown stronger correlations when endpoint variability is created by changes in illusory configurations (e.g., the Müller–Lyer illusion; Binsted, Chua et al., 2001; Binsted & Elliott, 1999; de Grave, Franz et al., 2006; Mack, Heuer et al., 1985).
Here, we take a new approach to testing the common spatial command hypothesis for coordinated eye and hand movements. It is based on the finding that the CNS integrates position information from VSTM with the immediately available peripheral visual information about a target when generating initial hand movement plans (Brouwer & Knill, 2007, 2009). The common command hypothesis predicts that information in VSTM will influence saccade plans as much as hand movement plans during the same coordinated movements. In the experiment described here, subjects naturally executed temporally coordinated saccades and hand movements in a sequential pointing task; however, their saccades were significantly less influenced by information in VSTM than were the initial kinematics of their hand movements. This suggests a decoupling of the computations driving saccade and hand movement planning, whereby separate spatial estimates contribute to the two motor plans, at least at the stage where online visual information is integrated with information from VSTM.