People can move their hands to a previously seen target with closed eyes by using the target's memorized location; however, most hand movements are made under visual guidance. Visual information about a target object is typically available both for planning movements and for controlling them online. When performing complex tasks, humans appear to coordinate eye movements with hand movements to maximize the visual information available for guiding hand movements and to minimize any reliance on memory (Ballard, Hayhoe, & Pelz, 1995). It has even been argued that only very little information is stored across saccades (Henderson & Hollingworth, 1999; Irwin, 1991) and that, instead of relying on stored information, the "world is used as an external memory" (O'Regan, 1992; Rensink, 2000). Considered in the context of the spatial information needed to plan and guide hand movements, this might appear to make sense: remembered target location information is old and, in a changing world, possibly no longer correct. Even when nothing has changed, memorized location information might be expected to be more uncertain than visual information, for example because of noise introduced when the memorized location is remapped to correct for eye movements (Henriques, Klier, Smith, Lowy, & Crawford, 1998) or is immediately remapped into a more stable reference frame (Andersen, Essick, & Siegel, 1985; Sparks & Mays, 1983). These and other drawbacks could explain the lower precision of movements toward remembered locations than toward visible ones (Binsted, Rolheiser, & Chua, 2006; Heath, Westwood, & Binsted, 2004). They might also explain the findings that subjects make repeated eye movements in natural eye–hand tasks (Ballard et al., 1995) and in scene comparison tasks (Gajewski & Henderson, 2005), and that visual search may operate without the use of memory (Horowitz & Wolfe, 1998).