Steven L. Prime, Jonathan J. Marotta; Gaze strategies during visually-guided and memory-guided grasping. Journal of Vision 2011;11(11):967. https://doi.org/10.1167/11.11.967.
© ARVO (1962-2015); The Authors (2016-present)
Vision plays a crucial role in guiding motor actions. Previous work in our laboratory has shown that initial gaze position is tightly linked to eventual grasp position, specifically to index finger placement during a precision grasp on symmetrical objects (Desanghere & Marotta, 2008). In contrast, perceptual tasks reveal gazes falling closer to the centre of mass (COM) when subjects look at computer-generated objects. But many grasping actions can be performed using our memory of an object's shape and location to guide our actions. For example, imagine glancing at the coffee mug on your desk but focusing your attention back on your computer screen as you reach out to grab your mug. Where do we look at objects to collect visual information about them when the objects are targets for future memory-guided grasps? This study was aimed at addressing this issue. Subjects reached out and grasped centrally placed symmetrical blocks under either closed-loop (visually-guided) or open-loop (memory-guided) conditions. In the memory-guided condition, subjects were shown the block for 1 s, controlled by shutter glasses, and then prompted to make an open-loop grasp either immediately after the shutter closed or after a 2 s delay. Results show that peak hand velocity was fastest during closed-loop reaches and slowest during open-loop reaches. Open-loop grasps were placed more rightward of the blocks' COM than closed-loop grasps. Initial gaze fixations during closed-loop grasps were directed to the top of the blocks, corresponding to the index finger's grasp point, suggesting that gaze targets future grasp points during the planning of the grasp. In the memory-guided condition, subjects spent more time looking closer to the centre of the block, suggesting that subjects analyse the block's overall shape to build a holistic perceptual representation for open-loop actions.