Abstract
Previous research showing different kinematics for visually guided and memory-guided grasping suggests that there are two control systems for object-directed action. Visually guided grasping relies on a highly accurate real-time system in the dorsal stream, whereas memory-guided grasping relies on less accurate information from the perception-based system in the ventral stream. In the present study, we explored this difference further by combining a primary grasping task, consisting of interleaved visually guided and memory-guided trials, with a secondary auditory perceptual task. In the primary task, participants were cued by an auditory tone to grasp 3-D target objects of varying size. On half of the trials, the targets were visible during the interval between the auditory cue and movement onset (visually guided); on the remaining trials, the targets were occluded from view at the time of the auditory cue (memory-guided). In the secondary task, participants listened to object names presented via headphones and gave a vocal response when the named object was a particular shape (20% probability). There were three conditions: 1) grasping in conjunction with the auditory task, 2) grasping alone, and 3) the auditory task alone. The results showed that memory-guided grasping was associated with a larger peak grip aperture than visually guided grasping. In addition, the introduction of the competing shape-classification task slowed manual reaction time for both types of grasping. Most importantly, however, movement execution time was slowed by the auditory task, and this effect was larger for memory-guided than for visually guided trials. Furthermore, vocal response times were slower on memory-guided than on visually guided trials. These results provide further support for the idea that memory-guided grasping relies on the processing of stored perception-based information that taps the same cognitive resources as an auditorily presented shape-discrimination task.