Abstract
We can distinguish two kinds of visual information that subjects can use to guide their hands and fingers, and that impose different computational requirements on visuo-motor processes.
When we pick up objects, we can move our hands towards visually perceived target locations. Such tasks can in principle be performed by establishing correspondences between target and hand locations in visual and motor space: it is sufficient for visuo-motor processes to represent these locations and the correspondences between them, without computing metric representations of target distances. In contrast, when we move our hands in the absence of a visual target location, for example when manually indicating the extent of an object, the task requires movements that match visually perceived distances. In this situation, visuo-motor processes must compute metric representations of those distances.
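To make this computational distinction concrete, the following is a minimal illustrative sketch (Python), not the authors' model: it assumes a shared 2-D coordinate frame, and the function names and parameters are hypothetical. In the first scheme the endpoint is defined by a correspondence between hand and target locations, which can be reached by nulling a feedback error; in the second, a metric extent must be represented explicitly because it defines the movement itself.

```python
import numpy as np

def move_to_target(hand_pos, target_pos, gain=0.5, tol=1e-3, max_steps=200):
    """Location-based control (illustrative): step the hand until its
    location corresponds to the target location. The endpoint is defined
    by a location match, not by a stored movement extent."""
    hand = np.asarray(hand_pos, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    for _ in range(max_steps):
        error = target - hand            # mismatch between the two locations
        if np.linalg.norm(error) < tol:  # stop when locations coincide
            break
        hand = hand + gain * error       # move toward the target location
    return hand

def move_over_distance(hand_pos, perceived_extent, direction):
    """Distance-based control (illustrative): the movement must reproduce
    a metric extent (e.g., the perceived size of an object), so that
    extent has to be represented explicitly."""
    hand = np.asarray(hand_pos, dtype=float)
    unit = np.asarray(direction, dtype=float)
    unit = unit / np.linalg.norm(unit)
    return hand + perceived_extent * unit  # endpoint set by the metric distance
```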
The experiment reported here used an adaptation paradigm to test whether observers rely on different visuo-motor systems in tasks that require the representation of metric distances and in tasks that do not.
In an adaptation phase, observers were presented with distorted visual feedback on their hand movements. In a subsequent testing phase without visual feedback, we measured how behavior changed in response to the distorted feedback. Two tasks were used in adaptation and testing. One required observers to move their hand to a visual target location; the other required them to move their hand over a target distance in the absence of a visual target location. The results show that behavioral changes were significantly larger when the same task was used during adaptation and testing than when the task was switched.
The findings suggest that human observers have two partially independent visuo-motor systems with different computational principles.