Abstract
Object-oriented actions require the computation of egocentric (subject-referenced) and allocentric (object-referenced) spatial features. However, systematic biases in the estimation of object distance and size occur when visual feedback of the hand and haptic feedback of the object are absent. In the present study, we investigated whether training with feedback about object position, with or without feedback about object size, calibrates object-oriented actions. In four experiments we combined grasping and reaching tasks with egocentric and allocentric feedback: i) a grasping task with vision of the thumb, ii) a grasping task with vision of both the thumb and the index finger, iii) a grasping task with vision of the thumb and tactile feedback of both the thumb and the index finger, and iv) a reaching task with vision and tactile feedback of the thumb. Each experiment was divided into three blocks: pre-training (vision of the object only), training (one of the feedback conditions), and post-training (vision of the object only). Objects were random-dot elliptical cylinders with varying relative depth (20, 40 mm), rendered in stereo and presented at different viewing distances (420, 470, 520 mm) with consistent vergence and accommodative information. We analyzed the terminal hand position and the terminal grip aperture before and after training. The accuracy of the terminal hand position improved after training in the first and fourth experiments, in which only egocentric feedback was provided; notably, this calibration was more effective in the reaching task than in the grasping task. By contrast, no effect on the transport component was found in the second and third experiments, in which both egocentric and allocentric feedback were provided. None of the training blocks calibrated the terminal grip aperture. These findings suggest that the simultaneous presence of egocentric and allocentric feedback hinders, rather than promotes, action calibration.
Meeting abstract presented at VSS 2014