Abstract
To know how far away objects are from us, the brain might use the coordinates contained in the plans of movements that would reach those objects. We investigated whether adaptation of pointing movements changes the visual perception of depth. We presented three-dimensional stimuli in a virtual environment, using head-mounted displays to present the stimuli and controllers to track pointing movements. The pointing targets – small red spheres – were presented at eye level, at two distances on each side of the observer’s midline. In adaptation trials, observers were asked to point at one of four possible targets as accurately as possible. After each pointing movement, visual feedback was provided about their terminal hand position. When targets were shown on the left side, the feedback represented the observer’s tracked hand position, whereas on the right side it was distorted. In separate sessions, the distorted feedback was shifted either further in depth or closer to the observer’s body. We thereby obtained a split field containing an adapted region on the right and a non-adapted region on the left. Aftereffects indicated that observers adapted to the distortion selectively in the right field. In the visual task, observers compared the sizes of two flashed spheres, one shown in the adapted and one in the non-adapted region. We found that spheres presented in the adapted region appeared smaller after subjects had adapted to point further in depth, as if visual space had been shifted in the direction of the motor adaptation of pointing movements. Consistently, spheres appeared larger when we reversed the direction of the distortion. These results show that depth perception is shaped by motor adaptation and support the idea that depth perception is constituted by the coordinates contained in three-dimensional pointing movements.
Acknowledgement: ERC Grant