Abstract
The brain represents multisensory mappings that are relevant for interaction with the world. These mappings mostly involve intrinsically relevant signals, such as vision and proprioception of the hand in reaching. Here, we studied how more arbitrary mappings are learned. The visuoauditory map we employed coupled the visual target position to the pitch of an accompanying sound: participants had to reach out to intercept a moving target, and the pitch of the accompanying sound was a function of the target position either on the screen or relative to the fixation direction (in different subsets of participants; n = 5 per group so far). Fixation direction was also varied in the experiment. Participants sat in front of a monitor with their heads immobilized by a bite-bar. Targets appeared at a variety of positions and moved leftward or rightward at a variety of velocities. After 500 ms the fixation point changed size and color, indicating that the reaching movement could be initiated. Our design involved a pre-test (intercepting visual targets), a learning phase (intercepting visual and audible targets, while the duration of target visibility was progressively reduced), and a testing phase (intercepting audible targets). Finger position at the moment of contact with the screen was measured with an Optotrak system, and fixation quality was assessed with an EyeLink II eye tracker. Participants in both groups could perform the task reasonably well: even for the audible targets, pointing positions were significantly correlated with the target position at the moment of interception. We are currently analyzing the within-subject pointing errors as a function of fixation direction, initial target position, and target velocity, which will give a general picture of the factors involved in the control of interception. More importantly, we will test the effect of the mapping (screen- versus gaze-centered) between participants, to examine whether the arbitrary mapping was better represented in screen- (or world-) centered or in gaze-centered coordinates.
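
To make the two candidate mappings concrete, the sketch below computes the pitch of the accompanying tone either from the target's position on the screen or from its position relative to the fixation direction. The function name, the log-linear (semitone) scaling, and all parameter values are illustrative assumptions, not the mapping actually used in the experiment.

```python
def target_pitch(target_x, fixation_x, mapping="screen",
                 base_hz=220.0, semitones_per_deg=1.0):
    """Pitch (Hz) of the accompanying tone for a target at horizontal
    position target_x, given fixation direction fixation_x.

    mapping="screen": pitch depends on the position on the screen.
    mapping="gaze":   pitch depends on the position relative to fixation.
    All parameter values are placeholders for illustration.
    """
    x = target_x if mapping == "screen" else target_x - fixation_x
    # Assumed log-linear (musical) scale: each unit of position shifts the
    # tone by a fixed number of semitones relative to a base frequency.
    return base_hz * 2.0 ** (semitones_per_deg * x / 12.0)

# Same target position, two fixation directions: the screen mapping yields
# the same pitch, the gaze-centered mapping does not.
for fix in (-10.0, 10.0):
    print(fix,
          round(target_pitch(5.0, fix, mapping="screen"), 1),
          round(target_pitch(5.0, fix, mapping="gaze"), 1))
```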
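
The planned per-participant analysis can be sketched in the same spirit: a correlation between pointing position and target position at interception, followed by a multiple regression of the pointing error on fixation direction, initial target position, and target velocity. Variable names and the least-squares formulation below are assumptions; the actual analysis pipeline may differ.

```python
import numpy as np
from scipy import stats

def analyse_pointing(point_x, target_x_at_contact,
                     fixation_dir, initial_x, velocity):
    """Per-participant analysis sketch; all variable names are assumptions.

    point_x             : finger position at screen contact (Optotrak)
    target_x_at_contact : target position at the moment of interception
    fixation_dir, initial_x, velocity : trial-wise factors
    """
    # 1) Did pointing track the (visual or audible) target? Correlation
    #    between pointing position and target position at interception.
    r, p = stats.pearsonr(point_x, target_x_at_contact)

    # 2) Which factors contribute to the pointing error? Multiple linear
    #    regression of the error on fixation direction, initial target
    #    position, and target velocity.
    error = np.asarray(point_x) - np.asarray(target_x_at_contact)
    X = np.column_stack([np.ones_like(error), fixation_dir, initial_x, velocity])
    coefs, *_ = np.linalg.lstsq(X, error, rcond=None)
    return r, p, coefs

# Example call with simulated trials, just to show the intended interface.
rng = np.random.default_rng(0)
n = 60
tgt = rng.uniform(-15, 15, n)
print(analyse_pointing(tgt + rng.normal(0, 2, n), tgt,
                       rng.choice([-10.0, 0.0, 10.0], n),
                       rng.uniform(-15, 15, n),
                       rng.uniform(-20, 20, n)))
```

The between-group comparison of the screen- and gaze-centered mappings would then compare such fits (or the raw pointing errors) across the two participant groups.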