Vision Sciences Society Annual Meeting Abstract | August 2010
Learning arbitrary visuoauditory mappings during interception of moving targets
Author Affiliations
  • Tobias Reh
    Department of Neurophysics, Philipps-Universität Marburg
  • Joost C. Dessing
    Centre for Vision Research, York University
    Canadian Action and Perception Network (CAPnet)
  • J. Douglas Crawford
    Centre for Vision Research, York University
    Canada Research Chair in Visuomotor Neuroscience
  • Frank Bremmer
    Department of Neurophysics, Philipps-Universität Marburg
Journal of Vision August 2010, Vol. 10, 882. https://doi.org/10.1167/10.7.882
Abstract

The brain represents multisensory mappings relevant for interaction with the world. These mappings mostly involve intrinsically relevant signals, such as vision and proprioception of the hand in reaching. Here, we studied how more arbitrary mappings are learned. The visuoauditory mapping we employed coupled visual target position to the pitch of an accompanying sound; participants had to reach to intercept a moving target. The pitch of the accompanying sound was a function of target position either on the screen or relative to the fixation direction (in different subsets of participants; n = 5 per group so far); fixation direction was also varied in the experiment. Participants sat in front of a monitor with their heads immobilized by a bite-bar. Targets appeared at a variety of positions and moved with a variety of velocities (leftward or rightward). After 500 ms the fixation point changed size and color, indicating that the reaching movement could be initiated. Our design involved a pre-test (intercepting visual targets), a learning phase (intercepting visual and audible targets, while the duration of target visibility was progressively reduced), and a testing phase (intercepting audible targets). Finger position at the moment of contact with the screen was recorded using Optotrak, and fixation quality was assessed using EyeLink II. Participants in both groups could perform the task reasonably well: even for the audible targets, pointing positions were significantly correlated with the target position at interception. We are currently analyzing the pointing errors within subjects as a function of fixation direction, initial target position, and target velocity. This will provide a general picture of the factors involved in the control of interception. More importantly, however, we will test the effect of mapping (screen-centered versus gaze-centered) between participants, to examine whether the arbitrary mapping was better represented in screen- (i.e., world-) or gaze-centered coordinates.
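
To make the two mapping conditions concrete, here is a minimal illustrative sketch in Python. The linear position-to-pitch function, the reference pitch, and the slope are assumptions for illustration only; the abstract does not specify the actual mapping used.

```python
# Sketch of the two visuoauditory mapping conditions described above.
# The linear form and all parameter values are illustrative assumptions.

def pitch_hz(target_pos_deg: float,
             fixation_dir_deg: float,
             gaze_centered: bool,
             base_hz: float = 440.0,       # assumed reference pitch
             hz_per_deg: float = 20.0) -> float:  # assumed slope
    """Map a horizontal target position (deg) to a tone frequency (Hz)."""
    if gaze_centered:
        # Pitch depends on target position relative to the fixation direction.
        pos = target_pos_deg - fixation_dir_deg
    else:
        # Pitch depends on target position on the screen (world-fixed).
        pos = target_pos_deg
    return base_hz + hz_per_deg * pos

# The same screen position yields different pitches in the gaze-centered
# condition when fixation changes, but not in the screen-centered condition.
print(pitch_hz(5.0, 0.0, gaze_centered=True))    # 540.0
print(pitch_hz(5.0, 10.0, gaze_centered=True))   # 340.0
print(pitch_hz(5.0, 10.0, gaze_centered=False))  # 540.0
```

Under this kind of mapping, the two conditions dissociate only when fixation direction varies, which is why fixation direction was manipulated in the experiment.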

Reh, T., Dessing, J. C., Crawford, J. D., & Bremmer, F. (2010). Learning arbitrary visuoauditory mappings during interception of moving targets [Abstract]. Journal of Vision, 10(7):882, 882a, http://www.journalofvision.org/content/10/7/882, doi:10.1167/10.7.882.
Footnotes
 CIHR, NSERC-CREATE.