Vision Sciences Society Annual Meeting Abstract | June 2007
Learning in image-guided reaching changes the representation-to-action mapping
Author Affiliations
  • Bing Wu
    Department of Psychology, Carnegie Mellon University, and Robotics Institute, Carnegie Mellon University
  • Roberta Klatzky
    Department of Psychology, Carnegie Mellon University, and Human-Computer Interaction Institute, Carnegie Mellon University
  • Damion Shelton
    Robotics Institute, Carnegie Mellon University
  • George Stetten
    Robotics Institute, Carnegie Mellon University, Human-Computer Interaction Institute, Carnegie Mellon University, and Department of BioEngineering, University of Pittsburgh
Journal of Vision June 2007, Vol. 7, 168. https://doi.org/10.1167/7.9.168
Abstract

In training trials, subjects reached with a needle to a hidden target in near space using ultrasound guidance. Systematic errors were initially introduced by displacing the ultrasound sensor in depth relative to the reaching point. With practice, subjects learned to accommodate to the displacement. In theory, the improvement could be accomplished by (i) adjustment of the target-location representation, (ii) learning of specific motor responses, or (iii) establishment of a new representation-to-motor mapping. Experiment 1 assessed the first hypothesis. Using a visual matching paradigm, subjects' judgments of target location were measured before and after training. Although reaching accuracy improved with training, there was no corresponding change in visual matching. Moreover, training did not generalize fully to a new reaching point, as would be expected if learning corresponded entirely to refinement of the target-location representation. The results thus gave little support to this hypothesis. In Experiment 2, subjects' motor responses were measured before and after training using a task of aligning a hand-held stylus with a visible line. Again, needle guidance improved with training, but there was no corresponding change in hand alignment, contradicting the motor-learning hypothesis. Experiment 3 assessed whether subjects learned a general mapping from representation to action. After training as before, the ultrasound sensor was moved up to the same plane as the reaching entry point, shifting the target position in the image. Generalized remapping predicts that the previously learned motor compensation should be imposed on the new location representation, producing a corresponding error. As predicted, the initial error in subjects' insertion response closely matched the correction learned during training. The remapping that resulted from cognitive correction resembles the perceptual effects of prism goggles, where feedback from reaching to one spatial location generalizes to others with a corresponding distortion magnitude.
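The prediction tested in Experiment 3 can be made concrete with a small numerical sketch. The Python snippet below is illustrative only and is not from the paper: the sensor offset, target depth, sign conventions, and function names (image_depth, reach) are all hypothetical. Under the remapping hypothesis, the correction learned during training stays attached to the image-to-action mapping, so once the sensor is moved to the entry plane the reach should miss by roughly the amount of that learned correction.

    # Toy model of the remapping prediction (illustrative; all numbers hypothetical).

    SENSOR_OFFSET_CM = 2.0       # assumed depth displacement of the sensor during training
    TRUE_TARGET_DEPTH_CM = 10.0  # assumed true target depth below the entry point

    def image_depth(true_depth, sensor_offset):
        """Depth read off the ultrasound image; the sensor offset biases it."""
        return true_depth - sensor_offset

    def reach(image_reading, mapping_correction):
        """Motor response: insert to the imaged depth, adjusted by whatever
        correction has been folded into the representation-to-action mapping."""
        return image_reading + mapping_correction

    # Training: subjects learn a correction that cancels the sensor offset.
    learned_correction = SENSOR_OFFSET_CM

    # Experiment 3: the sensor sits at the entry plane, so the image reading is
    # unbiased, but (under the remapping hypothesis) the learned correction is
    # still applied to it.
    unbiased_reading = image_depth(TRUE_TARGET_DEPTH_CM, sensor_offset=0.0)
    predicted_reach = reach(unbiased_reading, learned_correction)
    predicted_error = predicted_reach - TRUE_TARGET_DEPTH_CM

    print(f"Predicted insertion error: {predicted_error:+.1f} cm "
          f"(equal to the correction learned during training)")

On these assumed numbers the predicted error equals the learned correction (+2.0 cm), which is the pattern the abstract reports for subjects' initial insertion responses in Experiment 3.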

Wu, B., Klatzky, R., Shelton, D., & Stetten, G. (2007). Learning in image-guided reaching changes the representation-to-action mapping [Abstract]. Journal of Vision, 7(9):168, 168a, http://journalofvision.org/7/9/168/, doi:10.1167/7.9.168.
Footnotes
 Supported by grants from NIH (#R01-EB000860) and NSF (0308096)