October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract | October 2020
From reaching to walking: How we build robust spatial representations for visually guided actions
Author Affiliations & Notes
  • Harun Karimpur
    Experimental Psychology, Justus Liebig University Giessen
    Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen
  • Johannes Kurz
    NemoLab - Neuromotor Behavior Laboratory, Justus Liebig University Giessen
  • Katja Fiehler
    Experimental Psychology, Justus Liebig University Giessen
    Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen
  • Footnotes
    Acknowledgements: International Research Training Group (IRTG) 1901 “The brain in action” funded by the German Research Foundation (DFG), and the DFG grant FI 1567/6-1 TAO “The active observer” awarded to KF. A dissertation fellowship of the German Academic Scholarship Foundation (Studienstiftung des deutschen Volkes) was awarded to HK.
Journal of Vision October 2020, Vol.20, 359. doi:https://doi.org/10.1167/jov.20.11.359
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

When reaching for an object, we make use of spatial representations that our brain builds continuously within a fraction of a second. Decades of research have established that we encode action targets not only relative to ourselves, egocentrically, but also relative to other objects in the scene, allocentrically. The vast majority of experiments relied on static scenes in which participants memorized object locations and were then asked to reach or point to a cued object location from memory. Little is known about whether these findings generalize to larger, dynamic environments. To test this, we created virtual reality experiments in which participants faced a throw-in situation, as in soccer. The task was to memorize the landing position of a ball thrown by an avatar. The ball was thrown to land either closer to the avatar or closer to the participant. After the ball had landed, we removed scene elements that could serve as landmarks (i.e., the avatar and the midfield line) before they reappeared for a short time, either laterally shifted or not. Participants were then prompted to take a real ball, walk to the memorized landing position, and place the ball there. Use of landmarks should be reflected in a systematic bias of the reproduced landing position in the direction of the landmark shift. This is what we found, consistent with previous findings from classic reaching and pointing experiments. Moreover, the relative weighting of the two landmarks differed depending on whether the ball landed closer to the avatar or to the participant. Our findings suggest that the brain builds robust spatial representations that can be used across environments of different sizes, different response modes, and both static and dynamic target objects.
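
The landmark-shift logic described above is commonly quantified as an allocentric weight: the fraction of the landmark shift that reappears in the reproduced landing position (0 would indicate purely egocentric coding, 1 would indicate responses that follow the landmark completely). Below is a minimal Python sketch of that computation; the variable names and numbers are hypothetical illustrations and are not data or analysis code from this study.

```python
# Illustrative sketch of how a landmark-shift paradigm is typically quantified.
# All values below are made up for demonstration purposes.
import numpy as np

# Lateral shift applied to a landmark between encoding and response (in meters).
landmark_shift = 0.5

# Hypothetical lateral errors of the reproduced ball positions relative to the
# true landing position, for trials with a shifted vs. an unshifted landmark.
errors_shifted = np.array([0.18, 0.22, 0.15, 0.25, 0.20])
errors_unshifted = np.array([0.02, -0.01, 0.03, 0.00, 0.01])

# Allocentric weight: proportion of the landmark shift reflected in the
# response bias (difference between shifted and unshifted conditions).
allocentric_weight = (errors_shifted.mean() - errors_unshifted.mean()) / landmark_shift
print(f"allocentric weight: {allocentric_weight:.2f}")  # ~0.38 for these numbers
```

Comparing such weights across landmarks (e.g., avatar vs. midfield line) and ball landing positions is one way the weighting differences reported above could be expressed.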
