Journal of Vision, Volume 16, Issue 12 (September 2016) — Open Access
Vision Sciences Society Annual Meeting Abstract
Allocentric and egocentric contribution to manual interception by moving actors.
Author Affiliations
  • Florian Perdreau
    Donders Institute for Brain, Cognition & Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
  • Robert van Beers
    Donders Institute for Brain, Cognition & Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
  • Pieter Medendorp
    Donders Institute for Brain, Cognition & Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
Journal of Vision September 2016, Vol.16, 1199. doi:10.1167/16.12.1199
Abstract

Previous studies suggest that the brain combines egocentric and allocentric cues to estimate the location of objects in the world. It remains unclear, however, how the brain combines these cues when immediately acting upon objects in dynamic environments. For example, intercepting a moving object while we ourselves are moving requires predicting the object's future location while compensating for our own displacement. In this situation, allocentric information could improve the estimate of the object's location, as long as that information is reliable. To test this hypothesis, we designed an interception task in virtual reality. While being translated on a vestibular motion platform, participants had to intercept a virtual ball (the target) moving in 3D with a virtual paddle, controlled via a linear guide, as soon as they received an auditory cue (the response signal). The target was presented either in isolation ("target only") or surrounded by two other balls (landmarks) moving along a similar trajectory. The target disappeared 250 ms before the landmarks, which were removed at the response signal. We manipulated the landmarks' reliability by varying the spatial variance of their trajectories. Both with and without self-motion, increasing the landmarks' variability increased reaching error and variability compared to the "target only" condition, whereas noiseless landmarks reduced reaching error and variability relative to that condition. Our results show that, while performing an interception task, the brain integrates allocentric with egocentric information to predict the object's position, even when doing so yields a noisier estimate. These results may be accounted for by a Bayesian model that combines predictions about the target location, based on its last observation, with the observed dynamics of the landmarks.
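The Bayesian model mentioned above presumably rests on precision-weighted cue combination: each Gaussian cue is weighted by its inverse variance, so a less reliable landmark cue pulls the combined estimate less. The sketch below illustrates that standard building block only; the function name, variable names, and example values are illustrative assumptions, not the authors' implementation.

```python
def combine_cues(mu_ego, var_ego, mu_allo, var_allo):
    """Precision-weighted (Bayesian) combination of two Gaussian cues.

    mu_ego, var_ego   -- mean and variance of the egocentric prediction
    mu_allo, var_allo -- mean and variance of the allocentric (landmark) cue
    Returns the mean and variance of the combined Gaussian estimate.
    """
    # Each cue's weight is its precision (1/variance), normalized.
    w_ego = (1.0 / var_ego) / (1.0 / var_ego + 1.0 / var_allo)
    mu = w_ego * mu_ego + (1.0 - w_ego) * mu_allo
    # Combined precision is the sum of the individual precisions.
    var = 1.0 / (1.0 / var_ego + 1.0 / var_allo)
    return mu, var

# Illustrative numbers: a vague egocentric prediction (variance 4.0)
# and a sharper landmark cue (variance 1.0).
mu, var = combine_cues(mu_ego=1.0, var_ego=4.0, mu_allo=3.0, var_allo=1.0)
# -> mu = 2.6 (pulled toward the more reliable cue), var = 0.8
```

Note that optimal integration in this scheme never increases variance below either single cue, so the finding that noisy landmarks increased reaching error suggests integration here is mandatory rather than fully optimal, as the abstract's closing sentence implies.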

Meeting abstract presented at VSS 2016
