Abstract
Previous studies suggest that the brain combines egocentric and allocentric cues to estimate the location of objects in the world. It remains unclear, however, how the brain combines these cues when we must act immediately upon objects in dynamic environments. For example, intercepting a moving object while we ourselves are moving requires predicting the object's future location by compensating for our own displacement. In this situation, allocentric information could improve the estimate as long as it provides reliable cues about the object's location. To test this hypothesis, we designed an interception task in virtual reality. While being moved on a vestibular motion platform, participants had to intercept a virtual ball (target) moving in 3D with a virtual paddle that they controlled with a linear guide, initiating their response as soon as they received an auditory cue (response signal). The target was presented either in isolation ("target only") or surrounded by two other balls (landmarks) moving along a similar trajectory. The target disappeared 250 ms before the landmarks, which were removed at the response signal. We manipulated the landmarks' reliability by varying the spatial variance of their trajectory. Both with and without self-motion, increasing the landmarks' variability increased reaching error and variability relative to the "target only" condition, whereas "noiseless" landmarks reduced both. Our results show that, while performing an interception task, the brain does integrate allocentric with egocentric information to predict the object's position, even at the cost of a noisier estimate. These results may be accounted for by a Bayesian model that combines a prediction of the target location based on its last observation with the actual observation of the landmarks' dynamics.
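As an illustrative sketch only (the abstract does not specify the model's form), the standard reliability-weighted combination of Gaussian cues captures the kind of Bayesian integration invoked here. Writing $\hat{x}_{\mathrm{ego}}$ for the egocentric prediction of the target's position (variance $\sigma^2_{\mathrm{ego}}$, inflated by uncertainty in compensating for self-motion) and $\hat{x}_{\mathrm{allo}}$ for the allocentric estimate derived from the landmarks (variance $\sigma^2_{\mathrm{allo}}$, growing with the landmarks' spatial variability), the combined estimate would be

\[
\hat{x} = w\,\hat{x}_{\mathrm{allo}} + (1 - w)\,\hat{x}_{\mathrm{ego}},
\qquad
w = \frac{1/\sigma^2_{\mathrm{allo}}}{1/\sigma^2_{\mathrm{allo}} + 1/\sigma^2_{\mathrm{ego}}},
\qquad
\sigma^2_{\hat{x}} = \frac{\sigma^2_{\mathrm{allo}}\,\sigma^2_{\mathrm{ego}}}{\sigma^2_{\mathrm{allo}} + \sigma^2_{\mathrm{ego}}}.
\]

Under fully optimal weighting, even very noisy landmarks could not raise $\sigma^2_{\hat{x}}$ above $\sigma^2_{\mathrm{ego}}$; the increased error with variable landmarks reported above would instead be consistent with a weight $w$ that is not fully adjusted to the landmarks' actual variance, which is one way to read "even at the cost of a noisier estimate".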
Meeting abstract presented at VSS 2016