Abstract
Catching a ball on the fly usually requires two distinct phases: a locomotion phase towards the interception area and a manual interception phase. An early prediction of the interception location and time would allow the actor to plan the action, compensating for problems such as sensorimotor delays or the variability in the visual information caused by the observer's own movement. Previous literature suggests that predictive mechanisms rely on internalized knowledge of environmental constants, such as terrestrial gravitational acceleration or a ball's known size, to interpret visual information. However, relying on constants has the downside of producing consistent errors when the task-relevant variables do not match the expected ones (e.g., under microgravity conditions). This study tested whether catching a ball in flight is consistent with using priors for terrestrial gravitational acceleration and the standard size of a known ball. To do so, we exposed participants (N = 11) to different parabolic paths in a naturalistic virtual environment presented through a head-mounted display (HTC Vive, 90 Hz). Different conditions of gravitational acceleration (9.807 m/s^2 ± 10%) and soccer-ball size (0.22 m diameter ± 10%) were presented. We asked participants to move as if they were to hit the ball with their head. At 90% of the flight time, the ball was occluded from view, and participants then had to judge the time of contact with the ball using a controller while continuing to move towards the interception location. We found that the different gravitational accelerations affected the trajectories traveled, whereas different ball sizes did not. Moreover, gravity and ball size both influenced the judged contact time, which is consistent with the use, in the final phase, of an underlying model that encapsulates gravitational acceleration and known ball size.
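For illustration only, the sketch below (not the authors' code) shows how the drag-free parabolic flight conditions described above could be generated. The gravity levels (9.807 m/s^2 ± 10%), ball diameters (0.22 m ± 10%), the 90% occlusion point, and the 90 Hz frame rate come from the abstract; the launch speed and angle are hypothetical placeholders.

```python
import numpy as np

G_LEVELS = [0.9 * 9.807, 9.807, 1.1 * 9.807]   # gravitational accelerations, m/s^2 (from the abstract)
DIAMETERS = [0.9 * 0.22, 0.22, 1.1 * 0.22]     # rendered ball diameters, m (visual cue only; no effect on kinematics)
LAUNCH_SPEED = 10.0                            # m/s, hypothetical
LAUNCH_ANGLE = np.deg2rad(60.0)                # hypothetical

def parabola(g, dt=1 / 90):                    # 90 Hz display rate
    """Drag-free parabolic trajectory and the time at which the ball is occluded."""
    vx = LAUNCH_SPEED * np.cos(LAUNCH_ANGLE)
    vy = LAUNCH_SPEED * np.sin(LAUNCH_ANGLE)
    flight_time = 2 * vy / g                   # time to return to launch height
    t = np.arange(0, flight_time, dt)
    x = vx * t
    y = vy * t - 0.5 * g * t ** 2
    occlusion_time = 0.9 * flight_time         # ball hidden for the last 10% of the flight
    return t, x, y, occlusion_time

for g in G_LEVELS:
    *_, t_occ = parabola(g)
    print(f"g = {g:.3f} m/s^2 -> occlusion at {t_occ:.3f} s")
```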