Abstract
There is considerable evidence for internal models of the body's dynamics in the control of movement. However, internal models of the environment are less well established. Especially under time constraints, it is advantageous to exploit properties of the environment, extracted from visual cues, that allow prediction of future states of the world and planning of movements in anticipation of those events.
Subjects were immersed in a virtual scene and hit virtual balls directed towards them with a table tennis paddle, receiving vibrotactile feedback on contact and auditory feedback from the bounce. This setup makes it possible to control the trajectories, the physical properties of the ball and world, and their uncertainties. There were four conditions: (1) the ball had a controlled spatial distribution of bounce points; (2) the bounce-point distribution varied with ball color; (3) subjects were rewarded differentially depending on where the ball bounced; (4) the physical properties of the balls and environment varied on each trial, including physically impossible trajectories.
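The control over trajectories described above can be illustrated with a minimal ballistic-flight sketch. This is not the authors' rendering code; the parameter values (gravity, restitution) are illustrative assumptions. A ball launched toward the subject follows projectile motion until it reaches the table plane (y = 0), then rebounds with its vertical velocity scaled by a coefficient of restitution; varying these parameters changes where the ball bounces and whether its flight looks physically plausible.

```python
import math

G = 9.81  # gravity in m/s^2 (assumed value; could be varied per trial)

def bounce_point(p0, v0, g=G):
    """Return (time, x, z) of first contact with the table plane y = 0,
    for launch position p0 = (x, y, z) and velocity v0 = (vx, vy, vz)."""
    x0, y0, z0 = p0
    vx, vy, vz = v0
    # Solve y0 + vy*t - 0.5*g*t^2 = 0 for the positive root.
    t = (vy + math.sqrt(vy * vy + 2.0 * g * y0)) / g
    return t, x0 + vx * t, z0 + vz * t

def rebound(v_impact, e=0.8):
    """Reflect the vertical velocity at the bounce, scaled by an assumed
    coefficient of restitution e; e > 1 would be physically impossible."""
    vx, vy, vz = v_impact
    return (vx, -e * vy, vz)
```

Sampling the launch velocity from a distribution (or conditioning it on ball color) yields the controlled bounce-point distributions used in conditions 1 and 2.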
We recorded eye, head, and hand movements while subjects caught balls thrown with a bounce. Subjects fixated the initial position of the ball, then saccaded to the anticipated bounce point, and then pursued the ball until it was close to the rendered position of the paddle. Fixation distributions varied with ball color, and balls were more difficult to catch in the condition with physically impossible trajectories. The data further suggest that observers maintain an internal model of the dynamic properties of the world, and adjust this model when errors occur.
Supported by NIH grants EY05729 and RR09283.