Abstract
Humans walk to moving targets by turning onto a straight interception path that maintains a constant target-heading angle. Warren & Fajen (2004) proposed a dynamical model of interception based on first-order information about target motion, which nulls change in the target-heading angle. The model successfully reproduces human paths to constant-velocity targets, as well as to targets with accelerating or curved trajectories. Accelerating and curved trajectories provide a strong test of the model, for it predicts continually curving paths that lag the target and do not anticipate its motion. These predictions were confirmed in experiments in which the target's trajectory was randomized on each trial. Here we test whether people can learn to anticipate target motion when the same trajectory is presented repeatedly. Participants walk to intercept virtual targets in the VENLab, a 12 m × 12 m virtual environment with a head-mounted display (60 deg H × 40 deg V) and a sonic/inertial tracking system (latency 50 ms). There are four blocks of 20 repeated trials. Each block presents one of four target trajectories tested previously: two straight trajectories with accelerations of 0.1 m/s/s and 0.15 m/s/s, and two curved trajectories with radii of 1.5 m (v = 0.9 m/s) and 2 m (v = 1.3 m/s). The initial direction of target motion is randomly leftward or rightward on each trial. The model predicts consistently lagging paths across trials in a block, with no learning to anticipate target motion. If learning occurs, we expect to see straighter, more direct paths to intercept the target.
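The lagging-path prediction can be illustrated with a minimal simulation of a constant-bearing controller that nulls change in the target-heading angle. The sketch below is not the published Warren & Fajen (2004) parameterization; the gains (b, k), walking speed, interception radius, and starting geometry are illustrative assumptions. Because the controller reacts only to first-order change in the angle, its path to an accelerating target curves continually rather than anticipating the target's motion.

```python
import numpy as np

# Minimal sketch of a constant-bearing ("null change in target-heading angle")
# controller. Gains, speeds, and geometry are illustrative assumptions, not
# the published Warren & Fajen (2004) model parameters.

def simulate_interception(dt=0.02, duration=12.0, walk_speed=1.2,
                          b=3.0, k=8.0, target_v0=0.6, target_accel=0.1):
    agent = np.array([0.0, 0.0])      # walker starts at the origin
    phi = np.pi / 2.0                 # heading (rad): straight ahead (+y)
    dphi = 0.0                        # current turning rate
    target = np.array([-3.0, 6.0])    # target starts ahead and to the left
    beta_prev = None
    path = [agent.copy()]
    for step in range(int(duration / dt)):
        t = step * dt
        # Target moves rightward along a straight line, accelerating.
        target = target + np.array([target_v0 + target_accel * t, 0.0]) * dt
        to_target = target - agent
        if np.linalg.norm(to_target) < 0.2:   # interception radius (assumed)
            break
        # Target-heading angle: direction to target relative to current heading.
        beta = np.arctan2(to_target[1], to_target[0]) - phi
        beta = np.arctan2(np.sin(beta), np.cos(beta))   # wrap to [-pi, pi]
        dbeta = 0.0 if beta_prev is None else (beta - beta_prev) / dt
        beta_prev = beta
        # Damped heading dynamics driven by the *change* in the angle:
        # turning adjusts until the target-heading angle stops changing.
        ddphi = -b * dphi + k * dbeta
        dphi += ddphi * dt
        phi += dphi * dt
        agent = agent + walk_speed * np.array([np.cos(phi), np.sin(phi)]) * dt
        path.append(agent.copy())
    return np.array(path)

if __name__ == "__main__":
    path = simulate_interception()
    print(f"{len(path)} steps; final position {path[-1].round(2)}")
```

Running the sketch with repeated identical trajectories always yields the same reactive, curving path, which is the model's no-learning baseline; straighter paths across repetitions would indicate anticipation that the model does not capture.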
NIH EY10923, NSF LIS IRI-9720327