Abstract
Many skilled locomotor tasks involve steering through cluttered environments at high speeds, moving smoothly from one waypoint to the next while avoiding obstacles. If actors treated each waypoint individually, they might be forced to make large trajectory adjustments, collide with obstacles, or miss waypoints altogether. Hence, performing such tasks with skill would seem to require actors to adapt their trajectory relative to the most immediate object in anticipation of future objects. To date, there have been relatively few empirical studies of such behavior, and the evidence from the studies that do exist is mixed. On the one hand, it is well established that humans shift gaze to future waypoints before reaching the upcoming waypoint, and such behavior is associated with smoother steering (Wilkie, Wann, & Allison, 2008). On the other hand, anticipatory steering behavior is not consistent across subjects, and improvements in steering smoothness are marginal. The present study was designed to address the need for a clearer understanding of how and under what conditions humans adapt their movements in anticipation of future goals. Subjects performed a simulated drone-flying task in a custom-designed, forest-like virtual environment presented on a monitor. Eye movements were tracked using a Pupil Core headset. Subjects were instructed to steer through a series of gates while the distance at which gates first became visible (i.e., look-ahead distance) was manipulated between trials. We found that performance degraded when subjects were unable to see the next gate until after they had passed through the previous gate. The findings suggest that steering performance does indeed improve when actors can see more than one waypoint at a time. Follow-up experiments explore the conditions under which such anticipatory steering behavior is exhibited. The findings inform the development of control strategies for steering through multiple waypoints.