Abstract
Despite years of research on collision avoidance in robotics, computer animation, and traffic engineering, there is still no biologically plausible model of how a human pedestrian avoids a moving obstacle. Most models take the physical 3D position and velocity of the obstacle as input, rather than the visual information available to a moving observer. As a pedestrian approaches a moving obstacle, a collision is specified by a constant bearing direction together with optical expansion of the obstacle. We developed a series of dynamical models of collision avoidance that use changes in bearing direction, visual angle, or distance, together with the participant's preferred walking speed, to modulate control laws for heading and speed. We fit the models to human data and tested their ability to predict route selection (passing ahead of or behind the obstacle) and the locomotor trajectory. The data came from a VR experiment in which participants (N = 15) walked to a goal 7 m away while avoiding an obstacle that moved on a linear trajectory at different angles (±70°, ±90°, ±100° to the participant's path) and speeds (0.4, 0.6, 0.8 m/s). Model parameters were fit to all data, and error was defined as the mean distance between the predicted and actual human positions. Behavioral Model 1 takes the derivatives of bearing direction and distance as input; Visual Model 4 takes the derivatives of bearing direction and visual angle as input. The mean error of Model 4 (M = 0.184 m, SD = 0.169 m) was significantly smaller than that of Model 1 (M = 0.195 m, SD = 0.172 m), t(1004) = 6.89, p < 0.001. Route selection accuracy was comparable (Model 4: 84.0% correct; Model 1: 83.6% correct). Together, the results show that a visual model based on optical information can capture collision avoidance at the level of individual trajectories better than a behavioral model based on physical variables.
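To make the optical specification concrete, let β denote the bearing direction of the obstacle and θ its visual angle (symbols introduced here for illustration; the abstract does not define notation). For an observer and obstacle each moving on a straight path, a collision course is indicated when the bearing holds constant while the image expands:
\[
\dot{\beta} \approx 0 \quad \text{and} \quad \dot{\theta} > 0 .
\]
A dynamical model of the kind described can then, schematically, drive the heading φ and speed s away from this state:
\[
\dot{\phi} = -k_{\phi}\, f(\dot{\beta}, \dot{\theta}), \qquad
\dot{s} = k_{s}\,(s_{\mathrm{pref}} - s) + g(\dot{\beta}, \dot{\theta}),
\]
where \(s_{\mathrm{pref}}\) is the preferred walking speed and \(f, g\) are coupling functions. This is a generic sketch of the model class, not the authors' fitted equations; on this reading, Visual Model 4 would couple to \(\dot{\beta}\) and \(\dot{\theta}\), while Behavioral Model 1 would couple to \(\dot{\beta}\) and \(\dot{d}\) for obstacle distance \(d\).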