Abstract
We have developed a dynamical systems model of visually guided locomotor behavior in which the directions of goals and obstacles behave like attractors and repellers of the agent's heading (Fajen & Warren, 2003; submitted; Cohen, Bruggeman, & Warren, VSS, 2005). The model has separate components for (a) steering to a stationary goal, (b) avoiding a stationary obstacle, (c) intercepting a moving target, and (d) avoiding a moving obstacle. The present question is whether the model is generative: can these components be linearly combined, with fixed parameters, to predict human locomotor paths in more complex environments? This talk presents an overview of recent experiments on human walking with combinations of stationary/moving targets and stationary/moving obstacles. The studies were performed in the Virtual Environment Navigation Lab, a 12 m × 12 m ambulatory virtual environment with a head-mounted display (60° H × 40° V) and a hybrid sonic/inertial tracking system (latency 50–70 ms). The model closely predicts both the quantitative paths and the qualitative switching points between paths observed in human behavior. The model thus appears to scale to increasingly complex dynamic environments with no free parameters. Ostensibly simpler steering models (Rushton, Wen, & Allison, 2002; Wilkie & Wann, 2003) do not simulate human paths as accurately without additional terms. Research frontiers include modeling complex natural behavior such as football, creating a virtual world in which people interact with model-driven avatars, and agent-based simulations of crowd behavior.
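The linear combination of components described above can be sketched numerically. The goal-attractor and obstacle-repeller terms below follow the general form of the Fajen and Warren (2003) heading dynamics, but all parameter values are illustrative placeholders, not the fitted values from the papers, and goal/obstacle distances are held fixed rather than updated as the agent moves:

```python
import math

# Illustrative parameters (NOT the fitted values from Fajen & Warren, 2003).
B = 3.25              # damping on turning rate
K_G = 7.5             # goal attractor stiffness
C1, C2 = 0.40, 0.40   # goal distance scaling
K_O = 198.0           # obstacle repeller strength
C3, C4 = 6.5, 0.8     # obstacle angular and distance decay rates

def heading_accel(phi, phi_dot, goals, obstacles):
    """Angular acceleration of heading: a linear sum of component terms.

    phi: current heading (rad); goals and obstacles are lists of
    (direction psi in rad, distance d in m) tuples.
    """
    acc = -B * phi_dot
    # Goal terms attract heading toward each goal direction.
    for psi_g, d_g in goals:
        acc -= K_G * (phi - psi_g) * (math.exp(-C1 * d_g) + C2)
    # Obstacle terms repel heading away, decaying with angle and distance.
    for psi_o, d_o in obstacles:
        acc += (K_O * (phi - psi_o)
                * math.exp(-C3 * abs(phi - psi_o))
                * math.exp(-C4 * d_o))
    return acc

def simulate(phi0, goals, obstacles, dt=0.01, steps=2000):
    """Euler-integrate the heading dynamics with static goal/obstacle layouts."""
    phi, phi_dot = phi0, 0.0
    for _ in range(steps):
        phi_dot += heading_accel(phi, phi_dot, goals, obstacles) * dt
        phi += phi_dot * dt
    return phi
```

In this sketch, a heading offset from a lone goal relaxes to the goal direction, while adding an obstacle near the goal direction deflects the equilibrium heading to the far side, which is the attractor/repeller behavior the model exploits when components are summed with fixed parameters.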