Abstract
In previous research, we developed an experiment-driven model of collective motion in human crowds (Warren, CDPS, in press). The behavioral dynamics model combines a local 'alignment' interaction, in which a pedestrian matches the speed and heading of a neighbor (Rio, Rhea, & Warren, 2014), with a neighborhood model, which computes a weighted average over multiple neighbors, with weights that decay exponentially with distance out to 4 m (Warren & Dachner, VSS 2017; cf. Cucker & Smale, 2007). In addition, we found that the weight decreases more gradually with the distance to the nearest neighbor, out to 11 m (Wirth, Warren, & Richmond, VSS 2016), forming a larger doughnut-shaped neighborhood. Here we explore the model in multi-agent simulations to determine the conditions under which it generates collective motion and to compare the simple-radius and doughnut neighborhoods. Thirty interacting agents, with human parameters, were simulated in each 20 s run, with synchronous updating. Their initial positions on a 5x6 grid were jittered, and initial conditions were parametrically varied: interpersonal distance (IPD = 1-10 m), heading range (±10° to ±90°), and speed range (±0.1 to ±0.9 m/s). There were 20 runs per condition, and the SDs of final heading and speed were measured. The model converges to coherent motion over a wide range of initial headings and speeds, but less reliably as IPD increases. In addition, the number k of clusters of agents tends to increase with variation in the initial conditions. Notably, the doughnut model converges over a larger range of conditions than the simple-radius model, providing a robust alternative to a 'topological' neighborhood that is not distance-dependent (Wirth & Warren, VSS 2018). Thus, the doughnut model generates collective motion that is robust to variation in initial conditions. We are currently comparing this physical model with a vision-based model driven by optical variables (Dachner & Warren, VSS 2017, 2018).
Meeting abstract presented at VSS 2018
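To make the simulation protocol concrete, the sketch below implements one possible reading of the model in Python. The exponential-decay and 'doughnut' kernels, the relaxation rate K_ALIGN, the decay constant, the jitter magnitude, and the time step are illustrative assumptions; only the counts (30 agents on a jittered 5x6 grid, 20 s runs, synchronous updating) and the convergence measures (SDs of final heading and speed) come from the abstract.

    import numpy as np

    # Illustrative constants; values are assumptions unless noted.
    N_AGENTS = 30      # 30 agents (abstract)
    DT = 0.05          # integration time step, s (assumption)
    T_RUN = 20.0       # 20 s run (abstract)
    RADIUS = 4.0       # simple-radius cutoff, m (abstract)
    DECAY = 1.0        # exponential decay constant, m (assumption)
    K_ALIGN = 1.0      # relaxation rate toward neighborhood average (assumption)

    def neighborhood_weights(dists):
        # Simple-radius kernel: weights decay exponentially with distance,
        # truncated at 4 m.
        w = np.exp(-dists / DECAY)
        w[dists > RADIUS] = 0.0
        return w

    def doughnut_weights(dists):
        # 'Doughnut' kernel sketch: decay is scaled by the distance to the
        # nearest neighbor, so weight falls off more gradually, out to 11 m
        # (functional form is an assumption).
        nn = dists.min(axis=1, keepdims=True)
        w = np.exp(-dists / np.maximum(nn, DECAY))
        w[dists > 11.0] = 0.0
        return w

    def simulate(ipd=2.0, heading_range=np.radians(30.0), speed_range=0.3,
                 weights=neighborhood_weights, seed=0):
        rng = np.random.default_rng(seed)
        # Jittered 5x6 grid of initial positions (abstract); jitter size assumed.
        gx, gy = np.meshgrid(np.arange(5.0), np.arange(6.0))
        pos = np.column_stack([gx.ravel(), gy.ravel()]) * ipd
        pos += rng.uniform(-0.15, 0.15, pos.shape) * ipd
        heading = rng.uniform(-heading_range, heading_range, N_AGENTS)
        speed = 1.3 + rng.uniform(-speed_range, speed_range, N_AGENTS)  # ~human walking speed

        for _ in range(int(T_RUN / DT)):             # synchronous updating (abstract)
            diffs = pos[None, :, :] - pos[:, None, :]
            dists = np.linalg.norm(diffs, axis=-1)
            np.fill_diagonal(dists, np.inf)          # exclude self
            w = weights(dists)
            wsum = w.sum(axis=1, keepdims=True)
            w = np.divide(w, wsum, out=np.zeros_like(w), where=wsum > 0)
            has_nbrs = wsum.ravel() > 0
            # Alignment: relax toward the weighted-average heading and speed.
            mean_head = np.arctan2(w @ np.sin(heading), w @ np.cos(heading))
            dhead = np.where(has_nbrs,
                             np.angle(np.exp(1j * (mean_head - heading))), 0.0)
            heading = heading + K_ALIGN * dhead * DT
            speed = speed + K_ALIGN * np.where(has_nbrs, w @ speed - speed, 0.0) * DT
            pos = pos + speed[:, None] * np.column_stack(
                [np.cos(heading), np.sin(heading)]) * DT

        # Convergence measures: SD of final heading (wrapped to +/- pi) and speed.
        return np.std(np.angle(np.exp(1j * heading))), np.std(speed)

    print(simulate())                           # simple-radius neighborhood
    print(simulate(weights=doughnut_weights))   # doughnut neighborhood

Under this reading, sweeping ipd, heading_range, and speed_range over the abstract's ranges, with 20 seeded runs per condition, reproduces the reported design; swapping neighborhood_weights for doughnut_weights compares the two neighborhoods on matched initial conditions.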