Abstract
Collective behavior in human crowds emerges from the local interactions between individual pedestrians. Previously, we found that people generate collective motion by 'following' their neighbors: specifically, by aligning their velocity vector with a weighted average of the physical velocities in a neighborhood (Warren & Dachner, VSS 2017; Warren, CDPS, in press). Here we present a vision-based model of this alignment behavior. Dachner & Warren (VSS 2016, 2017) showed that a participant follows a single leader by nulling the leader's optical expansion and angular velocity, depending upon the leader's visual direction. They simulated the data using a dynamical model that takes only these optical variables as input. We now use this vision-based model to simulate human data on following a virtual crowd (from Rio et al., 2014). Participants (N = 10) were instructed to 'walk with a crowd' of twelve virtual neighbors for 10 m while wearing an Oculus DK1 HMD. A subset of neighbors (0, 3, 6, 9, or 12) changed speed (±0.3 m/s) or direction (±10 degrees) on each trial, and each participant's trajectory was recorded at 60 Hz. The data were simulated using the vision-based model and compared with results from our earlier physical model, with fixed parameters. The RMS heading error between model and participant was significantly lower for the vision-based model (4.1 degrees) than for the physical model (4.9 degrees), t(9) = 3.35, p < .01. These results suggest that optical variables govern following in a crowd and can explain previously observed effects of neighbor distance (Rio et al., 2014) as a consequence of the laws of perspective. Most crowd models are based on physical variables rather than visual information. We conclude that the vision-based model better simulates following in a crowd, and it is this visual coupling between pedestrians that generates collective motion.
Meeting abstract presented at VSS 2018
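
As a rough illustration of the two coupling rules compared above, the sketch below implements a toy distance-weighted alignment rule (the physical model), an assumed form of the expansion/angular-velocity nulling rule (the vision-based model), and the RMS heading-error metric used to compare model and participant trajectories. The gain parameters, exponential distance weighting, small-angle expansion term, and simple neighbor averaging are illustrative assumptions, not the authors' fitted model.

```python
# Illustrative sketch only: toy versions of the two coupling rules described in
# the abstract, plus the RMS heading-error metric. Parameter names and the
# exact functional forms are assumptions for illustration.
import numpy as np

def physical_alignment(pos, heading, speed, nbr_pos, nbr_vel,
                       k_heading=1.0, k_speed=1.0, decay=1.0):
    """Physical model: align heading and speed with a distance-weighted average
    of neighbors' velocities (weights assumed to decay exponentially)."""
    d = np.linalg.norm(nbr_pos - pos, axis=1)          # distances to neighbors
    w = np.exp(-decay * d)
    w /= w.sum()
    nbr_heading = np.arctan2(nbr_vel[:, 1], nbr_vel[:, 0])
    nbr_speed = np.linalg.norm(nbr_vel, axis=1)
    # turn toward the weighted mean neighbor heading; accelerate toward the
    # weighted mean neighbor speed
    dheading = k_heading * np.sum(w * np.sin(nbr_heading - heading))
    dspeed = k_speed * np.sum(w * (nbr_speed - speed))
    return dheading, dspeed

def vision_based(pos, vel, nbr_pos, nbr_vel,
                 k_turn=1.0, k_acc=1.0, body_width=0.5):
    """Vision-based model (assumed form): null each neighbor's optical
    expansion (rate of change of visual angle) and angular velocity
    (rate of change of bearing direction)."""
    rel_pos = nbr_pos - pos
    rel_vel = nbr_vel - vel
    d = np.linalg.norm(rel_pos, axis=1)
    # angular velocity: rate of change of each neighbor's bearing direction
    bearing_dot = (rel_pos[:, 0] * rel_vel[:, 1] -
                   rel_pos[:, 1] * rel_vel[:, 0]) / d**2
    # optical expansion: rate of change of the visual angle subtended by a
    # neighbor of fixed physical width (small-angle approximation)
    d_dot = np.sum(rel_pos * rel_vel, axis=1) / d
    expansion = -body_width * d_dot / d**2
    # null expansion by changing speed, null angular velocity by turning;
    # plain averaging here, whereas the real model weights by visual direction
    dspeed = -k_acc * expansion.mean()
    dheading = k_turn * bearing_dot.mean()
    return dheading, dspeed

def rms_heading_error(model_heading, participant_heading):
    """RMS error (degrees) between model and participant heading time series
    (both in radians, sampled at the same rate, e.g. 60 Hz)."""
    err = np.unwrap(model_heading) - np.unwrap(participant_heading)
    return np.degrees(np.sqrt(np.mean(err**2)))
```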