Journal of Vision
September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2018
A vision-based model of following in a human crowd
Author Affiliations
  • Gregory Dachner
    Brown University
  • William Warren
    Brown University
Journal of Vision September 2018, Vol.18, 1037. doi:https://doi.org/10.1167/18.10.1037

Gregory Dachner, William Warren; A vision-based model of following in a human crowd. Journal of Vision 2018;18(10):1037. https://doi.org/10.1167/18.10.1037.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Collective behavior in human crowds emerges from the local interactions between individual pedestrians. Previously, we found that people generate collective motion by 'following' their neighbors, specifically, by aligning their velocity vector with a weighted average of physical velocities in a neighborhood (Warren & Dachner, VSS 2017; Warren, CDPS, in press). Here we present a vision-based model of this alignment behavior. Dachner & Warren (VSS 2016, 2017) showed that a participant follows a single leader by nulling the leader's optical expansion and angular velocity, depending upon the leader's visual direction. They simulated the data using a dynamical model that takes only these optical variables as input. We now use this vision-based model to simulate human data on following a virtual crowd (from Rio et al., 2014). Participants (N=10) were instructed to 'walk with a crowd' of twelve virtual neighbors for 10m while wearing an Oculus DK1 HMD. A subset of neighbors (0, 3, 6, 9, or 12) changed speed (±0.3 m/s) or direction (±10 degrees) on each trial, and the participant's trajectory was recorded at 60 Hz. The data were simulated using the vision-based model and compared with results from our earlier physical model, with fixed parameters. The RMS heading error between model and participant was significantly lower for the vision-based model (4.1 degrees) than for the physical model (4.9 degrees), t(9) = 3.35, p < .01. These results suggest that optical variables govern following in a crowd, which can explain previously observed effects of neighbor distance (Rio et al., 2014) as a consequence of the laws of perspective. Most crowd models are based on physical variables, not visual information. We conclude that the vision-based model better simulates following in a crowd, and it is this visual coupling between pedestrians that generates collective motion.
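The class of model described above can be sketched in a few lines. This is purely an illustrative toy, not the authors' published equations: the function names, the uniform neighbor weighting (the actual model weights neighbors by visual direction and distance), the assumed body width, and the gain parameters are all assumptions introduced here. It shows the core idea of steering from optical variables alone: turn to cancel the neighbors' mean angular velocity (bearing drift), and change speed to cancel their mean optical expansion (change in angular size).

```python
import math

def angular_size(dist, width=0.5):
    # Visual angle subtended by a neighbor of assumed physical
    # width `width` (meters) at distance `dist` (meters).
    return 2.0 * math.atan2(width / 2.0, dist)

def control(follower_pos, neighbor_tracks, dt, k_heading=1.0, k_speed=1.0):
    """Toy vision-based following controller (illustrative only).

    follower_pos: (x, y) of the follower.
    neighbor_tracks: list of ((x0, y0), (x1, y1)) neighbor positions
        at times t-dt and t.
    Returns (heading_change_rate, speed_change_rate) commands that
    null the mean angular velocity and mean optical expansion.
    """
    fx, fy = follower_pos
    dpsi_sum = dexp_sum = wsum = 0.0
    for (x0, y0), (x1, y1) in neighbor_tracks:
        psi0 = math.atan2(y0 - fy, x0 - fx)   # bearing at t-dt
        psi1 = math.atan2(y1 - fy, x1 - fx)   # bearing at t
        d0 = math.hypot(x0 - fx, y0 - fy)
        d1 = math.hypot(x1 - fx, y1 - fy)
        dpsi = (psi1 - psi0) / dt             # angular velocity (rad/s)
        dexp = (angular_size(d1) - angular_size(d0)) / dt  # expansion rate
        w = 1.0  # uniform weight for simplicity; the real model is not uniform
        dpsi_sum += w * dpsi
        dexp_sum += w * dexp
        wsum += w
    # Turn against the mean bearing drift; if neighbors optically expand
    # (are getting closer), slow down; if they contract, speed up.
    return (-k_heading * dpsi_sum / wsum, -k_speed * dexp_sum / wsum)
```

For example, a neighbor dead ahead that moves away produces optical contraction and no bearing change, so this controller commands zero turning and a positive speed change. Distance never enters the control law directly; its observed effects fall out of perspective, since nearer neighbors produce larger optical expansion and angular velocity for the same physical motion.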

Meeting abstract presented at VSS 2018
