Vision Sciences Society Annual Meeting Abstract  |   September 2021
Visual Models of Collision Avoidance with Moving Obstacles
Author Affiliations & Notes
  • Jiuyang Bai
    Brown University
  • William Warren
    Brown University
  • Footnotes
    Acknowledgements  Funding: NIH R01EY029745
Journal of Vision September 2021, Vol.21, 2596. doi:https://doi.org/10.1167/jov.21.9.2596
Citation: Jiuyang Bai, William Warren; Visual Models of Collision Avoidance with Moving Obstacles. Journal of Vision 2021;21(9):2596. https://doi.org/10.1167/jov.21.9.2596.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Despite years of research on collision avoidance in robotics, computer animation, and traffic engineering, there is still no biologically plausible model of how a human pedestrian avoids a moving obstacle. Most models take the physical 3D position and velocity of the obstacle as input, rather than the visual information available to a moving observer. As a pedestrian approaches a moving obstacle, a collision is specified by a constant bearing direction together with optical expansion of the obstacle. We developed a series of dynamical models of collision avoidance that use changes in bearing direction, visual angle, or distance, together with the participant’s preferred walking speed, to modulate control laws for heading and speed. We fit the models to human data and attempted to predict route selection (ahead of or behind the obstacle) and the locomotor trajectory. The data came from a VR experiment in which participants (N=15) walked to a goal 7 m away while avoiding an obstacle moving on a linear trajectory at different angles (±70°, ±90°, ±100° relative to the participant’s path) and speeds (0.4, 0.6, 0.8 m/s). Model parameters were fit to all the data. Error was defined as the mean distance between the predicted and actual human positions. Behavioral Model 1 takes the derivatives of bearing direction and distance as inputs; Visual Model 4 takes the derivatives of bearing direction and visual angle as inputs. The mean error of Model 4 (M=0.184 m, SD=0.169) was significantly smaller than that of Model 1 (M=0.195 m, SD=0.172), t(1004)=6.89, p < 0.001. Route selection accuracy was comparable (Model 4: 84.0% correct; Model 1: 83.6% correct). Together, the results show that a visual model based on optical information can capture collision avoidance at the level of individual trajectories better than a behavioral model based on physical variables.
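The abstract does not give the model equations, so the following minimal simulation sketch only illustrates the kind of control law described for Visual Model 4: heading and speed are modulated by the rate of change of bearing direction and of visual angle, relative to a preferred walking speed. Everything in the Python/NumPy code below is an assumption made for illustration; the form of the avoidance term, the gains (k_goal, k_avoid, k_speed), the time step, and the obstacle radius are hypothetical and are not the authors' fitted model.

import numpy as np

DT = 0.05            # integration step (s); assumed
PREF_SPEED = 1.2     # preferred walking speed (m/s); assumed
OBST_RADIUS = 0.3    # obstacle radius (m); assumed

def wrap(a):
    # Wrap an angle to (-pi, pi].
    return np.arctan2(np.sin(a), np.cos(a))

def bearing(p_agent, p_obst):
    # Bearing direction of the obstacle from the agent (rad, world frame).
    d = p_obst - p_agent
    return np.arctan2(d[1], d[0])

def visual_angle(p_agent, p_obst):
    # Optical (angular) size of the obstacle (rad).
    dist = np.linalg.norm(p_obst - p_agent)
    return 2.0 * np.arctan2(OBST_RADIUS, dist)

def step(p, heading, speed, p_goal, p_obst, v_obst,
         k_goal=3.0, k_avoid=8.0, k_speed=1.0):
    # One Euler step of heading/speed control; all gains are assumed values.
    beta, theta = bearing(p, p_obst), visual_angle(p, p_obst)

    # Finite-difference estimates of the optical derivatives (bearing change
    # and expansion rate) from a one-step lookahead of agent and obstacle.
    p_agent_next = p + speed * DT * np.array([np.cos(heading), np.sin(heading)])
    p_obst_next = p_obst + v_obst * DT
    beta_dot = wrap(bearing(p_agent_next, p_obst_next) - beta) / DT
    theta_dot = (visual_angle(p_agent_next, p_obst_next) - theta) / DT

    # A collision course is signaled by a nearly constant bearing direction
    # (small |beta_dot|) combined with optical expansion (theta_dot > 0).
    threat = max(theta_dot, 0.0) / (abs(beta_dot) + 0.1)

    # Heading control: steer toward the goal, plus an avoidance term that
    # turns the agent away from the obstacle's bearing when threat is high.
    goal_dir = np.arctan2(p_goal[1] - p[1], p_goal[0] - p[0])
    side = np.sign(wrap(heading - beta))
    if side == 0.0:
        side = 1.0
    heading_dot = -k_goal * wrap(heading - goal_dir) + k_avoid * threat * side

    # Speed control: relax toward the preferred speed, slowing under threat.
    speed_dot = k_speed * (PREF_SPEED / (1.0 + threat) - speed)

    heading = wrap(heading + heading_dot * DT)
    speed = max(speed + speed_dot * DT, 0.0)
    p = p + speed * DT * np.array([np.cos(heading), np.sin(heading)])
    return p, heading, speed

Fitting a model of this kind, in the sense used in the abstract, would then amount to choosing gains that minimize the mean distance between simulated and observed pedestrian positions over each trial, and route selection could be scored by whether the simulated agent passes ahead of or behind the obstacle.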
