Abstract
Over the past decade, considerable progress has been made in understanding and modeling the neural mechanisms that support the perception of self-motion and object motion in humans. Building upon the neural architecture introduced in the STARS (Elder et al., 2009) and ViSTARS (Browning et al., 2009) models, Layton and Fajen have steadily developed a competitive dynamics model of MT/MST that uses optic flow to estimate heading and to detect and perceive independently moving objects. The aim of this study was to systematically test the accuracy, stability, and robustness of the model's estimates in visually realistic environments. Our approach was to couple the model to Microsoft AirSim, a high-fidelity simulation platform built on the Unreal Engine, which allowed us to generate a variety of scenarios involving self-motion through complex, visually realistic environments. We conducted a series of experiments to test the accuracy and robustness of model estimates in the presence of (1) globally discrepant motion introduced by blowing snow, (2) locally discrepant motion introduced by moving objects (e.g., people), (3) variations in lighting and contrast, (4) intermittent blackout, and (5) perturbations resulting from collisions that abruptly alter the direction of self-motion. The model generates accurate and stable heading estimates in static environments and, like humans, is weakly affected by locally and globally discrepant optic flow. Object motion estimation is more affected by discrepant optic flow and depends on the location of objects relative to the focus of expansion. We also discuss attempts to adapt the model for eventual use on board small aerial robots, where constraints on payload and power supply encourage the use of vision-based solutions. In that context, we discuss strategies for speeding up the model to enable self- and object-motion estimation from video in real time.
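As a minimal sketch of the kind of model-simulator coupling described above (not the authors' implementation), the following Python snippet pulls uncompressed scene images from AirSim's Python API, computes dense optic flow with OpenCV's Farneback method as a stand-in for the model's motion front end, and recovers a crude focus-of-expansion (heading) estimate by least squares. The camera name "0", the frame count, and the estimate_foe helper are illustrative assumptions, and the flow and heading steps are generic placeholders for the MT/MST competitive dynamics model.

```python
# Sketch: stream frames from a running AirSim/Unreal simulation and estimate
# a focus of expansion from dense optic flow. Assumes the airsim, opencv-python,
# and numpy packages; the MT/MST model would replace the flow/FoE steps below.
import airsim
import cv2
import numpy as np

client = airsim.MultirotorClient()
client.confirmConnection()

def grab_gray_frame():
    """Request one uncompressed scene image and convert it to grayscale."""
    response = client.simGetImages([
        airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
    ])[0]
    img = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
    img = img.reshape(response.height, response.width, 3)
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

def estimate_foe(flow, stride=8):
    """Least-squares focus-of-expansion estimate from a dense flow field.
    Under pure translation, each flow vector (u, v) at pixel (x, y) points
    away from the FoE (fx, fy), so v*(x - fx) - u*(y - fy) = 0."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h:stride, 0:w:stride]
    u = flow[ys, xs, 0].ravel()
    v = flow[ys, xs, 1].ravel()
    x = xs.ravel().astype(np.float64)
    y = ys.ravel().astype(np.float64)
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    fx, fy = np.linalg.lstsq(A, b, rcond=None)[0]
    return fx, fy

prev = grab_gray_frame()
for _ in range(100):  # process a short video sequence
    curr = grab_gray_frame()
    # Dense Farneback optic flow (placeholder for the model's motion stage).
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    print("FoE (pixels):", estimate_foe(flow))
    prev = curr
```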
Acknowledgement: ONR N00014-18-1-2283