Vision Sciences Society Annual Meeting Abstract  |   September 2019
An integrated neural model of robust self-motion and object motion perception in visually realistic environments
Author Affiliations & Notes
  • Scott T Steinmetz
    Cognitive Science Department, Rensselaer Polytechnic Institute
  • Oliver W Layton
    Department of Computer Science, Colby College
  • N. Andrew Browning
    Perceptual Autonomy
  • Nathaniel V Powell
    Cognitive Science Department, Rensselaer Polytechnic Institute
  • Brett R Fajen
    Cognitive Science Department, Rensselaer Polytechnic Institute
Journal of Vision September 2019, Vol.19, 294a. doi:https://doi.org/10.1167/19.10.294a
Abstract

Over the past decade, considerable progress has been made in understanding and modeling the neural mechanisms that support the perception of self-motion and object motion in humans. Building upon the neural architecture introduced in the STARS (Elder et al., 2009) and ViSTARS (Browning et al., 2009) models, Layton and Fajen have steadily developed a competitive dynamics model of MT/MST that uses optic flow to estimate heading, detect independently moving objects, and perceive their motion. The aim of this study was to systematically test the accuracy, stability, and robustness of model estimates in visually realistic environments. Our approach was to couple the model to Microsoft AirSim, a high-fidelity simulation platform built on the Unreal Engine, which allowed us to generate a variety of scenarios involving self-motion in complex, visually realistic environments. We conducted a series of experiments to test the accuracy and robustness of model estimates in the presence of (1) globally discrepant motion introduced by blowing snow, (2) locally discrepant motion introduced by moving objects (e.g., people), (3) variations in lighting and contrast, (4) intermittent blackout, and (5) perturbations resulting from collisions that abruptly alter the direction of self-motion. The model generates accurate and stable heading estimates in static environments and, like humans, is only weakly affected by locally and globally discrepant optic flow. Object motion estimation is more affected by discrepant optic flow and depends on the location of objects relative to the focus of expansion. We also discuss attempts to adapt the model for eventual use onboard small aerial robots, where constraints on payload and power supply encourage the use of vision-based solutions. In that context, we discuss strategies for speeding up model performance to enable self- and object-motion estimation from video in real time.
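
The coupling between the model and AirSim can be pictured with a minimal sketch along the following lines. It assumes the AirSim Python client and an Unreal Engine environment running locally; the dense Farneback optic flow from OpenCV is only a generic stand-in for the model's motion front end (the MT/MST competitive dynamics stages are not reproduced here), and the camera name "0" and frame count are hypothetical.

```python
import airsim
import cv2
import numpy as np

# Connect to a locally running AirSim/Unreal simulation (hypothetical setup).
client = airsim.MultirotorClient()
client.confirmConnection()

def grab_gray_frame():
    """Request one uncompressed scene image from camera "0" and return it as grayscale."""
    response = client.simGetImages([
        airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
    ])[0]
    frame = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
    # Channel count can differ across AirSim versions, so reshape defensively.
    frame = frame.reshape(response.height, response.width, -1)[:, :, :3]
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

prev = grab_gray_frame()
for _ in range(100):  # number of frames is arbitrary here
    curr = grab_gray_frame()
    # Dense optic flow (Farneback) as a placeholder for the model's motion input.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # flow[..., 0] and flow[..., 1] hold horizontal and vertical image motion,
    # which would feed the heading and object-motion estimation stages downstream.
    prev = curr
```

In the actual experiments, frames would be streamed at the simulation rate and the resulting flow field passed to the model's heading and object-motion stages; the sketch shows only image acquisition and a generic flow estimate.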

Acknowledgement: ONR N00014-18-1-2283 