Vision Sciences Society Annual Meeting Abstract  |   October 2020
Self-regulating neural mechanisms for self-motion estimation from optic flow
Author Affiliations & Notes
  • Scott Steinmetz
    Rensselaer Polytechnic Institute
  • Oliver Layton
    Colby College
  • Nathaniel Powell
    Rensselaer Polytechnic Institute
  • Brett Fajen
    Rensselaer Polytechnic Institute
  • Footnotes
    Acknowledgements  ONR N000141812283
Journal of Vision October 2020, Vol.20, 1212. doi:https://doi.org/10.1167/jov.20.11.1212
Humans are capable of accurately perceiving self-motion direction in many different environments, ranging from the real world to virtual environments to the minimal random-dot scenes commonly used in psychophysical experiments. Neural models of heading perception are less adaptive, typically relying on parameters tuned to accommodate a narrow range of experimental conditions. In the present study, we build upon the competitive dynamics model of primate brain areas MT and MST (Layton & Fajen, 2016) so that it generates robust heading estimates from optic flow in a broad range of scenes, while automatically regulating key parameters that previously needed to be set by hand. In model area MT, speed-cell tuning curves needed to be manually configured to properly encode the range of optic flow speeds, which can vary widely with changes in environmental structure, self-motion speed, and eye height. We adapted the principles of efficient sensory encoding (Simoncelli & Ganguli, 2014) with a temporal component that allows speed cells to dynamically adjust to the distribution of optic flow speeds recently detected by the observer. Manual parameter selection was also required in model area MSTd to properly modulate competition between cells, which balances the stability of heading perception against responsiveness to true changes in heading. One way the visual system could achieve such flexibility across environments is via neural mechanisms that self-regulate the feedback and competition in MSTd. We implemented such a mechanism, using a weighted combination of template cell activities with differing decay rates and competitive dynamics to regulate the recurrent signal. Through model simulations using video from real-world and virtual scenes, we demonstrate how these changes enable flexible adaptation across a range of environments with accuracy similar to that achieved with manually selected parameters.
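The efficient-encoding idea described above can be illustrated with a minimal sketch. All names, parameters, and tuning-curve shapes below are hypothetical and chosen for illustration only; they are not the model's actual implementation. The key principle from efficient sensory encoding is that tuning-curve density should follow the stimulus distribution, so here tuning centers are placed at quantiles of the recently observed optic-flow speeds, letting the population reallocate its coding range as the scene changes:

```python
import numpy as np

def adapted_tuning_centers(recent_speeds, n_cells=8):
    """Place n_cells tuning-curve centers at evenly spaced quantiles of
    the empirical distribution of recently observed flow speeds, so that
    cell density is highest where speeds are most common (an
    efficient-coding heuristic; parameters are illustrative)."""
    quantiles = (np.arange(n_cells) + 0.5) / n_cells
    return np.quantile(recent_speeds, quantiles)

def speed_cell_responses(speed, centers, bandwidth=0.5):
    """Hypothetical log-Gaussian speed tuning: population response of the
    speed cells to a single optic-flow speed (deg/s)."""
    log_s = np.log(speed + 1e-6)
    log_c = np.log(centers + 1e-6)
    return np.exp(-((log_s - log_c) ** 2) / (2.0 * bandwidth ** 2))

# Simulated flow-speed samples from two scenes: slow scenes concentrate
# the centers at low speeds, fast scenes spread them toward high speeds.
rng = np.random.default_rng(0)
slow_scene = rng.lognormal(mean=0.0, sigma=0.3, size=1000)
fast_scene = rng.lognormal(mean=2.0, sigma=0.8, size=1000)
print(adapted_tuning_centers(slow_scene))
print(adapted_tuning_centers(fast_scene))
```

In a running simulation, the quantiles would be recomputed over a sliding (or exponentially weighted) window of recent speeds, which is one simple way to realize the temporal component mentioned in the abstract.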

