December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Population coding of curvilinear self-motion in a neural model of MSTd
Author Affiliations & Notes
  • Oliver Layton
    Colby College
    Rensselaer Polytechnic Institute
  • Scott Steinmetz
    Rensselaer Polytechnic Institute
  • Nathaniel Powell
    Rensselaer Polytechnic Institute
  • Brett Fajen
    Rensselaer Polytechnic Institute
  • Footnotes
Acknowledgements: ONR N00014-18-1-2283
Journal of Vision December 2022, Vol. 22, 4165. https://doi.org/10.1167/jov.22.14.4165
      Oliver Layton, Scott Steinmetz, Nathaniel Powell, Brett Fajen; Population coding of curvilinear self-motion in a neural model of MSTd. Journal of Vision 2022;22(14):4165. https://doi.org/10.1167/jov.22.14.4165.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Real-world self-motion often generates considerably more complex optic flow than the idealized pattern of radial expansion. Movement along a curved path (curvilinear self-motion) contributes to this complexity because the observer’s angular velocity introduces rotation that often makes the flow no longer radial and the motion singularity no longer coincident with heading. Nevertheless, when moving along circular paths, humans are capable of perceiving both their instantaneous heading and the path curvature from optic flow (Li & Cheng, 2011; 2012). Computational models of self-motion perception tend to focus on the removal of rotation from the optic flow field (e.g., Royden, 2002; Perrone, 2018), as it yields radial optic flow that may be used to recover the observer’s instantaneous heading. However, the physiological basis for a strategy whereby neurons compensate completely for rotation has been subject to debate, at least at the level of MSTd (Orban et al., 1992; Danz et al., 2020), an area that has been implicated in self-motion perception. We develop an alternative account wherein the parameters specifying the observer’s curvilinear self-motion are represented as distributed codes signaled by neurons tuned to a diverse set of motion patterns. To test this hypothesis, we used deep learning to decode temporally evolving patterns of MSTd activation from a biologically inspired neural model that processes optic flow. Using this population decoding paradigm, we recovered accurate estimates of path curvature, heading, gaze offset, and path sign on novel self-motion conditions not used to fit the decoder. This was the case for optic flow generated by simulated self-motion through both analytic environments (e.g., a ground plane of dots) and realistic environments rendered by the Unreal Engine, even when curvilinear self-motion changed over time. Our simulations raise the exciting possibility that MSTd may encode a broader range of self-motion parameters than previously thought.
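Why angular velocity displaces the motion singularity can be made concrete with the classical instantaneous-flow decomposition (Longuet-Higgins & Prazdny, 1980), which the abstract implicitly relies on. For a pinhole observer with unit focal length, the image motion of a point at position (x, y) with depth Z splits into a depth-dependent translational term and a depth-independent rotational term:

```latex
% Observer translation T = (T_x, T_y, T_z), rotation \omega = (\omega_x, \omega_y, \omega_z).
\begin{aligned}
\dot{x} &= \frac{-T_x + x\,T_z}{Z} \; + \; \omega_x x y - \omega_y \left(1 + x^2\right) + \omega_z y,\\[4pt]
\dot{y} &= \frac{-T_y + y\,T_z}{Z} \; + \; \omega_x \left(1 + y^2\right) - \omega_y x y - \omega_z x.
\end{aligned}
```

Because the rotational terms do not depend on Z, rotation adds a global field that shifts the singularity of the combined flow away from the heading direction; this is exactly the complication that curvilinear self-motion introduces for heading recovery.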
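To illustrate the population-decoding idea in miniature, the following sketch is a hypothetical toy example, not the authors' model or decoder: it simulates an MSTd-like population with broad, heterogeneous Gaussian tuning over heading and path curvature (stand-ins for the model's diverse motion-pattern tuning), fits a ridge-regularized linear readout, and decodes novel self-motion conditions not used to fit it. The tuning widths, unit counts, and parameter ranges are all illustrative assumptions.

```python
# Hypothetical sketch: linear readout of heading and path curvature from a
# simulated population with diverse tuning. Not the authors' model.
import numpy as np

rng = np.random.default_rng(0)

N_UNITS = 200
# Each unit prefers some heading (deg) and signed path curvature (1/m).
pref_heading = rng.uniform(-30.0, 30.0, N_UNITS)
pref_curv = rng.uniform(-0.1, 0.1, N_UNITS)

def population_response(heading, curvature):
    """Gaussian tuning over (heading, curvature); stands in for model MSTd activity."""
    return np.exp(-((heading - pref_heading) / 15.0) ** 2
                  - ((curvature - pref_curv) / 0.05) ** 2)

# Training set: random curvilinear self-motion conditions.
train = np.column_stack([rng.uniform(-30, 30, 500), rng.uniform(-0.1, 0.1, 500)])
R = np.array([population_response(h, c) for h, c in train])

# Ridge-regularized linear decoder mapping population activity -> (heading, curvature).
lam = 1e-3
W = np.linalg.solve(R.T @ R + lam * np.eye(N_UNITS), R.T @ train)

# Decode novel conditions not used to fit the decoder.
test = np.column_stack([rng.uniform(-25, 25, 100), rng.uniform(-0.08, 0.08, 100)])
R_test = np.array([population_response(h, c) for h, c in test])
pred = R_test @ W

print("mean |heading error| (deg):", np.abs(pred[:, 0] - test[:, 0]).mean())
print("mean |curvature error| (1/m):", np.abs(pred[:, 1] - test[:, 1]).mean())
```

The design choice mirrors the abstract's logic: no unit compensates for rotation or encodes any one parameter explicitly, yet the parameters are linearly recoverable from the distributed code.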
