Abstract
Real-world self-motion often generates considerably more complex optic flow than the idealized pattern of radial expansion. Movement along a curved path (curvilinear self-motion) contributes to this complexity because the observer’s angular velocity introduces rotation into the flow field, which often renders the flow non-radial and displaces the motion singularity from the heading direction. Nevertheless, when moving along circular paths, humans can perceive both their instantaneous heading and their path curvature from optic flow (Li & Cheng, 2011, 2012). Computational models of self-motion perception tend to focus on removing the rotation from the optic flow field (e.g., Royden, 2002; Perrone, 2018), because doing so yields radial optic flow from which the observer’s instantaneous heading may be recovered. However, the physiological basis for a strategy whereby neurons completely compensate for rotation has been debated, at least at the level of MSTd (Orban et al., 1992; Danz et al., 2020), an area implicated in self-motion perception. We develop an alternative account in which the parameters specifying the observer’s curvilinear self-motion are represented as distributed codes signaled by neurons tuned to a diverse set of motion patterns. To test this hypothesis, we used deep learning to decode temporally evolving patterns of MSTd activation produced by a biologically inspired neural model that processes optic flow. Using this population decoding paradigm, we recovered accurate estimates of path curvature, heading, gaze offset, and path sign for novel self-motion conditions not used to fit the decoder. This held for optic flow generated by simulated self-motion through both analytic environments (e.g., a ground plane of dots) and realistic environments rendered by the Unreal Engine, even when the curvilinear self-motion changed over time. Our simulations raise the exciting possibility that MSTd may encode a broader range of self-motion parameters than previously thought.