Abstract
Humans are capable of accurately perceiving the direction of self-motion in many different environments, ranging from the real world to virtual environments to the minimal random-dot scenes commonly used in psychophysical experiments. Neural models of heading perception are less adaptive, typically relying on parameters tuned to accommodate a narrow range of experimental conditions. In the present study, we build upon the competitive dynamics model of primate brain areas MT and MST (Layton & Fajen, 2016) so that it generates robust heading estimates from optic flow in a broad range of scenes, while automatically regulating key parameters that previously needed to be set by hand. In model area MT, speed-cell tuning curves needed to be manually configured to properly encode the range of optic flow speeds, which can vary widely with changes in environmental structure, self-motion speed, and eyeheight. We adapted the principles of efficient sensory encoding (Simoncelli & Ganguli, 2014), adding a temporal component that allows speed cells to dynamically adjust to the distribution of optic flow speeds recently detected by the observer. Manual parameter selection was also required in model area MSTd to properly modulate competition between cells, which balances the stability of heading perception against responsiveness to true changes in heading. One way the visual system could achieve such flexibility across environments is via neural mechanisms that self-regulate the feedback and competition in MSTd. We implemented such a mechanism, using a weighted combination of template cell activities with differing decay rates and competitive dynamics to regulate the recurrent signal. Through model simulations using video from real-world and virtual scenes, we demonstrate how these changes enable flexible adaptation across a range of environments with accuracy comparable to that achieved with manually selected parameters.
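To make the adaptive speed-encoding idea concrete, the following Python sketch illustrates one way speed-cell preferred speeds could track a running estimate of the optic flow speed distribution, in the spirit of efficient sensory encoding with a temporal component. This is an illustrative sketch only, not the model's actual implementation: the class name AdaptiveSpeedPopulation, the decay parameter tau, the quantile-based placement of preferred speeds, and the log-Gaussian tuning width are all assumptions introduced here for demonstration.

```python
import numpy as np

class AdaptiveSpeedPopulation:
    """Illustrative sketch (not the authors' implementation): preferred speeds
    are re-positioned at quantiles of a running estimate of the optic flow
    speed distribution, following the efficient-coding idea that tuning-curve
    density should track the stimulus prior."""

    def __init__(self, n_cells=20, n_bins=200, max_speed=50.0, tau=0.95):
        self.n_cells = n_cells
        self.bin_edges = np.linspace(0.0, max_speed, n_bins + 1)
        self.hist = np.ones(n_bins)        # weak uniform prior over speeds
        self.tau = tau                     # decay rate of the running histogram
        self.preferred = self._place_cells()

    def _place_cells(self):
        # Place preferred speeds at evenly spaced quantiles of the estimated
        # speed distribution, so tuning-curve density follows the prior.
        cdf = np.cumsum(self.hist)
        cdf /= cdf[-1]
        centers = 0.5 * (self.bin_edges[:-1] + self.bin_edges[1:])
        quantiles = (np.arange(self.n_cells) + 0.5) / self.n_cells
        return np.interp(quantiles, cdf, centers)

    def update(self, flow_speeds):
        # Temporal component: exponentially weighted histogram of the speeds
        # detected in the current optic flow field.
        counts, _ = np.histogram(flow_speeds, bins=self.bin_edges)
        self.hist = self.tau * self.hist + (1.0 - self.tau) * counts
        self.preferred = self._place_cells()

    def respond(self, flow_speeds, sigma=0.3):
        # Log-Gaussian speed tuning around each preferred speed (an assumption).
        log_s = np.log1p(np.asarray(flow_speeds))[:, None]
        log_pref = np.log1p(self.preferred)[None, :]
        return np.exp(-0.5 * ((log_s - log_pref) / sigma) ** 2).mean(axis=0)

if __name__ == "__main__":
    # Synthetic stream of per-pixel flow speeds standing in for real video input.
    rng = np.random.default_rng(0)
    pop = AdaptiveSpeedPopulation()
    for _ in range(100):
        frame_speeds = rng.gamma(2.0, 3.0, size=500)
        pop.update(frame_speeds)
    activity = pop.respond(frame_speeds)   # population response to the last frame
```

Under these assumptions, a scene with faster or slower flow (e.g., a change in environmental structure or self-motion speed) gradually shifts the histogram and therefore the placement of the speed tuning curves, without any manual re-tuning.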