Abstract
Along with binocular disparity, motion parallax provides one of the most potent quantitative cues to depth. Whereas much is known about the neural coding of binocular disparity signals in visual cortex, the neural substrates for depth perception based on motion parallax remained unknown until recently. Using a virtual-reality motion system, we have conducted a series of experiments in which monkeys monocularly view stimuli that simulate surfaces at different depths during passive whole-body self-motion. We demonstrate that neurons in area MT combine retinal image motion (which is ambiguous with respect to depth sign on its own) with extraretinal signals to become selective for depth sign (near vs. far) based on motion parallax. We further show that the critical extraretinal input for disambiguating retinal image motion is a smooth eye movement command signal, not a vestibular signal. In ongoing studies, we are recording from MT neurons in animals trained to discriminate depth from motion parallax, to test whether MT activity is coupled to perceptual decisions about depth from motion parallax.
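The disambiguation described above has a simple geometric basis. During lateral self-motion with fixation maintained on a point in the scene, objects nearer than fixation slip across the retina in the same direction as the compensatory pursuit eye movement, while farther objects slip in the opposite direction; retinal slip alone, without the pursuit signal, leaves depth sign undetermined. The sketch below is purely illustrative, assuming a 1-D sign convention (positive = rightward) for both retinal and pursuit velocities; it is not the authors' neural model.

```python
def depth_sign(retinal_velocity: float, pursuit_velocity: float) -> str:
    """Classify depth sign by comparing retinal slip with smooth pursuit.

    Illustrative convention (an assumption, not from the source):
    positive values denote rightward motion. With fixation held during
    lateral self-motion, retinal slip in the SAME direction as pursuit
    indicates a surface nearer than fixation; slip OPPOSITE to pursuit
    indicates a farther surface.
    """
    if pursuit_velocity == 0.0 or retinal_velocity == 0.0:
        # No extraretinal (pursuit) signal, or no retinal slip:
        # depth sign cannot be recovered from these inputs alone.
        return "ambiguous"
    # Positive ratio: slip and pursuit share a direction -> near.
    return "near" if retinal_velocity / pursuit_velocity > 0 else "far"
```

For example, with leftward pursuit (negative) and leftward retinal slip (negative), the function reports "near"; the same slip paired with rightward pursuit reports "far", illustrating why the identical retinal image is ambiguous without the eye movement command signal.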