December 2010
Volume 10, Issue 15
OSA Fall Vision Meeting Abstract | December 2010
Neural representation of depth from motion parallax in visual area MT
Author Affiliations
  • Greg DeAngelis
    University of Rochester
Journal of Vision, December 2010, Vol. 10, 24. https://doi.org/10.1167/10.15.24
Abstract

Along with binocular disparity, motion parallax provides one of the most potent quantitative cues to depth. Whereas much is known about the neural coding of binocular disparity signals in visual cortex, the neural substrates for depth perception based on motion parallax have remained unknown until recently. Using a virtual-reality motion system, we have conducted a series of experiments in which monkeys monocularly view stimuli that simulate surfaces at different depths during passive whole-body self-motion. We demonstrate that neurons in area MT combine retinal image motion (which is ambiguous with respect to depth sign on its own) with extraretinal signals to become selective for depth sign (near vs far) based on motion parallax. We further demonstrate that the critical extraretinal input for disambiguating retinal image motion is a smooth eye movement command signal, not a vestibular signal. In ongoing studies, we are recording from MT neurons in animals trained to discriminate depth from motion parallax, to demonstrate that MT neurons are coupled to perceptual decisions about depth from motion parallax.
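
A geometric sketch, not given in the abstract, may help clarify why retinal image motion alone is ambiguous with respect to depth sign. Assume (the symbols below are illustrative, not from the abstract) an observer translating laterally at speed v while pursuit keeps a point at viewing distance f fixated, with the visual direction of an object at distance d and the eye orientation measured in the same rotational sense. The pursuit rate is then \dot\alpha \approx -v/f and the object's retinal image velocity is \dot\theta \approx v(d - f)/(fd), giving the motion/pursuit ratio

\frac{\dot\theta}{\dot\alpha} \;=\; \frac{f - d}{d},
\qquad\text{equivalently}\qquad
\frac{f - d}{f} \;=\; \frac{\dot\theta/\dot\alpha}{1 + \dot\theta/\dot\alpha}.

The retinal speed |\dot\theta| by itself constrains only the magnitude |f - d|/(fd): the same image motion could arise from a near surface during self-motion in one direction or a far surface during self-motion in the other. It is the sign of \dot\theta relative to \dot\alpha, i.e., whether the image moves with or against the pursuit direction, that specifies near versus far, which is consistent with the finding above that a smooth eye movement command signal supplies the disambiguating extraretinal input.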
