September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Encoding and decoding in neural populations with non-Gaussian tuning: the example of 3D motion tuning in MT
Author Affiliations
  • Kathryn Bonnen
    Center for Perceptual Systems, University of Texas at Austin
    Neuroscience, College of Natural Sciences, University of Texas at Austin
  • Thaddeus Czuba
    Center for Perceptual Systems, University of Texas at Austin
    Psychology, College of Liberal Arts, University of Texas at Austin
  • Adam Kohn
    Neuroscience, Albert Einstein College of Medicine
  • Lawrence Cormack
    Center for Perceptual Systems, University of Texas at Austin
    Psychology, College of Liberal Arts, University of Texas at Austin
  • Alexander Huk
    Center for Perceptual Systems, University of Texas at Austin
    Psychology, College of Liberal Arts, University of Texas at Austin
Journal of Vision August 2017, Vol.17, 409. doi:https://doi.org/10.1167/17.10.409
      Kathryn Bonnen, Thaddeus Czuba, Adam Kohn, Lawrence Cormack, Alexander Huk; Encoding and decoding in neural populations with non-Gaussian tuning: the example of 3D motion tuning in MT. Journal of Vision 2017;17(10):409. https://doi.org/10.1167/17.10.409.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

From visual orientation in primate V1 to wind velocity in cricket cercal cells, neuronal tuning almost always follows a bell-shaped function. While this is a comforting empirical regularity, here we report that a basic visual property (3D motion direction) is encoded with tuning that is staggeringly non-Gaussian: characterized by distinct plateaus separated by steep cliffs, i.e., "terraces". To understand the source and implications of this unconventional tuning form, we first examined how the "terraced" encoding scheme might arise from tuning to basic 2D motion cues. We found that canonical forms of frontoparallel velocity tuning interact with the geometry of 3D space and binocularity to yield these 3D direction tuning shapes. The resulting encoding model takes MT's canonical log-Gaussian tuning to monocular velocities, adds the two monocular responses, and then performs the requisite trigonometric transformations to extract 3D direction from the differential velocities in the two eyes. This simple additive, trigonometric model predicted 3D direction tuning well (r > 0.5 for 75% of neurons). We then considered how 3D direction can be decoded from such tuning curves. Modeling estimation and discrimination of 3D directions revealed three surprising insights: (a) coarse direction discrimination likely relies on ocular dominance rather than on differential velocity tuning across the eyes; (b) estimation of 3D direction is more precise for motions roughly towards/away from the observer than for motions closer to frontoparallel; (c) if 3D motion perception relies on this MT tuning, performance on 3D motion direction discrimination tasks should change dramatically as a function of viewing distance. In summary, our model of 3D direction encoding in MT captures the drastically non-Gaussian tuning curves observed empirically and examines their consequences for decoding and perception.
This framework should generalize to the encoding/decoding of other environmentally realistic features, e.g., how retinal orientation relates to 3D slant and tilt.
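The additive, trigonometric encoding model described above can be sketched in code. This is a minimal illustrative reconstruction, not the authors' fitted model: the parameter values (preferred speed `v_pref`, bandwidth `sigma`, interocular distance `iod`, viewing distance `dist`) and the small-angle binocular-geometry approximation are assumptions chosen for clarity.

```python
import numpy as np

def log_gaussian(v, v_pref=1.0, sigma=1.0, v0=0.1):
    """Canonical log-Gaussian speed tuning (assumed form). Responds only to
    motion in the preferred (positive) direction via half-wave rectification;
    v0 is a small offset so zero speed stays finite."""
    speed = np.maximum(v, 0.0)
    return np.exp(-(np.log((speed + v0) / v_pref)) ** 2 / (2 * sigma ** 2))

def monocular_velocities(theta, iod=0.065, dist=0.57):
    """Project a unit 3D velocity at direction theta (radians; 0 = rightward
    frontoparallel, pi/2 = toward the observer) into left/right-eye horizontal
    retinal velocities, under a small-angle approximation: towards/away motion
    produces opposite-signed velocity shifts in the two eyes."""
    vx, vz = np.cos(theta), np.sin(theta)
    v_left = vx + vz * (iod / 2) / dist
    v_right = vx - vz * (iod / 2) / dist
    return v_left, v_right

def mt_response(theta, **tuning_kw):
    """Additive binocular model: sum the two monocular log-Gaussian responses.
    Rectification plus the compressive log-speed axis flattens the curve over
    ranges of theta, which is the kind of mechanism that could yield the
    plateau-like ("terraced") 3D direction tuning described in the abstract."""
    v_l, v_r = monocular_velocities(theta)
    return log_gaussian(v_l, **tuning_kw) + log_gaussian(v_r, **tuning_kw)

# Evaluate the model tuning curve over all 3D directions in the x-z plane.
thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
tuning = mt_response(thetas)
```

Varying `dist` in this sketch changes the monocular velocity split for a fixed 3D direction, which is one way to see why the decoding analysis predicts viewing-distance-dependent discrimination performance.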

Meeting abstract presented at VSS 2017
