Journal of Vision, Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Quantitative Characterization of the Human Retinotopic Map Based on Quasiconformal Mapping
Author Affiliations & Notes
  • Duyan Ta
    School of Computing, Informatics, and Decision Systems Engineering, Arizona State University
  • Yanshuai Tu
    School of Computing, Informatics, and Decision Systems Engineering, Arizona State University
  • Zhong-Lin Lu
    Division of Arts and Sciences, New York University Shanghai, Shanghai, China
    Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
    NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
  • Yalin Wang
    School of Computing, Informatics, and Decision Systems Engineering, Arizona State University
  • Footnotes
    Acknowledgements  This work was partially supported by the National Science Foundation (DMS-1413417625 and DMS-1412722)
Journal of Vision September 2021, Vol.21, 2342. doi:https://doi.org/10.1167/jov.21.9.2342
Duyan Ta, Yanshuai Tu, Zhong-Lin Lu, Yalin Wang; Quantitative Characterization of the Human Retinotopic Map Based on Quasiconformal Mapping. Journal of Vision 2021;21(9):2342. https://doi.org/10.1167/jov.21.9.2342.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Visual perception and cognition involve a cascade of geometric transformations from the retina to high-level cortical areas. Yet our perception is mostly veridical and is largely invariant across individuals. The transformation from the retina to the primary visual cortex (V1) has been modeled with conformal and quasiconformal maps, with the latter providing a better fit to empirical retinotopic maps. However, these transformations have not been quantified in a way that reveals their effects in the cascade. We developed a new quantification framework for retinotopic maps based on computational conformal geometry and quasiconformal Teichmüller space theory. The framework has three key components: (1) pre-processing retinotopic maps to ensure that they preserve topology despite the low spatial resolution and low signal-to-noise ratio of retinotopy data; (2) quantifying the mapping between the retina and V1 (conformal or not) with Beltrami coefficients; and (3) mathematical and numerical methods that make the quantification fully invertible, supporting both forward and backward transformations between visual areas. The result is a "Beltrami coefficient map" (BCM) that allowed us to measure and compute the forward and backward transformations between the retina and V1. We applied the framework to the V1 retinotopic maps from the Human Connectome Project (n=181), the largest state-of-the-art retinotopy dataset currently available. The average "distortion" measured on the left and right BCMs was 0.310±0.110 and 0.312±0.105, respectively (range: 0 ≤ x < 1, with 0 being most conformal). We showed that the transformation from the retina to V1 is quasiconformal. Does this mean that later stages of visual processing can "correct" the distortions to generate veridical perception? How do visual or neural disorders affect the transformations? Future applications of this mathematical framework to all visual areas may shed new light on these questions.
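To illustrate the quantity underlying the BCM: for a planar map f, the Beltrami coefficient is μ = f_z̄ / f_z (the ratio of Wirtinger derivatives), and |μ| measures the local deviation from conformality, with |μ| = 0 for a conformal map and 0 ≤ |μ| < 1 for a quasiconformal one. The sketch below is not the authors' pipeline (which works on triangulated cortical surfaces); it is a minimal grid-based illustration assuming a map sampled on a regular planar grid, with derivatives estimated by finite differences.

```python
import numpy as np

def beltrami_coefficient(fx, fy, h=1.0):
    """Estimate the Beltrami coefficient mu of a planar map f = fx + i*fy
    sampled on a regular grid with spacing h, via finite differences.
    |mu| = 0 for a conformal map; 0 <= |mu| < 1 for a quasiconformal map."""
    f = fx + 1j * fy
    # Partial derivatives (axis 1 varies x, axis 0 varies y).
    f_x = np.gradient(f, h, axis=1)
    f_y = np.gradient(f, h, axis=0)
    # Wirtinger derivatives: f_z = (f_x - i f_y)/2, f_zbar = (f_x + i f_y)/2.
    f_z = 0.5 * (f_x - 1j * f_y)
    f_zbar = 0.5 * (f_x + 1j * f_y)
    return f_zbar / f_z

# The identity map z -> z is conformal, so |mu| = 0 everywhere.
y, x = np.mgrid[0:8, 0:8].astype(float)
mu_id = beltrami_coefficient(x, y)

# An anisotropic stretch z -> 2x + i*y is quasiconformal with mu = 1/3.
mu_stretch = beltrami_coefficient(2 * x, y)
```

For linear maps the finite differences are exact, so the two test maps recover their analytic μ values; on real retinotopy data the abstract's reported "distortion" is a summary of |μ| over the mapped region.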
