December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Topological Receptive Field Model: An enhancement to the pRF
Author Affiliations & Notes
  • Yanshuai Tu
    School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
  • Zhong-Lin Lu
    Division of Arts and Sciences, NYU Shanghai, Shanghai, China
    Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
    NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
  • Yalin Wang
    School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
  • Footnotes
    Acknowledgements: R01EY032125
Journal of Vision December 2022, Vol.22, 3516. doi:https://doi.org/10.1167/jov.22.14.3516
Abstract

The population receptive field (pRF) model is the state-of-the-art method for retinotopic map analysis. However, because of the relatively low signal-to-noise ratio and low spatial resolution of fMRI signals, large portions of the retinotopic maps obtained from voxel-wise pRF decoding often violate the topological condition observed in neurophysiology, namely that nearby neurons have nearby receptive fields. Imposing the topological condition when decoding fMRI time series is advantageous but challenging. Here, we propose a topological receptive field (tRF) framework that imposes the topological condition in both decoding and visual area segmentation by iteratively combining topology-preserving segmentation with topological fMRI decoding, using the Beltrami coefficient, a metric from quasiconformal theory, to quantify the topological condition. We validated the proposed framework on both synthetic and real human retinotopy data. The synthetic data were generated using the double-sech model with two levels of fMRI noise and then decoded with both tRF and pRF. We found that tRF performed better than pRF, with a smaller average visual coordinate recovery error (2.485 vs. 2.924 degrees) and no violation of the topological condition (0 vs. 393 flipped triangles out of 2,798). We also compared the performance of the two methods on the retinotopic maps of 12 visual areas from the first three observers in the Human Connectome Project 7T Retinotopy dataset. The results showed that tRF provided better fits to the fMRI time series than pRF (average RMSE = 0.273 vs. 0.276) and generated no topological violations (0 vs. 870 flipped triangles out of 19,640). To our knowledge, this is the first work that enforces the topological condition in decoding retinotopic fMRI signals, and the first automatic visual area segmentation method that preserves the topology graph. The general framework can be extended to other sensory maps.
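The flipped-triangle counts reported above follow directly from how orientation behaves under a piecewise-linear map between a flattened cortical patch and the visual field, and the Beltrami coefficient mentioned in the abstract gives the same information per triangle. The sketch below is not the authors' implementation; the array names and the triangulated-mesh input format are illustrative assumptions. It computes, for each triangle, the magnitude of the Beltrami coefficient of the affine map between the two domains and flags orientation-reversed (flipped) triangles.

```python
# Minimal sketch (illustrative, not the authors' code): per-triangle Beltrami
# coefficients and flipped-triangle detection for a piecewise-linear map from
# a flattened cortical patch to decoded visual-field coordinates.
import numpy as np

def beltrami_and_flips(cortex_xy, visual_xy, triangles):
    """cortex_xy, visual_xy: (N, 2) vertex coordinates; triangles: (M, 3) vertex indices.

    Returns |mu| per triangle (Beltrami coefficient magnitude of the affine map
    on each triangle) and a boolean mask of orientation-reversed triangles.
    """
    z = cortex_xy[:, 0] + 1j * cortex_xy[:, 1]   # source domain as complex numbers
    w = visual_xy[:, 0] + 1j * visual_xy[:, 1]   # target (visual-field) coordinates

    mu_abs = np.empty(len(triangles))
    flipped = np.empty(len(triangles), dtype=bool)
    for t, (i, j, k) in enumerate(triangles):
        # On each triangle the map is affine: f(z) = a*z + b*conj(z) + c.
        # Solve for (a, b, c) from the three vertex correspondences.
        A = np.array([[z[i], np.conj(z[i]), 1.0],
                      [z[j], np.conj(z[j]), 1.0],
                      [z[k], np.conj(z[k]), 1.0]])
        a, b, _ = np.linalg.solve(A, w[[i, j, k]])
        mu_abs[t] = np.abs(b) / max(np.abs(a), 1e-12)
        # The Jacobian determinant is |a|^2 - |b|^2, so |mu| >= 1 means the
        # triangle is degenerate or flipped, i.e., the topological condition fails.
        flipped[t] = mu_abs[t] >= 1.0
    return mu_abs, flipped
```

Under this formulation the topological condition holds exactly when |mu| < 1 on every triangle (positive Jacobian); the tRF framework described in the abstract enforces this during decoding, whereas a sketch like the one above only diagnoses violations after the fact.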
