December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Accurate and automated delineation of V1-V3 boundaries by a CNN
Author Affiliations & Notes
  • Noah C. Benson
    University of Washington
  • Shaoling Chen
    New York University
  • Hiromasa Takemura
    National Institute for Physiological Sciences, Okazaki, Japan
    Graduate University for Advanced Studies, SOKENDAI, Hayama, Japan
    National Institute of Information and Communications Technology, Koganei, Japan
    Osaka University
  • Jonathan Winawer
    New York University
  • Footnotes
    Acknowledgements  This work was supported by NIH NEI award 1R01EY033628.
Journal of Vision December 2022, Vol. 22, 3681. https://doi.org/10.1167/jov.22.14.3681
Citation:

      Noah C. Benson, Shaoling Chen, Hiromasa Takemura, Jonathan Winawer; Accurate and automated delineation of V1-V3 boundaries by a CNN. Journal of Vision 2022;22(14):3681. https://doi.org/10.1167/jov.22.14.3681.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Introduction. Delineation of retinotopic map boundaries in human visual cortex is a time-consuming task. Automated methods exist, based either on anatomy alone (the cortical folding pattern; Benson et al., 2014; DOI:10.1016/j.cub.2012.09.014) or on a combination of anatomy and retinotopic mapping measurements (Benson & Winawer, 2018; DOI:10.7554/eLife.40224), but human experts remain more accurate than these methods (Benson et al., 2021; DOI:10.1101/2020.12.30.424856). Convolutional Neural Networks (CNNs) are powerful tools for image processing, and recent work has shown that they can predict polar angle and eccentricity maps in individual subjects from anatomy alone (Ribeiro et al., 2021; DOI:10.1016/j.neuroimage.2021.118624). We hypothesized that a CNN could predict V1, V2, and V3 boundaries in individual subjects with greater accuracy than existing methods.

Methods. We used the expert-drawn V1-V3 boundaries from Benson et al. (2021) for the subjects in the Human Connectome Project 7 Tesla Retinotopy Dataset (Benson et al., 2018; DOI:10.1167/18.13.23) as training (N=135) and test data (N=32). We constructed a U-Net CNN with a ResNet-18 backbone and trained it with either anatomical (curvature, thickness, surface area, and sulcal depth) or functional (retinotopic) maps as input.

Results. CNN predictions outperformed the other methods. The median Dice coefficients between predicted and expert-drawn labels in the test dataset were 0.77 for the CNN trained on anatomical data and 0.90 for the CNN trained on functional data. In comparison, the coefficients for existing methods based on anatomical or anatomical-plus-functional data were 0.70 and 0.72, respectively. These results demonstrate that even with a small training dataset, CNNs excel at automatically and accurately labeling visual areas on human brains. This method can facilitate vision-science neuroimaging experiments by making an otherwise difficult and subjective process fast, precise, and reliable.
