September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2021
Identifying and localizing retinal features that predict human contrast sensitivity via deep learning
Author Affiliations & Notes
  • MiYoung Kwon
    Northeastern University
  • Rong Liu
  • Footnotes
    Acknowledgements  This work was supported by NIH/NEI Grant R01EY027857 and Research to Prevent Blindness (RPB)/Lions Clubs International Foundation (LICF) low vision research award.
Journal of Vision September 2021, Vol.21, 2615. doi:https://doi.org/10.1167/jov.21.9.2615

      MiYoung Kwon, Rong Liu; Identifying and localizing retinal features that predict human contrast sensitivity via deep learning. Journal of Vision 2021;21(9):2615. https://doi.org/10.1167/jov.21.9.2615.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Luminance contrast, the difference in intensity between light and dark regions of an image, is a fundamental building block of human pattern vision. While it is well known that contrast information is first encoded by the center-surround structure of retinal ganglion cell (RGC) receptive fields, relatively little is known about the quantitative relationship between RGCs and psychophysically measured human contrast sensitivity. Here we aimed to predict human contrast sensitivity directly from structural retinal imaging data and to localize the retinal features most closely linked to contrast sensitivity. Data were collected from a total of 262 eyes, including both normal healthy and glaucomatous eyes. For each eye, we obtained cross-sectional retinal images centered on the fovea via Spectral-Domain Optical Coherence Tomography (SD-OCT) along with Pelli-Robson contrast sensitivity data. We adopted a deep residual neural network (ResNet) trained on the OCT structural images to predict contrast sensitivity. We evaluated the network's prediction performance and extracted attention maps representing the critical features the network learned for the output prediction. Our results showed that the network produced high prediction performance, with a mean squared error of 0.01 and a mean absolute error of 0.09. Importantly, our attention map analysis further revealed that the network utilized structural information extracted from the thickness features of the Ganglion Cell Layer (containing RGC bodies) and the Inner Plexiform Layer (containing RGC dendritic structures). In particular, structural information within the perifoveal region of the retina was most critical to the output prediction, consistent with the notion that the RGC receptive fields responsible for processing foveal visual input are laterally displaced. Our work demonstrates that psychophysically measured human contrast sensitivity can be reliably predicted from retinal structural data alone. Our findings further highlight a determining role of RGC sampling density in human contrast sensitivity.
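
The pipeline described above, training a deep residual network on SD-OCT images to regress contrast sensitivity and extracting attention maps to localize the retinal structures driving the prediction, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the backbone choice (resnet18), the single-channel input, the log-contrast-sensitivity target, the mean-squared-error loss, and the Grad-CAM-style attention routine are all assumptions made for illustration.

```python
# Hypothetical sketch of OCT-to-contrast-sensitivity regression with a
# Grad-CAM-style attention map. Not the authors' code; all names are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class ContrastSensitivityNet(nn.Module):
    """ResNet-18 backbone with a single-output regression head (assumed)."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # OCT B-scans are grayscale, so accept one input channel instead of three.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Replace the classification head with a single regression output
        # (assumed here to be Pelli-Robson log contrast sensitivity).
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x).squeeze(-1)


def train_step(model, optimizer, oct_batch, log_cs_batch):
    """One gradient step minimizing mean squared error on a mini-batch."""
    model.train()
    optimizer.zero_grad()
    pred = model(oct_batch)                # shape (B,)
    loss = F.mse_loss(pred, log_cs_batch)  # regression loss
    loss.backward()
    optimizer.step()
    return loss.item()


def attention_map(model, oct_image):
    """Coarse attention map from the last convolutional block (Grad-CAM style)."""
    saved = {}

    def hook(_module, _inputs, output):
        output.retain_grad()               # keep gradients on this activation
        saved["act"] = output

    handle = model.backbone.layer4.register_forward_hook(hook)
    model.eval()
    pred = model(oct_image.unsqueeze(0))   # add batch dimension
    pred.sum().backward()                  # gradient of the prediction
    handle.remove()

    act = saved["act"]                                   # (1, C, H, W)
    weights = act.grad.mean(dim=(2, 3), keepdim=True)    # channel importance
    cam = F.relu((weights * act).sum(dim=1))             # (1, H, W)
    return (cam / (cam.max() + 1e-8)).squeeze(0).detach()
```

Averaging such attention maps across eyes and overlaying them on segmented retinal layers is one plausible way to ask, as the abstract does, which layers and which retinal eccentricities carry the predictive signal.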
