February 2022
Volume 22, Issue 3
Open Access
Optica Fall Vision Meeting Abstract
Contributed Session I: A generative adversarial deep neural network to translate between ocular imaging modalities while maintaining anatomical fidelity
Author Affiliations
  • Sharif Amit Kamran
    Department of Computer Science and Engineering, University of Nevada, Reno, USA.
  • Khondker Fariha Hossain
    Department of Computer Science and Engineering, University of Nevada, Reno, USA.
  • Alireza Tavakkoli
    Department of Computer Science and Engineering, University of Nevada, Reno, USA.
  • Joshua Ong
School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA.
  • Stewart Lee Zuckerbrod
Houston Eye Associates, Houston, TX, USA.
Journal of Vision February 2022, Vol.22, 3. doi:https://doi.org/10.1167/jov.22.3.3
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Certain ocular imaging procedures, such as fluorescein angiography (FA), are invasive and carry the potential for adverse side effects, while others, such as funduscopy, are non-invasive and safe for the patient. However, effective diagnosis of ophthalmic conditions requires multiple modalities of data, and thus potentially invasive procedures. In this study, we propose a novel conditional generative adversarial network (GAN) that synthesizes FA images from fundus photographs while simultaneously predicting retinal degeneration. The proposed system images the retinal vasculature non-invasively and uses the cross-modality images to predict the presence of retinal abnormalities. A major contribution of this work is a semi-supervised training approach that mitigates the data dependency from which traditional deep learning architectures suffer. Our experiments confirm that the proposed architecture outperforms state-of-the-art generative networks for image synthesis across imaging modalities. In particular, the structural accuracy of the translated images differs from the state of the art by a statistically significant margin (p < .0001). Moreover, our results confirm that the proposed vision transformers generalize well to out-of-distribution data sets for retinal disease prediction, a problem faced by many traditional deep networks.

Footnotes
 Funding: This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. 80NSSC20K1831.