August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Perception of retinal images: Can artificial intelligence help us discover new diagnostic features?
Author Affiliations & Notes
  • Lei Yuan
    University of British Columbia
  • Gulcenur Ozturan
    University of British Columbia
  • Ipek Oruc
    University of British Columbia
  • Footnotes
    Acknowledgements  This work was supported by a Natural Sciences and Engineering Research Council of Canada Discovery Grant RGPIN-2019-05554 (IO) and an Accelerator Supplement RGPAS-2019-00026 (IO), a UBC Data Science Institute award (IO & GO), a Faculty of Medicine SSRP Award (LY), and a DMCBH Kickstart award (IO).
Journal of Vision August 2023, Vol.23, 5162. doi:https://doi.org/10.1167/jov.23.9.5162
      Lei Yuan, Gulcenur Ozturan, Ipek Oruc; Perception of retinal images: Can artificial intelligence help us discover new diagnostic features?. Journal of Vision 2023;23(9):5162. https://doi.org/10.1167/jov.23.9.5162.


      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Medical images are a rich source of information regarding health. Diagnosticians are trained to sift through them to detect subtle signs of pathological processes, and to ignore vast variations unrelated to pathology. Retinal images are routinely used in the diagnosis and management of ocular diseases. Might there be signs of pathology in a retinal image, beyond eye diseases, that are hiding in plain sight but currently overlooked? Convolutional neural networks (CNNs) trained on retinal fundus images can classify patient sex, a trait that is invisible to the diagnostician (e.g., ophthalmologist) in this modality. Recent work on the interpretation of a CNN model trained for sex classification has elucidated features within fundus images that were relevant to this task (Delavari et al., 2022). Using patient sex as a case study, we investigated whether human observers can be trained to recognize “invisible” patient traits from fundoscopic images. We examined a group of diagnosticians (Expert, N=23) and a comparison group (Non-expert, N=31). In the pre-training phase, baseline sex recognition was assessed via a 2-alternative forced-choice (2-AFC) task without feedback. This was followed by a training phase and practice trials with feedback. Finally, participants completed a post-training 2-AFC sex recognition test and a novel object memory test (NOMT; Richler et al., 2017) to assess general object recognition ability. Pre-test results were consistent with chance-level performance, as expected (M=52% for Experts and M=52% for Non-experts). Post-test performance was significantly improved for Experts, M=66.1% (d=2.38, p<<0.01), and for Non-experts, M=66.2% (d=1.67, p<<0.01). Performance on the NOMT was not related to improvement in fundus classification. Together, these results demonstrate that diagnosticians can be trained to recognize novel retinal features suggested by artificial intelligence.
This approach can be extended in future work to discover signs of systemic and neurodegenerative disease in retinal images.
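The pre- to post-training improvement reported above is summarized with Cohen's d. As a minimal sketch, assuming a paired-samples formulation (mean per-subject improvement divided by the standard deviation of those improvements), the computation looks like the following; the accuracy values are illustrative placeholders, not the study's data.

```python
import statistics

def cohens_d_paired(pre, post):
    """Paired-samples Cohen's d: mean per-subject improvement
    divided by the standard deviation of the improvements."""
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

# Hypothetical 2-AFC proportions correct before and after training,
# for illustration only.
pre = [0.50, 0.52, 0.48, 0.55, 0.51]
post = [0.64, 0.68, 0.62, 0.70, 0.66]
d = cohens_d_paired(pre, post)
```

A consistent per-subject gain relative to its variability yields a large d, which is why modest absolute improvements (chance to ~66%) can correspond to effect sizes well above 1.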
