October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Learning from few examples: Classifying sex from retinal images
Author Affiliations
  • Aaron Berk
    University of British Columbia
  • Gulcenur Ozturan
    Istanbul Okmeydanı Training and Research Hospital
  • David Maberley
    University of British Columbia
  • Özgür Yılmaz
    University of British Columbia
  • Ipek Oruc
    University of British Columbia
Journal of Vision October 2020, Vol.20, 255. doi:https://doi.org/10.1167/jov.20.11.255
Abstract

Deep learning (DL) techniques have seen tremendous interest in medical imaging, particularly the use of convolutional neural networks (CNNs) for the development of automated diagnostic tools. Because fundus images can be acquired non-invasively, retinal imaging is particularly amenable to such automated approaches. Recent work on the analysis of fundus images using CNNs relies on access to massive datasets for training and validation, composed of hundreds of thousands of images. However, data-residency and data-privacy restrictions stymie the applicability of this approach in medical settings where patient confidentiality is a mandate. Here, we showcase preliminary results on the performance of DL with small datasets for classifying patient sex from fundus images, a trait thought not to be present or quantifiable in fundus images until recently. Specifically, we fine-tune a ResNet-152 model whose last layer has been replaced with a fully-connected layer for two-class classification. We use stochastic gradient descent to train the model on 1706 retinal fundus images from 853 patients of known sex. The trained model achieves a test accuracy of 65% and an area under the receiver operating characteristic curve of 0.668. In addition, we analyze ensembles of such neural networks, examining how both the ensembling method and the number of models in the ensemble affect classification performance. These results highlight the usability and feasibility of DL methods when data are a limiting factor for automated analysis, and suggest a simple pipeline accessible to non-expert practitioners of DL.
