Aaron Berk, Gulcenur Ozturan, David Maberley, Özgür Yılmaz, Ipek Oruc; Learning from few examples: Classifying sex from retinal images. Journal of Vision 2020;20(11):255. doi: https://doi.org/10.1167/jov.20.11.255.
Deep learning (DL) techniques have seen tremendous interest in medical imaging, particularly in the use of convolutional neural networks (CNNs) for the development of automated diagnostic tools. The ease of its non-invasive acquisition makes retinal fundus imaging particularly amenable to such automated approaches. Recent work on the analysis of fundus images using CNNs relies on access to massive datasets for training and validation, composed of hundreds of thousands of images. However, data residency and data privacy restrictions stymie the applicability of this approach in medical settings where patient confidentiality is a mandate. Here, we showcase preliminary results on the performance of DL on small datasets for classifying patient sex from fundus images, a trait not thought to be present or quantifiable in fundus images until recently. Specifically, we fine-tune a ResNet-152 model whose last layer has been replaced with a fully-connected layer for two-class classification. We use stochastic gradient descent to train the model on 1706 retinal fundus images from 853 patients of known sex. The trained model achieves a test accuracy of 65% and an area under the receiver operating characteristic curve of 0.668. In addition, we analyze ensembles of such neural networks, examining how both the ensembling method and the number of models in the ensemble affect classification performance. These results highlight the feasibility of DL methods when data is a limiting factor for automated analysis, and suggest a simple pipeline accessible to non-expert practitioners of DL.