Abstract
We present a new AI-based methodology for classifying sex from fovea shape features extracted from OCT scans, with the aim of understanding foveal variability between males and females. Deep neural networks have previously been used to classify sex from retinal images such as colour fundus photographs and OCT B-scans. It is possible to obtain heatmaps of the regions of the retinal layers that are responsible for the network's decision; however, identifying the exact set of features that influence these networks remains challenging. Our methodology leverages AI to extract more than 50 foveal features and train machine learning classifiers, following these steps: 1) segmentation of 4000 OCT scans of good image quality from selected healthy controls (no eye disorders) in the UK Biobank; 2) feature extraction on the segmented layer boundaries, from four commonly used features (fovea pit diameter, depth, nasal and temporal slopes) to an extended set comprising the above for all boundaries together with each layer's thickness; 3) training machine learning classifiers and ranking the distinct retinal features by importance. The performance of the top classifiers improved from 0.55 ROC, slightly better than random chance, with the 4 initial features, to 0.65 ROC with 49 features, on a single B-scan segmentation. These results highlight the promise of our method in applying AI to the discovery of meaningful retinal biomarkers and to the analysis of foveal shape morphology by identifying appropriate foveal features.
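The classifier-training and feature-ranking step (step 3) can be sketched as follows. This is a minimal illustration only: the feature names, the synthetic data, and the random-forest model below are assumptions for demonstration, not the UK Biobank data or the exact classifiers used in the study.

```python
# Sketch of step 3: train a classifier on tabular foveal features,
# score it with ROC AUC, and rank features by importance.
# All data here is synthetic; feature names are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4000  # same order of magnitude as the 4000 segmented scans
feature_names = ["pit_diameter", "pit_depth", "nasal_slope", "temporal_slope"]
X = rng.normal(size=(n, len(feature_names)))
# Synthetic binary labels weakly correlated with the first feature,
# mimicking a subtle sex signal in one foveal measurement.
y = (X[:, 0] + rng.normal(scale=2.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Evaluate on the held-out set and rank features by impurity-based importance.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
ranking = sorted(zip(feature_names, clf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print(f"ROC AUC: {auc:.2f}")
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```

Extending from the 4 initial features to the full extended set would simply mean widening `X` and `feature_names`; the training and ranking code is unchanged.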
Funding: Neuroscience Fund, Centre for Transformative Neuroscience (NUCoRE), Newcastle University