Journal of Vision
April 2025
Volume 25, Issue 5
Open Access
Optica Fall Vision Meeting Abstract  |   April 2025
Poster Session: Leveraging AI to classify sex based on fovea shape features
Author Affiliations
  • Knectt Lendoye
    Newcastle University
  • Raheleh Kafieh
    Durham University
  • David Steel
    Newcastle University
  • Christian Taylor
    Newcastle University
  • Dexter Canoy
    Newcastle University
  • Jaume Bacardit
    Newcastle University
  • Anya Hurlbert
    Newcastle University
Journal of Vision April 2025, Vol.25, 19. doi:https://doi.org/10.1167/jov.25.5.19
© ARVO (1962-2015); The Authors (2016-present)
Abstract

We present a new AI-based methodology to classify sex from fovea shape features extracted from OCT scans, with the aim of understanding foveal variability between males and females. Deep neural networks have previously been used to classify sex from retinal images such as colour fundus photographs and OCT B-scans, and heatmaps can be obtained showing the regions of the retinal layers responsible for the network's decision. However, identifying the exact set of features that influence these networks remains challenging. Our methodology leverages AI to extract more than 50 foveal features and to train machine learning classifiers, following these steps: 1) segmentation of 4000 OCT scans of good image quality from selected healthy controls (no eye disorders) in the UK Biobank; 2) feature extraction from the segmented layer boundaries, ranging from four commonly used features (foveal pit diameter, depth, and nasal and temporal slopes) to an extended set comprising these features for all boundaries plus the thickness of each layer; 3) training machine learning classifiers and ranking the distinct retinal features by importance. The performance of the top classifiers rose from an ROC of 0.55, only slightly better than chance, with the 4 initial features, to an ROC of 0.65 with 49 features, on a single B-scan segmentation. These results highlight the promise of our method in applying AI to discover meaningful retinal biomarkers and to analyse foveal shape morphology by identifying appropriate foveal features.
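Step 3 of the pipeline (training classifiers on extracted foveal features and ranking features by importance) can be sketched as follows. This is a minimal illustration only: the feature names, synthetic data, and choice of a random-forest classifier with scikit-learn are assumptions for demonstration, not the actual UK Biobank features, sample sizes, or models used in the study.

```python
# Hypothetical sketch: train a classifier on foveal shape features and
# rank them by importance. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Illustrative stand-ins for the four commonly used foveal features
feature_names = ["pit_diameter", "pit_depth", "nasal_slope", "temporal_slope"]
X = rng.normal(size=(n, len(feature_names)))
# Synthetic binary labels weakly linked to one feature, mimicking a weak signal
y = (X[:, 1] + rng.normal(scale=3.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Rank features by the classifier's impurity-based importances
ranking = sorted(zip(feature_names, clf.feature_importances_),
                 key=lambda kv: kv[1], reverse=True)
print(f"ROC AUC: {auc:.2f}")
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```

With the full extended feature set, the same loop would simply be run over the ~50-feature matrix instead of these four columns.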

Footnotes
 Funding: Neuroscience Fund, Centre for Transformative Neuroscience (NUCoRE), Newcastle University