April 2025
Volume 25, Issue 5
Open Access
Optica Fall Vision Meeting Abstract  |   April 2025
Contributed Talks I: Detecting and characterising microsaccades from AOSLO images of the photoreceptor mosaic using computer vision
Author Affiliations
  • Maria Villamil
    University of Oxford
  • Allie C. Schneider
    University of Oxford
  • Jiahe Cui
    University of Oxford
  • Laura K. Young
    Newcastle University
  • Hannah E. Smithson
    University of Oxford
Journal of Vision April 2025, Vol. 25, 5. doi: https://doi.org/10.1167/jov.25.5.5
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Fixational eye movements (FEMs), especially microsaccades (MS), are promising biomarkers of neurodegenerative disease. In vivo images of the photoreceptor mosaic acquired using an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) are systematically distorted by eye motion. Most methods for extracting FEMs from AOSLO data rely on comparison to a motion-free reference, giving eye position as a function of time; MS are subsequently identified using adaptive velocity thresholds (Engbert & Kliegl, 2003). We instead use computer vision and machine learning (ML) to detect and characterise MS directly from raw AOSLO images. For training and validation, we use Emulated Retinal Image CApture (ERICA), an open-source tool that generates synthetic AOSLO datasets of retinal images with ground-truth velocity profiles (Young & Smithson, 2021). To classify regions of AOSLO images that contain a MS, images were divided into a grid of 32-by-32-pixel sub-images. Predictions from rows of sub-images aligned with the fast-scan direction of the AOSLO were combined, giving 1 ms temporal resolution. Model performance was high (F1 scores >0.92) across plausible MS displacement magnitudes and angles, with most errors occurring close to the velocity threshold for classification. Direct velocity predictions were also derived from regression ML models. We show that ML models can be systematically adapted for generalisation to real in vivo images, allowing characterisation of MS at much finer spatial scales than video-based eye-trackers.
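The adaptive velocity-threshold step cited above (Engbert & Kliegl, 2003) sets a detection threshold from a median-based estimate of the velocity noise on each axis, then flags runs of samples whose 2-D velocity exceeds an elliptic threshold. The following is a minimal sketch of that step only (not the authors' ML pipeline), assuming horizontal/vertical position traces in degrees sampled at a fixed rate; the function name and parameter defaults are illustrative.

```python
import numpy as np

def detect_microsaccades(x, y, fs=1000.0, lam=6.0, min_dur=3):
    """Adaptive velocity-threshold microsaccade detection,
    after Engbert & Kliegl (2003).
    x, y    : eye-position traces (deg)
    fs      : sampling rate (Hz)
    lam     : threshold multiplier (lambda)
    min_dur : minimum event duration in samples
    Returns a list of (start, end) sample indices."""

    def velocity(p):
        # Smoothed velocity via a 5-point moving difference
        v = np.zeros_like(p)
        v[2:-2] = (p[4:] + p[3:-1] - p[1:-3] - p[:-4]) * fs / 6.0
        return v

    vx, vy = velocity(x), velocity(y)

    # Median-based (outlier-robust) noise estimate per axis
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)

    # Elliptic threshold test in velocity space
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0

    # Group consecutive supra-threshold samples; keep runs >= min_dur
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_dur:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_dur:
        events.append((start, len(above) - 1))
    return events
```

A reference-based AOSLO pipeline would apply this to the recovered eye-position trace; the ML approach described in the abstract replaces it with classification of 32-by-32-pixel sub-images, combined row-wise along the fast scan.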

Footnotes
 Funding: None