December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Measuring and modelling fixational eye movements
Author Affiliations & Notes
  • Allie C. Hexley
    University of Oxford
  • Laura K. Young
    Newcastle University
  • Tom C. B. McLeish
    University of York
  • Hannah E. Smithson
    University of Oxford
  • Footnotes
    Acknowledgements  This project has received funding from the EPSRC (EP/W004534/1); the Leverhulme Trust (VP1-2019-057); a Reece Foundation Fellowship in Translational Systems Neuroscience (Newcastle University) and a UKRI Future Leaders Fellowship (MR/T042192/1).
Journal of Vision December 2022, Vol.22, 4304. doi:https://doi.org/10.1167/jov.22.14.4304
Allie C. Hexley, Laura K. Young, Tom C. B. McLeish, Hannah E. Smithson; Measuring and modelling fixational eye movements. Journal of Vision 2022;22(14):4304. https://doi.org/10.1167/jov.22.14.4304.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Introduction: Fixational eye movements (FEMs) comprise periods of drift (slow, meandering motion), with superimposed tremor (fast oscillations), interrupted by microsaccades (fast, jump-like movements). Here, we record FEMs at high spatial and temporal resolution using an adaptive optics scanning laser ophthalmoscope (AOSLO). We use these data to develop models of FEMs, with emphasis on characterising the drift component.

Methods: From each of 10 participants, we recorded 50 two-second AOSLO movies during foveal fixation. FEM traces were extracted in post-processing, relative to separately collected retinal image montages. Within each trace we classified periods of microsaccades, of drift with superimposed tremor, and of tracking failure, using both existing and new automated techniques. We validated each technique against ground-truth data generated with an AOSLO simulator (ERICA; Young and Smithson, 2021) and against manual labelling. Drift periods of 500 ms were isolated from each trace and evaluated against candidate models drawn from the random walk models common in the statistical (especially polymer) physics literature. Different models capture different behaviours, such as persistence or anti-persistence, self-avoidance, and bounding or localisation. Such differences in behaviour can characterise either drift under varying conditions or different timescales of the same drift response. Candidate models were evaluated using diagnostic plots, such as the autocorrelation function and log-log plots of mean-squared displacement against time lag.

Results: Diagnostic plots showed high levels of consistency across participants once microsaccades were removed. There were individual differences in mean drift velocity, but random walk characteristics were largely preserved across participants. These characteristics are poorly fitted by existing models, and their extraction depends on reliable microsaccade detection.
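The classification of microsaccades within a trace, as described in the Methods, is commonly done with a velocity threshold. The sketch below is an illustrative implementation in the style of Engbert and Kliegl's (2003) median-based velocity criterion, not the authors' actual pipeline; the threshold multiplier `lam`, the sampling interval, and the simulated demo trace are all assumptions for illustration.

```python
import numpy as np

def detect_microsaccades(trace, dt, lam=6.0):
    """Flag samples whose eye velocity exceeds a median-based threshold,
    in the style of Engbert & Kliegl (2003).
    trace: (N, 2) array of eye positions sampled uniformly at interval dt.
    Returns a boolean mask of length N (True = microsaccade sample)."""
    # Smoothed velocity: central difference over a 5-sample window,
    # v_n = (x_{n+2} + x_{n+1} - x_{n-1} - x_{n-2}) / (6 * dt).
    v = (trace[4:] + trace[3:-1] - trace[1:-3] - trace[:-4]) / (6 * dt)
    # Robust (median-based) estimate of the velocity SD per axis.
    sigma = np.sqrt(np.median(v**2, axis=0) - np.median(v, axis=0)**2)
    eta = lam * sigma
    # Elliptic criterion: flag samples whose normalised speed exceeds 1.
    mask = np.sum((v / eta)**2, axis=1) > 1.0
    # Pad back to the original length (window edges cannot be classified).
    full = np.zeros(len(trace), dtype=bool)
    full[2:-2] = mask
    return full

# Demo: slow drift plus one injected fast jump around sample 500
# (hypothetical parameters, 1 kHz sampling assumed).
rng = np.random.default_rng(1)
steps = rng.normal(scale=0.01, size=(1000, 2))
steps[500:510, 0] += 1.0          # simulated microsaccade
trace = np.cumsum(steps, axis=0)
flags = detect_microsaccades(trace, dt=1e-3)
```

The median-based spread estimate makes the threshold robust to the microsaccades themselves, which would inflate an ordinary standard deviation.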
Conclusions: We analysed FEMs against different random walk models to characterise drift, and we report the model that best fits the new data. Improved methods for extracting high-resolution drift traces from AOSLO recordings are important in delivering data that can discriminate candidate models of ocular drift.
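The log-log mean-squared-displacement diagnostic used to compare random walk models can be illustrated with a short sketch. This is hypothetical code, not the authors' analysis; the sampling interval, lag range, and the simulated Brownian trace are assumptions. The fitted slope is the scaling exponent: roughly 1 for ordinary Brownian drift, above 1 for persistent motion, below 1 for anti-persistent (bounded) motion.

```python
import numpy as np

def mean_squared_displacement(trace, max_lag):
    """MSD(tau) = <|x(t+tau) - x(t)|^2>, averaged over all start times.
    trace: (N, 2) array of eye positions sampled uniformly."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = trace[lag:] - trace[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd

def scaling_exponent(msd, dt):
    """Slope of log MSD against log time-lag."""
    lags = dt * np.arange(1, len(msd) + 1)
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope

# Demo on a simulated 2-D Brownian walk, where the exponent
# should come out near 1 (1 kHz sampling assumed).
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(2000, 2)), axis=0)
msd = mean_squared_displacement(walk, max_lag=100)
alpha = scaling_exponent(msd, dt=1e-3)
```

In practice such a fit would be applied to the isolated 500 ms drift segments, after microsaccade removal, which is why the abstract notes that the extracted characteristics depend on reliable microsaccade detection.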
