October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Geometric model of the determinants of retinal flow during natural viewing
Author Affiliations & Notes
  • Paul MacNeilage
    University of Nevada, Reno
  • Christian Sinnott
    University of Nevada, Reno
  • Peter Hausamann
    Technical University of Munich
  • Footnotes
    Acknowledgements  Research was supported by NIGMS of NIH under grant number P20 GM103650 and by NSF under grant number OIA-1920896.
Journal of Vision, October 2020, Vol. 20, 1492. doi: https://doi.org/10.1167/jov.20.11.1492
      Paul MacNeilage, Christian Sinnott, Peter Hausamann; Geometric model of the determinants of retinal flow during natural viewing. Journal of Vision 2020;20(11):1492. doi: https://doi.org/10.1167/jov.20.11.1492.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Visual motion at the retina is driven predominantly by movement of the eye relative to the stationary environment and depends on the distance from the eye to the nearest environmental surface. To better understand how retinal motion depends on eye motion and scene structure, we developed a geometric model. Movement of the eye in space is modeled as the sum of head-in-space and eye-in-head motion. As input, the model takes 3DOF linear and angular head velocity, 2DOF head orientation relative to gravity, and 2DOF eye-in-head position. The model makes two important assumptions: 1) compensatory eye movements (i.e., the vestibulo-ocular reflex) work to cancel head-generated flow at the fovea, and 2) the environment consists of an earth-horizontal ground plane, such that the distance from the eye to the nearest surface is completely specified by eye height, head orientation relative to gravity, and eye-in-head position. To generate predictions using the model, human subjects walked around campus while we recorded head velocity in space and head orientation relative to gravity using an Intel RealSense tracking camera (T265), as well as eye-in-head position using a Pupil Labs Core binocular eye tracker. The model predicts that retinal flow is driven strongly by linear head velocity, with faster motion in the lower visual field (due to the orientation of the eye relative to the ground plane) and toward the retinal periphery (due to the effect of stabilizing eye movements). We compare these predictions with an approximate reconstruction of retinal flow based on a sub-sampling of the eye tracker's world video centered on the gaze point, and we discuss factors responsible for deviations of the model prediction from this reconstruction.
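The abstract does not give the model's equations. The sketch below illustrates the two stated assumptions (foveal stabilization and an earth-horizontal ground plane) using the standard Longuet-Higgins and Prazdny (1980) retinal-flow equations, which may differ from the authors' exact formulation. All function names and parameter values (eye height, gaze pitch, walking velocity) are hypothetical, chosen only to illustrate the predicted pattern: faster flow in the lower visual field and toward the periphery.

```python
import numpy as np

def ground_plane_depth(x, y, eye_height, pitch):
    """Depth along the optical axis from the eye to an earth-horizontal
    ground plane, for normalized image coordinates (x right, y down,
    focal length = 1). `pitch` is the eye's downward pitch in radians.
    Rays at or above the horizon never reach the plane (depth = inf)."""
    down = y * np.cos(pitch) + np.sin(pitch)  # downward component of ray (x, y, 1)
    return np.where(down > 1e-9, eye_height / np.maximum(down, 1e-9), np.inf)

def retinal_flow(x, y, Z, T, omega):
    """Longuet-Higgins & Prazdny flow for eye translation T = (Tx, Ty, Tz)
    and rotation omega = (wx, wy, wz), both in eye coordinates."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (x * Tz - Tx) / Z - wy + wz * y + wx * x * y - wy * x**2
    v = (y * Tz - Ty) / Z + wx - wz * x + wx * y**2 - wy * x * y
    return u, v

def stabilizing_rotation(T, fixation_depth):
    """Assumption 1: compensatory (VOR-like) rotation chosen so that
    head-generated flow is cancelled at the fovea (x = y = 0)."""
    Tx, Ty, _ = T
    return (Ty / fixation_depth, -Tx / fixation_depth, 0.0)

# Hypothetical walking parameters: 1.6 m eye height, 10 deg downward gaze,
# forward translation with a little lateral and vertical sway (m/s).
eye_height, pitch = 1.6, np.deg2rad(10.0)
T = (0.2, 0.1, 1.4)
xs = np.linspace(-0.5, 0.5, 21)
X, Y = np.meshgrid(xs, xs)
Z = ground_plane_depth(X, Y, eye_height, pitch)          # assumption 2
Z0 = float(ground_plane_depth(0.0, 0.0, eye_height, pitch))
omega = stabilizing_rotation(T, Z0)                      # assumption 1
u, v = retinal_flow(X, Y, Z, T, omega)
speed = np.hypot(u, v)
# speed is zero at the fovea and largest in the lower visual field (y > 0,
# image y points down), where the ground plane is nearest the eye.
```

Because the ground plane fixes depth as a closed-form function of eye height and pitch, the whole flow field follows from the head and eye kinematics the abstract lists as model inputs; no depth sensing is required.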
