October 2020, Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
A real-time model of retinal stimulation in virtual environments
Author Affiliations & Notes
  • Daniel Panfili
    University of Texas at Austin
  • Karl Muller
    University of Texas at Austin
  • Mary Hayhoe
    University of Texas at Austin
  • Footnotes
    Acknowledgements: NIH Grant EY05729
Journal of Vision October 2020, Vol.20, 1566. doi:https://doi.org/10.1167/jov.20.11.1566
Abstract

The experimental control granted by virtual reality (VR) allows investigation of complex behaviors involving naturalistic stimuli. VR engines provide direct access to the images viewed by subjects, facilitating analyses of image properties that are often difficult to extract from real scenes. For example, current computer-vision algorithms for estimating optic flow are often inaccurate in complex scenes with significant depth variation. We have developed a prototype model of the eye that allows real-time recording of the visual stimuli projected onto the retina from the virtual environment. The Panfili Functional Eye (PFE) model uses real-time ray tracing to compute the optic flow stimuli rendered by the VR engine. The PFE uses a pinhole model of the eye augmented with refraction, with input parameters designating resolution, field of view, and movement method. An array of virtual photoreceptors is generated along the surface of a virtual retina. Each virtual photoreceptor casts a ray through the pupil, where refraction is applied using Snell's law. These rays are then cast out into the virtual environment, returning information such as the world position of the intersected point, its surface normal, and its coordinates on the object's UV/lightmap. The primary goal of the model is to describe the geometric projection of the virtual environment onto the retina in real time. High-fidelity, low-latency retinal modeling has not previously been possible due to the technical limitations of ray tracing. The model runs up to 150 times faster than comparable methods, a speedup that should increase substantially with parallel processing. The PFE is modular, allowing the incorporation of more complex optical models, the simulation of eye conditions, and the analysis of other visual features. Our current application uses the model to compute optic flow patterns of experimental stimuli, contingent on the direction of gaze, while subjects walk freely in a virtual environment.
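
As a rough illustration of the per-photoreceptor geometry described above, the Python sketch below builds the world-bound ray for a single virtual photoreceptor: it aims the ray at the pupil, refracts it with Snell's law at a single pupil-plane surface, and returns an origin/direction pair that a VR engine's ray caster could consume. This is a minimal sketch under stated assumptions, not the PFE implementation; the helper names, the single flat refracting surface, the +z optical axis, and the refractive indices (vitreous ~1.336 into air) are illustrative choices.

    import numpy as np

    def refract(direction, normal, n1, n2):
        # Vector form of Snell's law for unit vectors. 'direction' is the
        # incident ray; 'normal' faces against it (so cos_i > 0).
        # Returns None on total internal reflection.
        cos_i = -np.dot(normal, direction)
        ratio = n1 / n2
        sin2_t = ratio**2 * (1.0 - cos_i**2)
        if sin2_t > 1.0:
            return None  # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return ratio * direction + (ratio * cos_i - cos_t) * normal

    def retina_ray(photoreceptor, pupil_center=np.zeros(3),
                   n_eye=1.336, n_air=1.0):
        # Aim the photoreceptor's ray at the pupil center, refract it at an
        # assumed flat pupil-plane surface, and return (origin, direction)
        # for the engine's ray cast into the virtual scene.
        d = pupil_center - photoreceptor
        d = d / np.linalg.norm(d)
        # Optical axis along +z; the surface normal faces the retina.
        pupil_normal = np.array([0.0, 0.0, -1.0])
        out_dir = refract(d, pupil_normal, n_eye, n_air)
        return pupil_center, out_dir

    # Example: a photoreceptor 17 mm behind the pupil, 1 mm off-axis.
    origin, direction = retina_ray(np.array([0.001, 0.0, -0.017]))

In the full model, each such ray would be handed to the engine's ray caster, which returns the hit point's world position, surface normal, and UV/lightmap coordinates; tracking how each photoreceptor's hit point moves from frame to frame is one way the gaze-contingent optic flow described in the abstract could then be recovered.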
