September 2023
Volume 23, Issue 11
Open Access
Optica Fall Vision Meeting Abstract
Contributed Session I: Towards General Video Percepts Cone-by-Cone
Author Affiliations
  • Congli Wang
    University of California, Berkeley
  • James Fong
    University of California, Berkeley
  • Hannah K. Doyle
    University of California, Berkeley
  • Sofie R. Herbeck
    University of California, Berkeley
  • Jeffrey Tan
    University of California, Berkeley
  • Austin Roorda
    University of California, Berkeley
  • Ren Ng
    University of California, Berkeley
Journal of Vision September 2023, Vol. 23, 8.

Congli Wang, James Fong, Hannah K. Doyle, Sofie R. Herbeck, Jeffrey Tan, Austin Roorda, Ren Ng; Contributed Session I: Towards General Video Percepts Cone-by-Cone. Journal of Vision 2023;23(11):8.


      © ARVO (1962-2015); The Authors (2016-present)


We aim to reprogram visual perception through an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) Display, using a GPU renderer that rasterizes a target color image or video into cone-by-cone single-wavelength laser light pulses ("microdoses"). We imaged and tracked a 0.9° x 0.9° field of view of the retina at ~2° eccentricity using 840 nm light. Stimulating at 543 nm, we deliver microdoses of varying intensity to every resolved, spectrally classified cone. For each AOSLO frame (30 frames/s), the renderer updates an underlying stimulation image buffer encoding a desired color percept pattern, taking into account the cone locations, each cone's spectral sensitivity to the 543 nm stimulation light, and the corresponding color percept pixel values. Within each frame, the buffer is pixelated strip by strip at 1 kHz into actual world-fixed microdose intensity values, each centered on a cone within that strip at that instant. The resulting frame of microdoses visually occupies the whole raster view. We presented multiple color percepts to a cone-classified subject while logging stimulation data. The subject saw spatially varying colors, e.g. a red box moving on a green canvas; these percepts validated the accuracy of the prototype. These initial prototyping experiments point to the potential of presenting general percepts to a cone-classified subject, at cone-level accuracy, in a fully programmable way. The technology allows us to probe neural plasticity and work toward generating novel percepts.
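The per-cone rendering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sensitivity values, function names, and channel mapping are all hypothetical, and it only shows the core idea of scaling each cone's desired percept value by that cone class's sensitivity to the single 543 nm stimulation wavelength.

```python
import numpy as np

# Hypothetical relative sensitivities of L/M/S cone classes to 543 nm light
# (illustrative numbers only, not measured values from the study).
SENSITIVITY_543NM = {"L": 0.85, "M": 1.0, "S": 0.02}

def render_microdoses(target_rgb, cone_xy, cone_classes):
    """Compute one microdose intensity per tracked cone.

    target_rgb   : (H, W, 3) float array, desired percept image in [0, 1]
    cone_xy      : list of (x, y) cone positions in image pixel coordinates
    cone_classes : list of "L"/"M"/"S" labels from cone classification
    """
    h, w, _ = target_rgb.shape
    channel = {"L": 0, "M": 1, "S": 2}  # map cone class to a color channel
    intensities = np.empty(len(cone_xy))
    for i, ((x, y), cls) in enumerate(zip(cone_xy, cone_classes)):
        # Sample the desired percept at this cone's location.
        px = target_rgb[min(int(y), h - 1), min(int(x), w - 1)]
        # Normalize the desired drive by how strongly 543 nm light
        # stimulates this cone class, then clip to the laser's range.
        intensities[i] = px[channel[cls]] / SENSITIVITY_543NM[cls]
    return np.clip(intensities, 0.0, 1.0)
```

In the actual system this computation would run strip by strip at 1 kHz against live eye-tracking data, so each microdose stays centered on its cone as the retina moves; the sketch omits tracking and timing entirely.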

Funding: This work was supported by the Air Force Office of Scientific Research under award numbers FA9550-20-1-0195 and FA9550-21-1-0230.
