August 2023 · Volume 23, Issue 9 · Open Access
Vision Sciences Society Annual Meeting Abstract
Reconstructing visual experience from a large-scale biologically realistic model of mouse primary visual cortex
Author Affiliations & Notes
  • Reza Abbasi-Asl
  • Yizhou Chi
  • Huibo Yang
  • Kael Dai
    Allen Institute for Brain Sciences
  • Anton Arkhipov
    Allen Institute for Brain Sciences
  • Footnotes
    Acknowledgements  Reza Abbasi-Asl was supported by the National Institute of Mental Health of the National Institutes of Health under award number RF1MH128672.
Journal of Vision August 2023, Vol.23, 5887. doi:

      Reza Abbasi-Asl, Yizhou Chi, Huibo Yang, Kael Dai, Anton Arkhipov; Reconstructing visual experience from a large-scale biologically realistic model of mouse primary visual cortex. Journal of Vision 2023;23(9):5887.

      © ARVO (1962-2015); The Authors (2016-present)

Decoding visual stimuli from large-scale recordings of neurons in the visual cortex is key to understanding visual processing in the brain and could lay the groundwork for successful brain-computer interfaces. Data-driven development of a comprehensive decoder requires simultaneous measurements from hundreds of thousands of neurons responding to a large number of image stimuli. Recording this volume of simultaneous neural data at high temporal resolution is extremely challenging with current neural recording technologies. Here, we leverage a large-scale, biologically realistic model of the visual cortex to investigate neural responses and reconstruct visual experience. We used a biophysical model of the mouse primary visual cortex (V1) comprising 230,000 neurons across 17 cell types. With this model, we simulated simultaneous neural responses to 80,000 natural images. We then developed a computational framework that reconstructs the visual stimuli with plausible geometric information and semantic detail. The framework is based on a conditional generative adversarial architecture that learns a self-supervised representation of the mouse V1 neuronal responses, paired with a generative model that reconstructs the stimulus images from the latent space of the model. To build this latent space, we trained a discriminator to judge whether the representation of the V1 neuronal responses matches the stimulus images, while a continually updated generator learns to reconstruct geometrically interpretable images. Our framework generates stimulus images with high reconstruction accuracy and could eventually be tested on real neuronal responses from the mouse visual cortex.
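The data flow of such an adversarial reconstruction pipeline can be sketched as below. This is a minimal illustrative skeleton, not the authors' implementation: all dimensions, weight matrices, and function names are hypothetical placeholders (the abstract specifies none of them), and randomly initialized weights stand in for trained parameters. It shows only the three roles described above: an encoder mapping simulated V1 responses to a latent code, a generator mapping the latent code to a reconstructed image, and a discriminator scoring whether a (response representation, image) pair matches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative sizes (stand-ins for the 230,000-neuron
# model and full-resolution natural images):
N_NEURONS = 512
LATENT_DIM = 64
IMG_SIDE = 16

def encode_responses(responses, W_enc):
    """Encoder: map simulated V1 responses to a latent representation."""
    return np.tanh(responses @ W_enc)

def generate_image(latent, W_gen):
    """Generator: reconstruct a stimulus image from the latent code."""
    flat = np.tanh(latent @ W_gen)
    return flat.reshape(-1, IMG_SIDE, IMG_SIDE)

def discriminate(latent, images, W_disc):
    """Discriminator: score whether a (response representation, image)
    pair matches, as a probability via a logistic output."""
    feats = np.concatenate([latent, images.reshape(len(images), -1)], axis=1)
    return 1.0 / (1.0 + np.exp(-(feats @ W_disc)))

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(0.0, 0.05, (N_NEURONS, LATENT_DIM))
W_gen = rng.normal(0.0, 0.05, (LATENT_DIM, IMG_SIDE * IMG_SIDE))
W_disc = rng.normal(0.0, 0.05, (LATENT_DIM + IMG_SIDE * IMG_SIDE,))

# Fake spike counts play the role of the simulated V1 responses.
responses = rng.poisson(2.0, (8, N_NEURONS)).astype(float)
z = encode_responses(responses, W_enc)
reconstruction = generate_image(z, W_gen)
match_score = discriminate(z, reconstruction, W_disc)
```

In training, the discriminator's match scores on true versus mismatched pairs would drive updates to both the encoder and the generator; that adversarial loop is omitted here for brevity.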

