Abstract
Decoding visual stimuli from large-scale recordings of neurons in the visual cortex is key to understanding visual processing in the brain and could lay the groundwork for successful brain-computer interfaces. Data-driven development of a comprehensive decoder requires simultaneous measurements from hundreds of thousands of neurons in response to a large number of image stimuli. Acquiring this amount of simultaneous neural data at high temporal resolution is extremely challenging with current neural recording technologies. Here, we leverage a large-scale, biologically realistic model of the visual cortex to investigate neural responses and reconstruct visual experience. We used a biophysical model of the mouse primary visual cortex (V1) comprising 230,000 neurons of 17 different cell types. With this model, we simulated simultaneous neural responses to 80,000 natural images. We then developed a computational framework that reconstructs the visual stimuli with plausible geometric structure and semantic detail. The framework is built on a conditional generative adversarial architecture that learns a self-supervised representation of the mouse V1 neuronal responses, paired with a generative model that reconstructs the stimulus images from this latent space. To build the latent space, we trained a decoder to discriminate whether a representation of the V1 neuronal responses matches the corresponding stimulus image, while a continually updated generator learns to reconstruct geometrically interpretable images. Our framework generates stimulus images with high reconstruction accuracy and could eventually be tested on real neuronal responses from the mouse visual cortex.
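To make the conditional adversarial setup described above concrete, the following is a minimal, hypothetical PyTorch sketch of one training step: an encoder embeds a population response vector, a generator reconstructs the stimulus from that embedding, and a discriminator (the "decoder" in the abstract's wording) scores whether a (response representation, image) pair is a matching pair. All module names, layer sizes, loss weights, and the toy neuron count are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

N_NEURONS = 2_000   # toy subsample; the full V1 model has ~230,000 neurons
LATENT_DIM, IMG_SIZE = 256, 64

# Encoder: maps a population response vector to a latent representation.
encoder = nn.Sequential(
    nn.Linear(N_NEURONS, 1024), nn.ReLU(),
    nn.Linear(1024, LATENT_DIM),
)

# Generator: reconstructs a stimulus image from the latent representation.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, IMG_SIZE * IMG_SIZE), nn.Tanh(),
    nn.Unflatten(1, (1, IMG_SIZE, IMG_SIZE)),
)

# Discriminator: scores whether a (latent representation, image) pair matches.
discriminator = nn.Sequential(
    nn.Linear(LATENT_DIM + IMG_SIZE * IMG_SIZE, 512), nn.ReLU(),
    nn.Linear(512, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(
    list(encoder.parameters()) + list(generator.parameters()), lr=2e-4
)

def train_step(responses, images):
    """One adversarial update on a batch of (V1 responses, stimulus images)."""
    z = encoder(responses)   # latent representation of the neural responses
    fake = generator(z)      # reconstructed stimulus images

    def score(imgs):
        # Concatenate the (detached) response representation with the image.
        return discriminator(torch.cat([z.detach(), imgs.flatten(1)], dim=1))

    # Discriminator: matched real pairs -> 1, reconstructed pairs -> 0.
    d_loss = bce(score(images), torch.ones(len(images), 1)) + \
             bce(score(fake.detach()), torch.zeros(len(images), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator/encoder: fool the discriminator and match the true stimulus.
    g_adv = bce(discriminator(torch.cat([z, fake.flatten(1)], dim=1)),
                torch.ones(len(images), 1))
    g_rec = nn.functional.l1_loss(fake, images)  # pixel-level reconstruction term
    g_loss = g_adv + 10.0 * g_rec                # weight 10.0 is an assumption
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random data standing in for simulated responses and stimuli.
responses = torch.randn(8, N_NEURONS)
images = torch.rand(8, 1, IMG_SIZE, IMG_SIZE) * 2 - 1  # in [-1, 1] to match Tanh
print(train_step(responses, images))

The reconstruction term anchors the generator to the ground-truth stimulus while the adversarial term pushes the outputs toward the natural-image manifold, which is one plausible reading of how the framework could produce both geometrically faithful and semantically detailed reconstructions.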