Vision Sciences Society Annual Meeting Abstract  |   September 2019
Generative adversarial networks can visualize information encoded by neurons
Author Affiliations & Notes
  • Katerina Malakhova
    Pavlov Institute of Physiology, Russian Academy of Sciences
Journal of Vision September 2019, Vol. 19, 210.
      © ARVO (1962-2015); The Authors (2016-present)

Understanding the principles of information coding and transmission in the brain is a fundamental goal of neuroscience. This study introduces a novel approach for exploring the functions of neurons in higher-level areas of the visual system. The approach visualizes the information encoded by neurons using deep-learning visualization techniques. First, a deep neural network is trained on experimental data (Sato et al., 2013). The model mimics the behavior of neurons by predicting their firing rate in response to an arbitrary image. We show that, given recordings of neural activity in the IT cortex, the model can reach a correlation coefficient of 0.8 for specific cortical columns with a basic fine-tuned ConvNet architecture. Performance can be further improved by minor changes to the fully-connected layers. The second stage involves visualizing the trained model. The properties of its neurons are studied using Generative Adversarial Networks (GANs): the GAN aims to produce an image that strongly activates a selected neuron. Here we use the image-generation technique introduced by Nguyen et al. (2017), which, in contrast to other visualization approaches, imposes a constraint favoring natural-looking results. This additional regularizer helps avoid adversarial images (Szegedy et al., 2014). Qualitative evaluation suggests the proposed method captures features seen in the experimental data (Fig. 1). Moreover, the space of generated images is not limited to the experimental dataset, which helps reduce biases in judgments about a neuron's function caused by the small number of presented stimuli. The latter is particularly valuable for experiments with strictly limited recording time. Thus, the approach can be a useful addition to existing practices in visual neuroscience.
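The two-stage pipeline above can be sketched in miniature. This is a hedged toy illustration, not the study's actual implementation: the encoding model here is a linear fit on synthetic data (the study trains a ConvNet on IT-cortex recordings), and a simple L2 penalty stands in for the GAN-based natural-image prior of Nguyen et al. (2017). All names (`true_filter`, `model_response`, `visualize`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: fit an encoding model on (stimulus, firing-rate) pairs. ---
# Toy stand-in: a linear model on synthetic data keeps the sketch
# self-contained; the study uses a fine-tuned ConvNet on real recordings.
n_stimuli, n_pix = 500, 64
true_filter = rng.standard_normal(n_pix)            # hidden tuning of the neuron
stimuli = rng.standard_normal((n_stimuli, n_pix))   # "images" shown to the neuron
rates = stimuli @ true_filter + 0.1 * rng.standard_normal(n_stimuli)

w, *_ = np.linalg.lstsq(stimuli, rates, rcond=None)  # fitted model neuron

def model_response(img):
    """Predicted firing rate of the model neuron for an image."""
    return float(w @ img)

# Evaluate the fit the way the abstract does: correlation between
# predicted and recorded firing rates.
corr = np.corrcoef(stimuli @ w, rates)[0, 1]

# --- Stage 2: activation maximization on the fitted model. ---
# Gradient ascent on the image to drive the model neuron's response up.
# The L2 penalty is a crude placeholder for the GAN prior that, in the
# actual method, keeps results natural-looking and avoids adversarial images.
def visualize(steps=200, lr=0.1, reg=0.01):
    img = np.zeros(n_pix)
    for _ in range(steps):
        grad = w - 2.0 * reg * img      # gradient of response - reg * ||img||^2
        img += lr * grad                # ascent step
    return img

best_img = visualize()                  # image that strongly drives the model neuron
```

In the full method, Stage 2 optimizes the latent code of a trained generator rather than raw pixels, so every candidate image lies on the generator's manifold of natural images.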

