Katerina Malakhova; Generative adversarial networks can visualize information encoded by neurons. Journal of Vision 2019;19(10):210. doi: https://doi.org/10.1167/19.10.210.
Understanding the principles of information coding and transmission in the brain is a fundamental goal of neuroscience. This study introduces a novel approach to exploring the functions of neurons in higher-level areas of the visual system. The approach uses deep-learning visualization techniques to visualize the information encoded by neurons. First, a deep neural network is trained on experimental data (Sato et al., 2013). The model mimics the behavior of neurons by predicting their firing rates in response to an arbitrary image. We show that, given recordings of neural activity in the IT cortex, the model reaches a correlation coefficient of 0.8 for specific cortical columns with a basic fine-tuned ConvNet architecture; performance can be further improved by minor changes to the architecture of the fully-connected layers. The second stage is visualization of the trained model: the properties of its neurons are studied using Generative Adversarial Networks (GANs). The GAN aims to produce an image that causes strong activation in a selected neuron. Here we use the image-generation technique introduced by Nguyen et al. (2017), which, in contrast to other visualization approaches, imposes a constraint favoring natural-looking results. This additional regularizer allows adversarial images (Szegedy et al., 2014) to be avoided. Qualitative evaluation of the results suggests the proposed method captures features seen in the experimental data (Fig. 1). Moreover, the space of generated images is not limited to the experimental dataset, which helps to reduce biases in judgments about a neuron's function caused by the small number of presented stimuli. The latter is particularly valuable for experiments with strictly limited recording time. Thus, the approach can be a useful addition to existing practices in visual neuroscience.
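The core optimization described above, i.e. searching for an image that maximally activates a modelled neuron while a generator keeps the result natural-looking, can be sketched in a toy form. The sketch below is an assumption-laden illustration, not the authors' implementation: the trained ConvNet neuron is replaced by a linear surrogate `activation`, the GAN generator by a linear map `A`, and the naturalness constraint by an L2 penalty on the latent code, so the gradients can be written by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's components:
#   neuron model f(x) = w @ x      -- surrogate for the fine-tuned ConvNet unit
#   generator    G(z) = A @ z      -- surrogate for the GAN generator prior
d_img, d_latent = 64, 8
w = rng.normal(size=d_img)               # weights of the modelled IT neuron
A = rng.normal(size=(d_img, d_latent))   # generator: latent code -> image

def activation(z):
    """Predicted firing rate of the modelled neuron for image G(z)."""
    return w @ (A @ z)

# Gradient ascent in the generator's latent space, maximizing f(G(z));
# the L2 penalty on z plays the role of the naturalness regularizer.
z = np.zeros(d_latent)
lr, weight_decay = 0.01, 0.001
for _ in range(200):
    grad = A.T @ w - weight_decay * z    # d/dz [ f(G(z)) - (wd/2) * ||z||^2 ]
    z += lr * grad

preferred_image = A @ z  # the "visualization" of the neuron's preference
```

In the real method, both `f` and `G` are deep networks and the gradient is obtained by backpropagation through them; optimizing in the generator's latent space rather than in pixel space is what rules out adversarial, unnatural-looking solutions.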