Abstract
Cortical stimulation in high-level visual areas induces complex perturbations in visual perception. Understanding the nature of stimulation-induced visual percepts is necessary for characterizing visual hallucinations in psychiatric disease and for developing visual prosthetics. While most of the evidence comes from anecdotal observations in human patients, systematic study of the topic has been severely limited in nonhuman primates by the absence of language. We developed a new method, perceptography, to take pictures of the complex visual percepts induced by optogenetic stimulation of the inferior temporal (IT) cortex in macaque monkeys. Each trial started with the animal fixating on a computer-generated image for 1 second. Halfway through the image presentation, we briefly altered the image features. In a randomly selected half of the trials, a ~1x1 mm area of IT cortex was optogenetically stimulated via an implanted LED array for the same duration as the image alteration. The animals were rewarded for successfully detecting stimulation trials by looking at one of two subsequently presented targets. We hypothesized that false alarms (FAs) are more likely when an image alteration shares features with the percept induced by cortical stimulation. In a functional closed loop with the animal, Ahab, our feature-extraction deep network, guided DaVinci, a generative adversarial network, to produce image alterations that reduced the discriminability between stimulated and nonstimulated trials, thus increasing the chance of FAs. While the baseline FA rate for nearly all image alterations remained at 3-7%, Ahab-optimized images evolved to induce 55-85% FAs (cross-validated, p<0.01). We call these images perceptograms, as the state of seeing them is hard for the animal to discriminate from the state of being cortically stimulated.
We also show that stronger cortical illumination leads to more pronounced alterations in the resulting perceptograms.
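The closed-loop logic described above can be sketched in miniature. The toy below is not the authors' code: the feature model, the search method, and all names are stand-ins (the real system used deep networks, a GAN, and the animal's behavioral reports rather than a known feature vector). It only illustrates the idea that image alterations are iteratively optimized to reduce the discriminability between stimulated and nonstimulated trials, which in the experiment raises the false-alarm rate.

```python
# Toy sketch of the perceptography closed loop (hypothetical, not the authors' code).
# Assumption: stimulation shifts the animal's percept by an unknown feature vector;
# an alteration becomes confusable with stimulation as its feature shift approaches it.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
stim_effect = rng.normal(size=DIM)  # unknown perceptual shift caused by stimulation

def discriminability(alteration):
    """Proxy for how easily a trial with this image alteration is told apart
    from a stimulated trial: distance between feature shifts."""
    return float(np.linalg.norm(alteration - stim_effect))

def optimize_alteration(steps=2000, sigma=0.1):
    """Simple (1+1) hill-climb standing in for the GAN-guided image search."""
    best = np.zeros(DIM)
    best_score = discriminability(best)
    for _ in range(steps):
        candidate = best + sigma * rng.normal(size=DIM)
        score = discriminability(candidate)
        if score < best_score:  # lower discriminability -> more false alarms
            best, best_score = candidate, score
    return best, best_score

baseline = discriminability(np.zeros(DIM))
alteration, final_score = optimize_alteration()
print(final_score < baseline)  # the optimized alteration is less discriminable
```

In the actual experiment, the discriminability signal comes from the animal's trial-by-trial reports rather than a known target vector, and the search operates in the latent space of a generative network instead of directly on a feature vector.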