Abstract
The idea that the brain contains a generative model of reality is highly attractive: it explains both how a perceptual system can converge on the correct interpretation of a scene through an iterative generate-and-compare process, and how it can learn to represent the world in a self-supervised way. Moreover, if consciousness corresponds directly to top-down generated contents, this would elegantly explain the mystery of why our perception of ambiguous images is always consistent across all levels. However, experimental evidence for the existence of a generative model in the visual system remains lacking. I will discuss efforts by my lab to fill this gap through experiments in the macaque face patch system, a set of regions in inferotemporal cortex dedicated to processing faces. This system is strongly connected in both feedforward and feedback directions, providing an ideal testbed for probing the existence of a generative model. Our experiments leverage simultaneous recordings from multiple face patches with high channel-count Neuropixels probes to address representation in three realms: (1) ambiguous images, (2) noisy or degraded images, and (3) internally generated images evoked by electrical stimulation, drug-induced hallucinations, and dreams. In each case, we ask: are the content and dynamics of representation across the face patch network consistent with a generative model of reality?
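The "iterative generate-and-compare" process mentioned above is, in spirit, analysis-by-synthesis. As a minimal sketch (not part of the abstract's experiments; the linear generator `G`, the function names, and all parameters are hypothetical illustrations), the following Python shows how a latent interpretation can converge on an observed image by repeatedly generating a top-down prediction and descending the resulting prediction error:

```python
# Minimal generate-and-compare sketch (hypothetical; not the lab's method):
# a latent estimate z is refined by gradient descent on the mismatch between
# a top-down generated image g(z) and the bottom-up input x.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 8))  # hypothetical generative weights: 8 latents -> 64 "pixels"

def generate(z):
    """Top-down pass: synthesize an image from the latent description z."""
    return G @ z

def infer(x, steps=200, lr=0.01):
    """Iteratively adjust z so the generated image matches the observed x."""
    z = np.zeros(G.shape[1])
    for _ in range(steps):
        error = x - generate(z)  # compare: prediction error
        z += lr * (G.T @ error)  # gradient step on 0.5 * ||x - G z||^2
    return z

z_true = rng.standard_normal(8)
x_obs = generate(z_true) + 0.05 * rng.standard_normal(64)  # noisy/degraded observation
z_hat = infer(x_obs)
print("reconstruction error:", np.linalg.norm(x_obs - generate(z_hat)))
```

Because the observation here is deliberately corrupted with noise, the same loop also illustrates, at a toy level, inference from the noisy or degraded images in realm (2) above.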