Abstract
A major goal of visual neuroscience is to understand how the collective activity of neural populations represents the visual scene. In the retina, recent developments have made it possible to study directly the electrical activity of hundreds of retinal ganglion cells completely covering a region of the visual field, and in some cases, to sample every neuron of specific cell types. This raises the challenge of developing computational models that can explain the light-driven responses of all cells in a local population, and the interactions between them. Using large-scale multi-electrode recordings from isolated primate retina, we previously showed that a generalized linear model (GLM) captured the responses of complete populations of ON and OFF parasol ganglion cells, two major visual pathways in the primate, including the interactions between all pairs of cells in these populations. However, this modeling framework was tested using only white noise stimuli. Recently, we have begun testing the GLM approach with naturalistic stimuli, including photographs of natural scenes with simulated fixational eye movements. The GLM substantially fails to capture population responses in these conditions. Preliminary analysis suggests that the failures of the model cannot be explained by nonlinearities in photoreceptors, and thus are probably attributable to fundamental spatial nonlinearities in the retinal circuitry. These results are broadly consistent with findings in other species and conditions, and suggest the need for advances in our models of population signals in the early visual system.
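The coupled-GLM structure described above (a linear stimulus filter, post-spike and pairwise coupling filters, an exponential nonlinearity, and conditionally Poisson spiking) can be sketched as follows. This is a minimal illustrative simulation with hypothetical filter shapes and parameters, not the fitted model from the recordings; the filter lengths, coupling strengths, and baseline rate are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000       # number of time bins (hypothetical)
n_cells = 2    # toy two-cell population
L = 20         # filter length in bins (hypothetical)

stimulus = rng.standard_normal(T)  # white-noise-like scalar stimulus

# Hypothetical filters: stimulus filters k, a suppressive post-spike
# (refractory) filter h, and a weak excitatory coupling filter c.
k = 0.1 * rng.standard_normal((n_cells, L))   # stimulus filters
h = -np.exp(-np.arange(L) / 5.0)              # post-spike filter (self)
c = 0.05 * np.exp(-np.arange(L) / 5.0)        # coupling filter (other cells)
b = -2.0                                      # baseline log firing rate

spikes = np.zeros((n_cells, T))
for t in range(T):
    # Most recent stimulus samples, newest first, to match filter ordering.
    past_stim = stimulus[max(0, t - L):t][::-1]
    past_spk = spikes[:, max(0, t - L):t][:, ::-1]
    for i in range(n_cells):
        drive = b + np.dot(k[i, :len(past_stim)], past_stim)
        for j in range(n_cells):
            filt = h if j == i else c
            drive += np.dot(filt[:past_spk.shape[1]], past_spk[j])
        rate = np.exp(drive)                  # exponential nonlinearity
        spikes[i, t] = rng.poisson(rate)      # conditionally Poisson count

print(spikes.sum(axis=1))  # total spike count per cell
```

In the full model, each cell's filters are fit to recorded responses by maximizing the point-process likelihood; the sketch above only runs the generative (simulation) direction.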