Abstract
Visual cortex must calibrate the receptive fields (RFs) of billions of neurons arranged in a hierarchy of maps. Modeling this process is daunting, but a promising direction is minimum description length (MDL) theory. In MDL, the cortex builds a theory of itself by trading off the bits needed to represent receptive fields against the bits needed to represent the residual left after fitting the input. Although MDL has an attractive Bayesian formulation, algorithms that implement it may rest on demanding assumptions [1] or exhibit delicate convergence behavior [2]. We show that a new algorithm based on projection pursuit (PP) converges quickly and admits an implementation in a feedback circuit that models intercortical connections between maps. In our PP formulation, the neurons whose RFs are most similar to the input are selected to represent it. Feedback signals from these neurons then subtract their portion of the signal from the input, leaving a residual, and the process repeats, selecting the neurons most similar to the residual. The algorithm produces good representations of the input with only four sets of projections. To learn, the active neurons' RFs are incrementally adjusted with a Hebb rule, and the process repeats until convergence. We demonstrate the algorithm with two sets of simulations. In the first, the input is stereo pairs of whitened natural images, such as would be represented in the LGN. From this input, the projection pursuit algorithm produces receptive fields for disparity-selective cells, simple cells, and color-selective cells that closely model those observed experimentally. In the second set, we model a two-level hierarchy in the cortical motion pathway using artificial direction-selective input. The algorithm learns the pattern-motion-selective receptive fields observed in area MT, as well as receptive fields for large-field motion stimuli such as those found in area MST.
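The select-subtract-repeat loop and the Hebbian update described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input and population sizes, the learning rate, and the greedy one-unit-per-pass selection are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): 64-dimensional inputs, 32 units.
n_inputs, n_units, n_passes = 64, 32, 4

# Receptive fields stored as unit-norm rows of W.
W = rng.standard_normal((n_units, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def encode(x, W, n_passes=4):
    """Greedy projection-pursuit encoding: on each pass, select the unit whose
    RF best matches the residual, let feedback subtract its portion, repeat."""
    residual = x.copy()
    chosen, coeffs = [], []
    for _ in range(n_passes):
        proj = W @ residual
        k = int(np.argmax(np.abs(proj)))   # unit most similar to the residual
        chosen.append(k)
        coeffs.append(proj[k])
        residual -= proj[k] * W[k]         # feedback removes its contribution
    return chosen, coeffs, residual

def hebb_step(W, x, lr=0.05, n_passes=4):
    """One learning step: nudge each selected RF toward the residual it was
    chosen to explain (activity times input, i.e. a Hebb rule), renormalize."""
    residual = x.copy()
    for _ in range(n_passes):
        proj = W @ residual
        k = int(np.argmax(np.abs(proj)))
        W[k] += lr * proj[k] * residual    # Hebbian update for the active unit
        W[k] /= np.linalg.norm(W[k])
        residual -= (W[k] @ residual) * W[k]
    return W

x = rng.standard_normal(n_inputs)
idx, c, r = encode(x, W)
# Each pass removes the residual's component along the chosen RF, so the
# residual norm shrinks monotonically; four passes already give a good fit.
```

Because each pass subtracts an orthogonal projection, the residual norm can only decrease, which is the source of the fast convergence claimed for the representation step.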
1. B. A. Olshausen and D. J. Field, Nature, 381:607-609, 1996.
2. Z. Zhang, Neurocomputing, 44:715-720, 2001.