Abstract
Probabilistic inference lies at the heart of many crucial brain processes, such as primary visual processing, attentional modulation, multi-sensory integration, reference frame transformations, and decision making. It is possible that inference is implemented by marginalization across variables through explicit divisive normalization. However, direct evidence for such processes in the brain is sparse and, furthermore, for all but the simplest distributions explicit marginalization requires intractable normalization operations. Here, we argue that explicit divisive normalization is not the only way marginalization can be performed, and we propose an alternative, more physiologically realistic mechanism. This alternative mechanism (implicit approximate normalization: IAN) is based on well-established parallel computing and machine learning principles and is functionally equivalent to divisive normalization without requiring intractable sums/integrals. Specifically, we implemented multi-layer feed-forward neural networks and trained them to carry out several tasks using a pseudo-Newton method with preconditioned conjugate gradient descent. In this way, we explicitly modelled near-optimal multi-sensory integration, reference frame transformations, and both in combination, using different neural coding schemes within the same network, i.e. probabilistic spatial codes and probabilistic joint codes. We also implemented comparable spiking networks with realistic synaptic dynamics, demonstrating the feasibility of IAN at the spiking neuron level. Our networks produce a wide range of behaviours similar to those observed in real neurons in the brain, including inverse effectiveness, the spatial correspondence principle, super-additivity, gain-like modulations and multi-sensory suppression. One advantage of IAN is that it works regardless of the coding scheme used in individual neurons, whereas divisive normalization requires explicitly matched population codes. In addition, IAN does not need the neatly organized and regular connectivity structure between contributing neurons that divisive normalization requires. Overall, our study demonstrates that marginalizing operations can be carried out in simple networks of purely additive neurons without explicit divisive normalization.
Meeting abstract presented at VSS 2012
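
The following is a minimal sketch, not the authors' implementation, of the kind of feed-forward network the abstract describes: two noisy input population codes (e.g. "visual" and "auditory") are mapped by purely additive hidden units to an integrated output code, with no explicit divisive-normalization stage. The Gaussian tuning curves, Poisson input noise, layer sizes, and plain gradient-descent training are all assumptions made here for illustration; the abstract reports a pseudo-Newton method with preconditioned conjugate gradient descent instead.

```python
# Illustrative sketch of implicit approximate normalization (IAN)-style training:
# a feed-forward network of purely additive units learns near-optimal cue
# integration from noisy population codes, without any divisive normalization.
# All names, sizes and the training rule are assumptions, not the original method.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 40, 20           # units per input population, hidden, output
prefs = np.linspace(-10, 10, n_in)        # preferred stimulus locations

def population_code(s, gain):
    """Noisy (Poisson) population response to stimulus location s."""
    rates = gain * np.exp(-0.5 * ((prefs - s) / 2.0) ** 2)
    return rng.poisson(rates).astype(float)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)      # purely additive units + static nonlinearity
    return h @ W2 + b2, h                 # no divisive normalization anywhere

# random initial weights
W1 = rng.normal(0, 0.1, (2 * n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_out));    b2 = np.zeros(n_out)
lr = 1e-3

for step in range(20000):
    s = rng.uniform(-8, 8)                              # true stimulus location
    x = np.concatenate([population_code(s, rng.uniform(2, 10)),   # "visual" cue
                        population_code(s, rng.uniform(2, 10))])  # "auditory" cue
    target = np.exp(-0.5 * ((prefs - s) / 2.0) ** 2)    # noise-free code for s
    y, h = forward(x, W1, b1, W2, b2)
    err = y - target                                    # squared-error gradient
    # backpropagate through the additive hidden layer (plain gradient descent,
    # standing in for the pseudo-Newton / conjugate gradient method of the study)
    dW2 = np.outer(h, err);  db2 = err
    dh = (err @ W2.T) * (h > 0)
    dW1 = np.outer(x, dh);   db1 = dh
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

# After training, the peak of the output population tracks the cue-combined
# location estimate, i.e. the network approximates the marginalization implicitly.
```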