Vision Sciences Society Annual Meeting Abstract | August 2012
An alternative to explicit divisive normalization models
Author Affiliations
  • Gunnar Blohm
    Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Canadian Action and Perception Network (CAPnet)
  • Timothy Lillicrap
    Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Canadian Action and Perception Network (CAPnet)
  • Dominic Standage
    Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Canadian Action and Perception Network (CAPnet)
Journal of Vision August 2012, Vol.12, 573. doi:https://doi.org/10.1167/12.9.573
Citation: Gunnar Blohm, Timothy Lillicrap, Dominic Standage; An alternative to explicit divisive normalization models. Journal of Vision 2012;12(9):573. https://doi.org/10.1167/12.9.573.

Abstract

Probabilistic inference lies at the heart of many crucial brain processes, such as primary visual processing, attentional modulation, multi-sensory integration, reference frame transformations, and decision making. It is possible that inference is implemented by marginalization across variables through explicit divisive normalization. However, direct evidence for such processes in the brain is sparse and, further, for all but the simplest distributions, explicit marginalization requires intractable normalization operations. Here, we argue that explicit divisive normalization is not the only way marginalization can be performed, and we propose an alternative, physiologically more realistic mechanism. This alternative mechanism (implicit approximate normalization: IAN) is based on well-established parallel computing and machine learning principles and is functionally equivalent to divisive normalization without requiring intractable sums/integrals. Specifically, we implemented multi-layer feed-forward neural networks and trained them to carry out several tasks using a pseudo-Newton method with preconditioned conjugate gradient descent. Doing so, we explicitly modelled near-optimal multi-sensory integration, reference frame transformations, and both in combination. We did so using different neural coding schemes within the same network, i.e., probabilistic spatial codes and probabilistic joint codes. We also implemented comparable spiking networks with realistic synaptic dynamics, demonstrating the feasibility of IAN at the level of spiking neurons. Our networks produce a wide range of behaviours similar to observations of real neurons in the brain, including inverse effectiveness, the spatial correspondence principle, super-additivity, gain-like modulations and multi-sensory suppression. One advantage of IAN is that it works regardless of the coding scheme used in individual neurons, whereas divisive normalization requires explicitly matching population codes. In addition, IAN does not need the neatly organized and regular connectivity structure between contributing neurons that divisive normalization requires. Overall, our study demonstrates that marginalizing operations can be carried out in simple networks of purely additive neurons without explicit divisive normalization.
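
To make the mechanism concrete, the following is a minimal, hypothetical sketch (Python/NumPy; not the authors' code) of the kind of task described above: two noisy population codes for the same stimulus are fed to a small feed-forward network of purely additive units, which is trained to output the normalized posterior over stimulus location. Because the target is already normalized, a successfully trained network must behave equivalently to divisive normalization without ever performing an explicit division. The tuning widths, gains, network size and the plain gradient-descent training (used here in place of the pseudo-Newton method with preconditioned conjugate gradients reported in the abstract) are all illustrative assumptions.

    # Illustrative sketch only: additive feed-forward network trained to
    # integrate two noisy population-coded cues into a normalized posterior,
    # i.e. to approximate marginalization without explicit division.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 20                               # neurons per input population
    prefs = np.linspace(-1.0, 1.0, N)    # preferred stimulus locations
    sigma_tc = 0.2                       # tuning-curve width (assumed)

    def encode(stim, gain):
        """Poisson population response to a stimulus location."""
        rates = gain * np.exp(-(stim - prefs) ** 2 / (2 * sigma_tc ** 2))
        return rng.poisson(rates).astype(float)

    def target_posterior(r1, r2):
        """Normalized posterior over preferred locations given both spike
        counts (Poisson likelihoods, up to gain-dependent constants); this is
        what explicit marginalization/normalization would compute."""
        log_f = -(prefs[:, None] - prefs[None, :]) ** 2 / (2 * sigma_tc ** 2)
        log_post = r1 @ log_f + r2 @ log_f
        log_post -= log_post.max()
        p = np.exp(log_post)
        return p / p.sum()

    # Training set: input spike counts -> normalized posterior.
    n_samples = 5000
    X = np.zeros((n_samples, 2 * N))
    Y = np.zeros((n_samples, N))
    for i in range(n_samples):
        s = rng.uniform(-0.8, 0.8)
        g1, g2 = rng.uniform(5, 25, size=2)   # varying cue reliabilities
        r1, r2 = encode(s, g1), encode(s, g2)
        X[i] = np.concatenate([r1, r2])
        Y[i] = target_posterior(r1, r2)

    # Two-layer network of purely additive units (tanh hidden layer);
    # no divisive interaction anywhere in the forward pass.
    H = 64
    W1 = 0.1 * rng.standard_normal((2 * N, H))
    b1 = np.zeros(H)
    W2 = 0.1 * rng.standard_normal((H, N))
    b2 = np.zeros(N)
    lr = 1e-3

    for epoch in range(200):
        h = np.tanh(X @ W1 + b1)
        out = h @ W2 + b2
        err = out - Y
        # Gradients of the mean squared error, full batch.
        dW2 = h.T @ err / n_samples
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)
        dW1 = X.T @ dh / n_samples
        db1 = dh.mean(axis=0)
        for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            p -= lr * g

    h = np.tanh(X @ W1 + b1)
    print("training MSE:", float(((h @ W2 + b2 - Y) ** 2).mean()))

Varying the input gains g1 and g2 in such a sketch is one way to probe for the multi-sensory behaviours listed above (e.g., inverse effectiveness or gain-like modulation), although the specific tasks, architectures and analyses of the study itself are not reproduced here.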

Meeting abstract presented at VSS 2012
