Abstract
In this work we introduce DivNormEI, a novel bio-inspired convolutional network that performs divisive normalization, a canonical cortical computation, together with lateral inhibition and excitation, tailored for integration into modern Artificial Neural Networks (ANNs). DivNormEI extends prior computational models of divisive normalization in the primate primary visual cortex (Schwartz & Simoncelli, 2001; Robinson et al., 2007) and is implemented as a modular, fully differentiable neural network layer that can be integrated straightforwardly into most commonly used modern ANNs. DivNormEI normalizes incoming activations via learned non-linear within-feature shunting inhibition together with across-feature linear lateral inhibition and excitation. We show that integrating DivNormEI into a task-driven self-supervised encoder-decoder architecture encourages the emergence of the well-known contrast-invariant tuning property exhibited by simple cells in the primate primary visual cortex. In addition, integrating DivNormEI into an ANN (a VGG-9 network) trained to perform large-scale object recognition on static images from the ImageNet-100 dataset improves both sample efficiency and top-1 accuracy on a held-out validation set. We also discuss the ability of a larger hybrid ANN (a ResNet-50 with hierarchical placement of DivNormEI) to perform competitively on the more challenging task of semantic image segmentation. We believe our finding that the bio-inspired DivNormEI model simultaneously explains properties of primate V1 neurons and outperforms the competing baseline architecture on large-scale object recognition will promote further investigation of this crucial cortical computation in the context of modern machine learning tasks and ANNs.
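The abstract itself contains no equations or code. As a rough, hedged illustration only (not the paper's DivNormEI layer, which additionally learns within-feature shunting inhibition and across-feature lateral excitation/inhibition), the canonical divisive-normalization computation it builds on (Schwartz & Simoncelli, 2001) can be sketched as each unit's response divided by a semi-saturation constant plus pooled activity across feature channels; the function name, shapes, and parameter values below are illustrative assumptions:

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, p=2.0):
    """Illustrative sketch of canonical divisive normalization (not the
    paper's DivNormEI layer): each channel's activation is divided by a
    semi-saturation constant plus activity pooled across all channels.

    x: activations of shape (channels, height, width)
    sigma: semi-saturation constant
    p: pooling exponent (p = 2 corresponds to energy pooling)
    """
    # Pool rectified activity across the channel dimension at each location.
    pool = np.sum(np.abs(x) ** p, axis=0, keepdims=True)
    return x / (sigma ** p + pool)

# Toy check: scaling the input (raising "contrast") changes responses
# sub-linearly, the saturating behavior normalization is known for.
x = np.random.default_rng(0).standard_normal((4, 8, 8))
y_low = divisive_normalization(x)
y_high = divisive_normalization(2.0 * x)
```

In this sketch, doubling the input scales each output by strictly less than two wherever pooled activity is nonzero, a simple analogue of the contrast saturation that motivates normalization models of V1.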