Kasper Vinken, Gabriel Kreiman; Adaptation in models of visual object recognition. Journal of Vision 2019;19(10):210a. doi: https://doi.org/10.1167/19.10.210a.
Convolutional neural network (CNN) models of the ventral stream provide an unprecedented opportunity to relate neural mechanisms to sensory representations and even perception. However, current CNNs lack the temporal dynamics of biological vision, such as adaptation to previous stimulation. Perceptually, adaptation has been widely studied in the form of aftereffects (Webster, 2015), while in single neurons adaptation is often equated with repetition suppression (Vogels, 2016). Although the two are often thought to be related, they have yet to be integrated into a truly general framework. One proposed mechanism underlying repetition suppression is a reduced excitability that depends on previous neural activity, called response fatigue. Here, we implemented fatigue in each unit of a CNN (Krizhevsky et al., 2012) and asked whether it could account for more complex phenomena of neural and visual adaptation. Specifically, we assigned a latent fatigue variable to each unit that increased after high activation and decreased after low activation. The activation of a unit was then obtained by subtracting its fatigue from its input activity (before the linear rectifier). The resulting CNN units showed repetition suppression matching neural adaptation on several hallmark properties: stimulus specificity, increased adaptation in higher layers, a degree of adaptation proportional to the number of repetitions (Vinken et al., 2017), and decreased adaptation with longer interstimulus intervals (Sawamura et al., 2006). Furthermore, the response patterns could account for the perceptual effects we tested: from afterimages in the first layer to a face gender aftereffect in later layers (Webster et al., 2004). Thus, when considered in a CNN, a simple mechanism of response fatigue operating at the level of single neurons can account for complex adaptation effects.
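The fatigue mechanism described above can be sketched for a single unit. The abstract does not specify the exact update rule, so the decay rate `alpha`, the gain `beta`, and the exponential form of the fatigue dynamics below are illustrative assumptions; only the subtraction of fatigue from the pre-rectifier input and the activity-dependent increase/decrease of fatigue come from the text.

```python
def fatigued_responses(drives, alpha=0.96, beta=0.7):
    """Sketch of response fatigue for one CNN unit.

    drives: sequence of pre-rectifier input activities over time steps.
    A latent fatigue variable rises after high activation and decays
    otherwise (leaky-integrator form assumed here for illustration).
    The unit's output is the rectified input minus its current fatigue.
    """
    fatigue = 0.0
    responses = []
    for drive in drives:
        # subtract fatigue from the input activity, then apply the linear rectifier
        r = max(drive - fatigue, 0.0)
        # fatigue integrates recent activity with leaky decay (assumed dynamics)
        fatigue = alpha * fatigue + beta * r
        responses.append(r)
    return responses

# Repeating an identical stimulus produces repetition suppression:
# the response declines monotonically across presentations.
reps = fatigued_responses([1.0] * 5)
```

With these toy parameters, successive presentations of the same stimulus yield steadily smaller responses, the single-unit signature of repetition suppression that the full CNN model reproduces layer by layer.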
In addition to providing a general model for adaptation, these results demonstrate the strength of using deep neural networks to connect low-level canonical neural properties or computations to high-level neural and perceptual phenomena.