Abstract
Humans categorize everyday objects with visual animal features (e.g., a cow mug) as objects, and so do deep neural networks. This is not the case in the brain, where representations in ventral occipitotemporal cortex (VTC) treat these objects not as objects but as animals (Bracci et al., 2019). Here, we investigate to what extent such effects might be due to training history. More specifically, we determined whether further training the pre-trained network model AlexNet (i.e., transfer learning) can increase its representational resemblance to VTC. We applied three transfer training regimes: animal <> lookalike <> object (3 categories), animal + lookalike <> object (2 categories), and animal <> object + lookalike (2 categories). After training, image features were extracted from the second-to-last fully connected layer (FC7). The results show that training the network with a clustered dataset (e.g., animal + lookalike) increases the representational similarity in FC7 between the clustered categories (e.g., animals and lookalikes) relative to the third category. Furthermore, after training with the animal + lookalike clustering, FC7 representations are more strongly correlated with VTC representations than after any other training regime. These findings support the hypothesis that the bias in human cortex to represent lookalikes as animals could be due to training history during human development.

Bracci, S., Ritchie, J. B., Kalfas, I., & Op de Beeck, H. P. (2019). The ventral visual pathway represents animal appearance over animacy, unlike human behavior and deep neural networks. The Journal of Neuroscience, 39(33), 6513–6525. https://doi.org/10.1523/JNEUROSCI.1714-18.2019
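To illustrate the pipeline summarized above, the following is a minimal sketch, assuming PyTorch and torchvision's ImageNet-pretrained AlexNet; the regime label mappings and the extract_fc7 helper are hypothetical illustrations, not the authors' actual training code.

    import torch
    import torch.nn as nn
    from torchvision import models

    def make_regime_model(num_classes: int) -> nn.Module:
        # Start from ImageNet-pretrained AlexNet and replace the final
        # classification layer to match the transfer regime (2 or 3 categories).
        model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        model.classifier[6] = nn.Linear(4096, num_classes)
        return model

    # Hypothetical label remappings for the three transfer training regimes.
    REGIMES = {
        "animal_lookalike_object":     {"animal": 0, "lookalike": 1, "object": 2},  # 3 categories
        "animal+lookalike_vs_object":  {"animal": 0, "lookalike": 0, "object": 1},  # 2 categories
        "animal_vs_object+lookalike":  {"animal": 0, "lookalike": 1, "object": 1},  # 2 categories
    }

    def extract_fc7(model: nn.Module, images: torch.Tensor) -> torch.Tensor:
        # Return activations of the second-to-last fully connected layer (FC7),
        # i.e. the output of classifier[4] and its ReLU (classifier[:6]),
        # stopping before the final classification layer.
        model.eval()
        with torch.no_grad():
            x = model.features(images)
            x = model.avgpool(x)
            x = torch.flatten(x, 1)
            x = model.classifier[:6](x)
        return x

From the extracted FC7 features, representational dissimilarity matrices can then be computed (e.g., 1 − Pearson correlation between image feature vectors) and correlated with the corresponding VTC matrices, as in standard representational similarity analysis.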