August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
CNNs trained on places and animacy explain different patterns of variance for the same dataset.
Author Affiliations
  • H. Steven Scholte
    Brain & Cognition, Department of Psychology, University of Amsterdam
  • Max Losch
    Brain & Cognition, Department of Psychology, University of Amsterdam
  • Noor Seijdel
    Brain & Cognition, Department of Psychology, University of Amsterdam
  • Kandan Ramakrishnan
    Institute for Informatics, University of Amsterdam
  • Cees Snoek
    Institute for Informatics, University of Amsterdam
Journal of Vision September 2016, Vol.16, 758. doi:https://doi.org/10.1167/16.12.758
H. Steven Scholte, Max Losch, Noor Seijdel, Kandan Ramakrishnan, Cees Snoek; CNNs trained on places and animacy explain different patterns of variance for the same dataset. Journal of Vision 2016;16(12):758. https://doi.org/10.1167/16.12.758.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

With the rise of convolutional neural networks (CNNs), computer vision models of object recognition have improved dramatically in recent years. Most recent progress in computer vision has been spurred by increasing the number of layers within CNN models (so-called 'very deep' learning models). Just like the ventral cortex in the human brain, CNNs show an increase in receptive field size and an increase in neuronal tuning when moving up the neural or computational hierarchy (DiCarlo et al., 2012). However, from neuroscience we know that the brain processes information not only hierarchically but also in parallel (Kravitz et al., 2013). In the current study, we trained a CNN with an AlexNet-type architecture (5 convolutional layers, 2 fully connected layers, 1 softmax layer) on two different image sets (animacy or places). Additionally, we measured human brain responses, using BOLD-MRI, to 120 images (not used for training the CNNs) depicting places and animate and inanimate objects. We then calculated summary statistics per image, per layer of the CNN, and evaluated to what degree these statistics explain the between-image variance. Using the same images, we observe distinctly different patterns of explained variance for the animacy-trained network versus the places-trained network. The animacy-trained network explains variance in the middle and inferior temporal gyrus using information from the top two convolutional layers. The summary statistics from the places-trained network explain variance in a range of visual areas using the second fully connected layer and, surprisingly, in the parahippocampal complex using the softmax layer. These results suggest, in congruence with our current understanding of the functional architecture of the brain, that the brain consists of multiple CNNs, but they also demonstrate that the mapping between CNNs and the brain is complex.
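The layer-wise analysis described above (per-image summary statistics from a CNN layer, regressed against per-image BOLD responses to estimate explained between-image variance) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the choice of summary statistics (here, the mean and spread of a layer's activations) and the simulated data stand in for the actual CNN activations and fMRI measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 120 test images, as in the study.
n_images = 120

# Per-image summary statistics from one CNN layer (e.g. the mean and
# spread of the activation values) -- an illustrative assumption.
layer_stats = rng.normal(size=(n_images, 2))

# Simulated per-image BOLD response for one brain region, partly
# driven by the layer statistics plus measurement noise.
true_weights = np.array([1.5, -0.8])
bold = layer_stats @ true_weights + 0.5 * rng.normal(size=n_images)

# Ordinary least squares: how much between-image variance in the
# BOLD response do this layer's summary statistics explain?
X = np.column_stack([layer_stats, np.ones(n_images)])  # add intercept
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
pred = X @ beta
r_squared = 1 - np.sum((bold - pred) ** 2) / np.sum((bold - bold.mean()) ** 2)
print(f"explained between-image variance (R^2): {r_squared:.2f}")
```

Repeating this regression per layer and per brain region yields the layer-by-region pattern of explained variance that the abstract compares between the animacy-trained and places-trained networks.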

Meeting abstract presented at VSS 2016
