Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
Mixing deep neural network features to explain brain representations
Author Affiliations
  • Seyed-Mahdi Khaligh-Razavi
    CSAIL, MIT, MA, USA
  • Linda Henriksson
    Department of Neuroscience and Biomedical Engineering, Aalto University, Aalto, Finland
  • Kendrick Kay
    Center for Magnetic Resonance Research, University of Minnesota, Twin Cities
  • Nikolaus Kriegeskorte
    MRC-CBU, University of Cambridge, UK
Journal of Vision September 2016, Vol. 16, 369. https://doi.org/10.1167/16.12.369
Abstract

Higher visual areas present a difficult explanatory challenge and can be better studied by considering the transformation of representations across the stages of the visual hierarchy, from lower- to higher-level visual areas. We investigated the progress of visual information through the hierarchy of visual cortex by comparing the representational geometry of several brain regions with a wide range of object-vision models, ranging from unsupervised to supervised and from shallow to deep. The shallow unsupervised models tended to correlate more strongly with early visual areas, whereas the deep supervised models correlated more strongly with higher visual areas. We also presented a new framework for assessing the pattern-similarity of models with brain areas, mixed representational similarity analysis (mixed RSA), which bridges the gap between RSA and voxel-receptive-field modelling, two approaches that have been used separately but not in combination in previous studies (Kriegeskorte et al., 2008a; Nili et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Kay et al., 2008, 2013). Using mixed RSA, we evaluated how well many models explained the representations in several brain areas. We show that the higher visual representations (i.e. lateral occipital region, inferior temporal cortex) were best explained by the higher layers of a deep convolutional network after appropriate mixing and weighting of its feature set. This shows that deep neural network features form the essential basis for explaining the representational geometry of higher visual areas.
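To make the mixed-RSA idea concrete, the sketch below fits a ridge-regularized linear mixture of DNN-layer features to voxel responses (the voxel-receptive-field-modelling step), then compares the representational dissimilarity matrix (RDM) of the predicted held-out patterns with the brain RDM (the RSA step). This is a minimal illustrative sketch, not the authors' implementation: the array names, the random placeholder data, the use of scikit-learn's RidgeCV, the half-split cross-validation, and the correlation-distance RDMs are all assumptions for demonstration.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr
    from sklearn.linear_model import RidgeCV

    # Hypothetical inputs (placeholder random data; in practice these would be
    # DNN-layer activations and fMRI response patterns for the same stimuli):
    #   model_features : (n_stimuli, n_features)  activations from one DNN layer
    #   voxel_data     : (n_stimuli, n_voxels)    voxel patterns for one brain area
    rng = np.random.default_rng(0)
    n_stimuli, n_features, n_voxels = 96, 512, 200
    model_features = rng.standard_normal((n_stimuli, n_features))
    voxel_data = rng.standard_normal((n_stimuli, n_voxels))

    # Split stimuli: fit the feature mixing on one half, evaluate on the other,
    # so the mixed model is tested on stimuli it was not fitted to.
    train = np.arange(n_stimuli) < n_stimuli // 2
    test = ~train

    # Step 1 (voxel-RF-style fitting): learn a linear mixture/weighting of the
    # model features that predicts each voxel's response, with ridge regularization.
    mapping = RidgeCV(alphas=np.logspace(-2, 4, 7))
    mapping.fit(model_features[train], voxel_data[train])

    # Step 2: predict the held-out voxel patterns from the mixed features.
    predicted = mapping.predict(model_features[test])

    # Step 3 (RSA): compare representational geometries by correlating the RDM
    # of the mixed-model predictions with the RDM of the measured brain patterns.
    rdm_model = pdist(predicted, metric='correlation')
    rdm_brain = pdist(voxel_data[test], metric='correlation')
    rho, _ = spearmanr(rdm_model, rdm_brain)
    print(f"mixed-RSA correlation (Spearman): {rho:.3f}")

With real data, a high correlation at this step would indicate that a weighted mixture of the layer's features reproduces the brain area's representational geometry, which is the sense in which the higher layers of the deep network best explained the higher visual areas.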

Meeting abstract presented at VSS 2016
