September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Zero-shot neural decoding from rhesus macaque inferior temporal cortex using deep convolutional neural networks
Author Affiliations & Notes
  • Thomas P O’Connell
    Department of Psychology, Yale University
    Center for Brains, Minds, and Machines, MIT
  • Marvin M Chun
    Department of Psychology, Yale University
    Department of Neuroscience, Yale University
  • Gabriel Kreiman
    Center for Brains, Minds, and Machines, MIT
    Children’s Hospital, Harvard Medical School
Journal of Vision September 2019, Vol.19, 209a. doi:https://doi.org/10.1167/19.10.209a
Abstract

Deep convolutional neural networks (DCNNs) constitute a promising initial approximation to the cascade of computations along the ventral visual stream that supports visual recognition. DCNNs predict object-evoked neural activity in inferior temporal (IT) cortex (Yamins et al., 2014), and the mapping between neural and DCNN activity generalizes across object categories (Horikawa and Kamitani, 2017; Yamins et al., 2014). Generalization is critical for building models of the ventral visual stream that capture the generic neural code for shape and make accurate predictions about novel categories beyond the training sample (zero-shot decoding). However, the degree to which mappings between DCNN and IT activity generalize across object categories has not been explicitly tested. To address this, we built zero-shot neural decoders for object category from multi-electrode array recordings in rhesus macaque IT, obtained while the animals viewed images of rendered objects on arbitrary natural scene backgrounds (Majaj et al., 2015). Our zero-shot decoders predicted novel categories despite never being trained on neural activity from those categories. DCNN activity was computed for each image using VGG-16 (Simonyan and Zisserman, 2015), and DCNN activity was reconstructed from IT activity using linear regression. Linear classifiers were trained to predict object category from DCNN activity and were then used to predict object category from DCNN activity reconstructed from IT responses. We held out neural activity from two test categories when learning the IT-to-DCNN mappings and found robust zero-shot decoding accuracies, indicating that the mappings generalize across categories. Intriguingly, training on a single category alone was sufficient to permit zero-shot decoding of novel categories.
We show that the relationship between IT and DCNN activity is stable across object categories, demonstrating the feasibility of zero-shot neural decoding systems based on electrophysiological recordings.
