September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Decoding the representational dynamics of object recognition with MEG, behavior, and computational models
Author Affiliations
  • Brett Bankson
    Section on Learning and Plasticity, Laboratory of Brain and Cognition, National Institute of Mental Health
  • Martin Hebart
    Section on Learning and Plasticity, Laboratory of Brain and Cognition, National Institute of Mental Health
  • Chris Baker
    Section on Learning and Plasticity, Laboratory of Brain and Cognition, National Institute of Mental Health
Journal of Vision August 2017, Vol.17, 284. doi:10.1167/17.10.284
Abstract

Previous studies using electrophysiological recordings have identified the time course of category representation during the first several hundred milliseconds of object recognition, but less is known about the perceptual and semantic features reflected by this information (Cichy et al., 2016; Clarke et al., 2012). Here we apply machine learning methods and representational similarity analysis (RSA) to MEG recordings in order to elucidate the temporal evolution of representations of concrete visual objects. During MEG recording, 32 participants were repeatedly presented with object stimuli while completing a visual oddball task. Half of the participants viewed one set of 84 object exemplars, while the other half viewed different exemplars of the same concepts. The 84 object concepts were selected based on lexical frequency. We used a support vector classifier to compute pairwise decoding accuracies between all object items at each time point, and these accuracies served as dissimilarity matrices for subsequent analyses. These analyses also incorporated complementary behavioral data from an object arrangement task, as well as model predictions from a semantic model and a convolutional neural network (CNN). MEG analyses showed robust pairwise decoding of object images, peaking around 100 ms post-stimulus onset. Before 150 ms, the MEG data contained information similar to the early layers of the CNN, suggesting that early discriminability in patterns of neural activity is driven by visual information. From 200-450 ms, the MEG data showed persistent similarity across visual exemplars of the same concept, along with high correlations with the behavioral data, mid-level CNN layers, and the semantic model. Together, these results suggest the emergence of an abstract, behaviorally relevant representation of concrete object concepts peaking between 250 and 300 ms.

Meeting abstract presented at VSS 2017
