September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Multivariate pattern analysis of MEG and EEG reveals the dynamics of human object processing
Author Affiliations
  • Dimitrios Pantazis
    McGovern Institute for Brain Research, Massachusetts Institute of Technology
  • Radoslaw Cichy
    Department of Education and Psychology, Free University Berlin
Journal of Vision August 2017, Vol.17, 479. doi:https://doi.org/10.1167/17.10.479

Citation: Dimitrios Pantazis, Radoslaw Cichy; Multivariate pattern analysis of MEG and EEG reveals the dynamics of human object processing. Journal of Vision 2017;17(10):479. https://doi.org/10.1167/17.10.479.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Despite the increasing popularity of multivariate pattern classification methods for electrophysiological data, little is known about the decoding performance of MEG vs. EEG data. The two modalities measure electromagnetic signals from the same underlying neural sources, yet they have systematic differences in sampling neural activity. Here, we investigated the extent to which such measurement differences consistently bias the information coded in MEG and EEG signals in human visual object recognition. We conducted a concurrent MEG/EEG study while participants (N=15) viewed images of 92 everyday objects and compared MEG/EEG multivariate results in both time and space. Comparison in time relied on evaluating classification time courses both directly and via representational similarity analysis (RSA). Comparison in space relied on fusion of MEG/EEG data with fMRI data based on RSA. This enabled direct localization of MEG/EEG signals with independent fMRI data, bypassing the inherent ambiguities of inverse solutions. Single-image classification revealed increased MEG sensitivity to early components (peak at 112 ms, 95% CI: 109-124 ms) versus increased EEG sensitivity to late components (peak at 181 ms, 95% CI: 131-195 ms). Despite this bias, categorical information (animate vs. inanimate; faces vs. bodies; and others) was mostly equivalent between the two modalities. Fusion with fMRI also revealed comparable spatiotemporal dynamics for MEG and EEG. However, investigation of V1 and IT revealed unexpected results: while the two modalities matched fMRI data equally well in V1, MEG was more similar than EEG to fMRI data in IT, despite the increased sensitivity of EEG to late components. Overall, we found that EEG and MEG were sensitive to partly common and partly unique aspects of visual representations. Together, our results offer a novel comparison of MEG and EEG signals in representational space and motivate the wider adoption of multivariate analysis methods in both MEG and EEG.

Meeting abstract presented at VSS 2017
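
The pipeline described in the abstract combines time-resolved pairwise classification of sensor patterns with RSA-based fusion of MEG/EEG and fMRI data. The sketch below illustrates the general logic on simulated data; the condition counts, array shapes, simulated fMRI RDM, and the scikit-learn/SciPy calls are illustrative assumptions, not the authors' actual code or parameters.

```python
# Minimal sketch of time-resolved pairwise decoding and RSA-based MEG-fMRI
# fusion on simulated data. All sizes and the random fMRI RDM are assumptions
# for illustration only.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import squareform
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_conditions = 8   # e.g. a small subset of the 92 object images
n_trials = 20      # trials per condition
n_sensors = 32     # MEG or EEG channels
n_times = 30       # time points after stimulus onset

# Simulated sensor data: (condition, trial, sensor, time)
data = rng.standard_normal((n_conditions, n_trials, n_sensors, n_times))

def pairwise_decoding(data):
    """Cross-validated pairwise classification accuracy per time point.

    For every pair of conditions and every time point, a linear SVM is
    trained and tested on the sensor patterns; accuracies fill a
    condition-by-condition representational dissimilarity matrix (RDM)."""
    n_cond, n_trl, _, n_t = data.shape
    rdm = np.zeros((n_t, n_cond, n_cond))
    labels = np.r_[np.zeros(n_trl), np.ones(n_trl)]
    for t in range(n_t):
        for i in range(n_cond):
            for j in range(i + 1, n_cond):
                X = np.vstack([data[i, :, :, t], data[j, :, :, t]])
                acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
                rdm[t, i, j] = rdm[t, j, i] = acc
    return rdm

meg_rdm = pairwise_decoding(data)  # time-resolved MEG (or EEG) RDMs

# Decoding time course: mean pairwise accuracy at each time point.
decoding_course = np.array([squareform(r, checks=False).mean() for r in meg_rdm])
print("peak decoding at time index", decoding_course.argmax())

# RSA fusion: correlate each time-resolved RDM with a region-specific fMRI RDM
# (random here; in the study it would come from voxel patterns in, e.g., V1 or IT).
fmri_rdm = squareform(rng.random(n_conditions * (n_conditions - 1) // 2))
fusion = np.array([
    spearmanr(squareform(r, checks=False), squareform(fmri_rdm, checks=False))[0]
    for r in meg_rdm
])
print("MEG-fMRI fusion peaks at time index", fusion.argmax())
```

In the study itself, one RDM per time point would be built from the measured MEG or EEG patterns for all 92 images, and the fMRI RDMs would be derived from voxel patterns in regions such as V1 and IT, so that the fusion time course localizes when each region's representational structure emerges in the electrophysiological signal.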
