Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
Assessing Reproducibility of MEG and fMRI Data Fusion Method in Neural Dynamics of Object Vision
Author Affiliations & Notes
  • Benjamin Lahner
    Computer Science and Artificial Intelligence Lab., MIT
    Boston University
  • Yalda Mohsenzadeh
    Computer Science and Artificial Intelligence Lab., MIT
  • Caitlin Mullin
    Computer Science and Artificial Intelligence Lab., MIT
  • Radoslaw Cichy
    Free University Berlin
  • Aude Oliva
    Computer Science and Artificial Intelligence Lab., MIT
Journal of Vision September 2019, Vol. 19, 113. doi: https://doi.org/10.1167/19.10.113
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Visual object recognition arises from a series of spatiotemporal activity patterns within the ventral and dorsal streams. To reveal these neural dynamics in the human brain, Cichy et al. (2014, 2016) integrated measurements of functional magnetic resonance imaging (fMRI) with magnetoencephalography (MEG). Here, we assess the power of this fMRI-MEG fusion method to produce replicable results for visual recognition. First, we evaluated the reliability of the fusion method at capturing the spatiotemporal dynamics of representations by assessing the neural agreement of visually similar experiences within individuals. To this end, we collected fMRI and MEG data while participants (N=15) viewed 156 natural images and performed an orthogonal vigilance task. The images were arranged in twin sets (two sets of 78 images each), with paired images sharing highly similar verbal semantic descriptions and no significant difference in low-level image statistics between the sets. The fusion method revealed highly consistent spatiotemporal dynamics for the twin sets, with neural representations starting in the occipital pole (~70–90 ms after stimulus onset) and proceeding anteriorly along the ventral stream and up to the inferior parietal cortex in the dorsal pathway. Second, we tested the generalizability of the fusion method by replicating Cichy et al. (2016) and comparing the resulting spatiotemporal dynamics with those of the twin sets. Despite variations in stimulus set and participant group, we again found highly overlapping spatiotemporal patterns starting in early visual cortex (~70–80 ms) and extending to higher perceptual regions around 110–130 ms, with no significant difference between the two experimental settings. In sum, these results demonstrate the reliability and generalizability of the fMRI-MEG fusion method and show that it is an appropriate analytical tool for non-invasively evaluating the spatiotemporal mechanisms of perception.
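The fusion method referenced above rests on representational similarity analysis (RSA): a representational dissimilarity matrix (RDM) is computed from the MEG sensor pattern at each time point and correlated with an RDM computed from an fMRI region, linking "when" to "where". The following is a minimal, hypothetical sketch of that idea using random arrays with made-up dimensions in place of real recordings; variable names and sizes are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_sensors, n_times, n_voxels = 78, 32, 50, 100  # placeholder sizes

# Random stand-ins for real recordings (hypothetical data):
meg = rng.standard_normal((n_images, n_sensors, n_times))  # image x sensor x time
fmri = rng.standard_normal((n_images, n_voxels))           # image x voxel, one region

# One condensed RDM (pairwise 1 - correlation between image patterns) per MEG time point
meg_rdms = np.array([pdist(meg[:, :, t], metric="correlation") for t in range(n_times)])

# A single RDM for the fMRI region
fmri_rdm = pdist(fmri, metric="correlation")

# Fusion: Spearman correlation between the fMRI RDM and each time-resolved MEG RDM,
# yielding a time course of MEG-fMRI representational similarity for that region
fusion = np.array([spearmanr(meg_rdms[t], fmri_rdm)[0] for t in range(n_times)])
```

In the published analyses this comparison is repeated across fMRI searchlights or regions of interest, producing a spatiotemporal map such as the ventral-stream progression described in the abstract.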

Acknowledgement: This research was funded by NSF grant number 1532591 in Neural and Cognitive Systems, the Vannevar Bush Faculty Fellowship program funded by ONR grant number N00014-16-1-3116, and DFG Emmy Noether Grant CI 241/1-1. The experiments were conducted at the Athinoula A. Martinos Imaging Center at the McGovern Institute for Brain Research, Massachusetts Institute of Technology.