September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Matching corresponding visual areas with fMRI and MEG
Author Affiliations
  • Phoebe Asquith
    School of Psychology, Cardiff University
  • Simon Rushton
    School of Psychology, Cardiff University
  • Beth Routley
    School of Psychology, Cardiff University
  • Krish Singh
    School of Psychology, Cardiff University
Journal of Vision August 2017, Vol. 17, 1053.
© ARVO (1962–2015); The Authors (2016–present)
fMRI and MEG have complementary strengths: fMRI offers high spatial resolution, while MEG offers high temporal resolution. The challenge is combining information from the two technologies. We recorded BOLD responses while observers watched a 25-minute clip of the film Skyfall in a 3T MRI scanner. We used ICA to segment the visually responsive areas of the cortex into clusters of voxels with similar timecourses (and hence similar tuning for visual features), and extracted the timecourse of each cluster (ICA component). Observers then watched the same film clip in a MEG scanner. Using SAM (synthetic aperture magnetometry), we projected the sensor data into synthetic voxels in an anatomical coordinate frame to match the fMRI data. For each voxel and each frequency band, from delta to ultra-gamma, we calculated power (the magnitude of the Hilbert envelope) at a temporal resolution of 2 seconds to match the fMRI acquisition. We then cleaned the MEG data and convolved it with a BOLD haemodynamic response function. Using a standard GLM analysis, we searched for MEG voxels whose timecourses correlated with those of a selection of the fMRI ICA components. From the fMRI data we chose candidate components on the lateral and medial surfaces of the occipital lobe, and found activation in comparable anatomical locations in the MEG data. The use of extended (~20min) broadband visual stimuli (film clips) that impose a temporal structure, coupled with ICA, is a promising way to segment the visually responsive cortex and match areas across imaging modalities.
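The per-band MEG analysis described above can be sketched in a few steps: band-pass filtering, Hilbert-envelope power, downsampling to the 2-second fMRI TR, convolution with a haemodynamic response function, and a GLM against an fMRI ICA component timecourse. The sketch below is illustrative only, not the authors' code: the sampling rate, band limits, HRF shape, and the synthetic random data are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import gamma

fs = 600.0            # assumed MEG sampling rate (Hz)
tr = 2.0              # fMRI repetition time (s), as in the abstract
n = int(fs * 60)      # one minute of synthetic data for illustration
rng = np.random.default_rng(0)
meg = rng.standard_normal(n)   # stand-in for one SAM virtual-sensor timecourse

# 1. Band-pass filter (alpha, 8-13 Hz, chosen for illustration) and take
#    the magnitude of the Hilbert envelope as instantaneous band power.
b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, meg)))

# 2. Average the envelope within each 2-s window to match the fMRI TR.
samples_per_tr = int(fs * tr)
n_trs = n // samples_per_tr
power = envelope[: n_trs * samples_per_tr].reshape(n_trs, -1).mean(axis=1)

# 3. Convolve with a simple double-gamma HRF sampled at the TR.
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.sum()
predicted_bold = np.convolve(power, hrf)[:n_trs]

# 4. GLM: regress an fMRI ICA component timecourse (here random, as a
#    stand-in) on the HRF-convolved MEG power predictor.
ica_timecourse = rng.standard_normal(n_trs)
X = np.column_stack([np.ones(n_trs),
                     predicted_bold - predicted_bold.mean()])
beta, *_ = np.linalg.lstsq(X, ica_timecourse, rcond=None)
```

In the actual study this regression would be run per MEG voxel and per frequency band, with the resulting beta maps thresholded to find voxels whose power timecourses track each fMRI ICA component.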

Meeting abstract presented at VSS 2017

