August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract  |   August 2014
What is the nature of the decodable neuromagnetic signal? MEG, Models, and Perception.
Author Affiliations
  • Thomas Carlson
    Department of Cognitive Sciences, Macquarie University, Sydney, Australia
  • Seyed Khaligh-Razavi
    MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
  • Nikolaus Kriegeskorte
    MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
Journal of Vision August 2014, Vol.14, 585. doi:10.1167/14.10.585
Thomas Carlson, Seyed Khaligh-Razavi, Nikolaus Kriegeskorte; What is the nature of the decodable neuromagnetic signal? MEG, Models, and Perception. Journal of Vision 2014;14(10):585. doi: 10.1167/14.10.585.
      © ARVO (1962-2015); The Authors (2016-present)

      ×
  • Supplements
Abstract

In recent years, there has been increased interest in using pattern analysis methods with MEG to study visual processing. In the present study, we examined the explanatory power of several models of visual stimuli to characterize the nature of the decodable neuromagnetic signal. While their brain activity was recorded with MEG, participants were shown thirty abstract patterns constructed from multiple Gabor elements. The patterns varied along three dimensions (number of elements, local orientation, and orientation coherence among elements). From the MEG data, we measured "decodability" for all possible pairwise comparisons between stimuli as a function of time. We then compared decoding performance to each model's predictions. The first model was based purely on retinotopic stimulation. This was an excellent predictor, particularly early in the time series, showing that retinotopic differences between stimuli are an important factor in determining decodability. We next examined three models used previously to study whether decoding methods in fMRI confer subvoxel spatial resolution (i.e., decoding orientation columns in visual cortex). These three models were based on local orientation disparity, the radial bias, and the horizontal/vertical preference. Interestingly, all three models had little predictive power. We next tested HMAX, a biologically inspired model of early visual processing. The four early layers of HMAX provided a good account of the MEG data, indicating that global pattern differences, captured by HMAX's multi-scale representation, contribute to decodability. Finally, we were interested in how decodability relates to perception. We created a perceptual model from behavioural ratings of the perceived similarity of the patterns. Before 100 ms, similarity judgments were the best predictor of all models except the retinotopic model; after 100 ms, they were the best predictor overall.
This final result demonstrates a close correspondence between perception and decodable brain activity measured with MEG: if it "looks" different, it is decodable.
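The analysis described above, pairwise decoding of stimulus pairs followed by comparison of the resulting decodability matrix against a model's predicted dissimilarities, can be sketched in code. The following is a minimal illustration on simulated data, not the authors' pipeline: the nearest-class-mean classifier, the data shapes, and the stand-in "model" (distance between the generative pattern means) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_trials, n_sensors = 6, 20, 32  # toy sizes; the study used 30 stimuli

# Simulated sensor data: per stimulus, trials x sensors around a stimulus-specific mean.
means = rng.normal(0.0, 1.0, (n_stim, n_sensors))
data = means[:, None, :] + rng.normal(0.0, 3.0, (n_stim, n_trials, n_sensors))

def pairwise_decodability(a, b):
    """Leave-one-trial-out nearest-class-mean accuracy for one stimulus pair."""
    correct = 0
    for X, Y in ((a, b), (b, a)):
        y_mean = Y.mean(axis=0)
        for i in range(len(X)):
            x_mean = np.delete(X, i, axis=0).mean(axis=0)  # own-class mean without trial i
            if np.linalg.norm(X[i] - x_mean) < np.linalg.norm(X[i] - y_mean):
                correct += 1
    return correct / (len(a) + len(b))

def spearman(u, v):
    """Spearman rank correlation (no tie correction; adequate for a sketch)."""
    ru = np.argsort(np.argsort(u)).astype(float)
    rv = np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(ru, rv)[0, 1]

# Decodability for all pairwise comparisons (here, one time point).
pairs = [(i, j) for i in range(n_stim) for j in range(i + 1, n_stim)]
decodability = np.array([pairwise_decodability(data[i], data[j]) for i, j in pairs])

# Stand-in "model": predicted dissimilarity = distance between generative means.
model_rdm = np.array([np.linalg.norm(means[i] - means[j]) for i, j in pairs])

rho = spearman(decodability, model_rdm)
print(f"model vs. decodability (Spearman rho): {rho:.2f}")
```

In the study this comparison would be repeated at each time point in the MEG time series and for each candidate model (retinotopic, orientation-based, HMAX layers, perceptual similarity), yielding a time course of model fit.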

Meeting abstract presented at VSS 2014
