Abstract
In recent years, there has been increased interest in the use of pattern analysis methods with MEG to study visual processing. In the present study, we examined the explanatory power of several models of visual stimuli to study the nature of the decodable neuromagnetic signal. While their brain activity was recorded with MEG, participants were shown thirty abstract patterns constructed from multiple Gabor elements. The patterns varied along three dimensions (number of elements, local orientation, and orientation coherence among elements). From the MEG data, we measured "decodability" for all possible pairwise comparisons between stimuli as a function of time. We then compared decoding performance to each model's predictions. The first model was based purely on retinotopic stimulation. This was an excellent predictor, particularly early in the time series, showing that retinotopic differences between stimuli are an important factor in determining decodability. We next examined three models used previously to study whether decoding methods in fMRI confer subvoxel spatial resolution (i.e. decoding of orientation columns in visual cortex). These three models were based on local orientation disparity, the radial bias, and the horizontal/vertical preference. Interestingly, all three models had little predictive power. We next tested HMAX, a biologically inspired model of early visual processing. The four early layers of HMAX provided a good account of the MEG data, indicating that global pattern differences, captured by HMAX's multi-scale representation, contribute to decodability. Finally, we were interested in how decodability relates to perception. We created a perceptual model from behavioural ratings of the perceived similarity of the patterns. Before 100 ms, similarity judgments were the best predictor apart from the retinotopic model; after 100 ms, they were the best model overall. This final result demonstrates a close correspondence between perception and decodable brain activity measured with MEG: if it "looks" different, it is decodable.
Meeting abstract presented at VSS 2014
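
The abstract does not specify the analysis pipeline in detail. The Python sketch below illustrates one way the two core steps could be implemented: time-resolved pairwise decoding of the stimuli, and correlation of the resulting decodability profile with a model's predicted pairwise dissimilarities. The classifier (linear discriminant analysis), cross-validation scheme, correlation measure (Spearman), and all function and variable names are assumptions for illustration, not the authors' actual methods.

    # Illustrative sketch only: classifier, cross-validation, and correlation
    # measure are assumptions; they are not taken from the abstract.
    import numpy as np
    from itertools import combinations
    from scipy.stats import spearmanr
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def pairwise_decoding(data, labels, n_stimuli, cv=5):
        """Cross-validated decoding accuracy for every stimulus pair at every
        time point.

        data   : array (n_trials, n_channels, n_times) of MEG epochs
        labels : array (n_trials,) of stimulus indices in 0..n_stimuli-1
        Returns an array (n_pairs, n_times) of decoding accuracies.
        """
        pairs = list(combinations(range(n_stimuli), 2))
        n_times = data.shape[2]
        acc = np.zeros((len(pairs), n_times))
        for p, (a, b) in enumerate(pairs):
            mask = np.isin(labels, [a, b])      # trials belonging to this pair
            y = labels[mask]
            for t in range(n_times):
                X = data[mask, :, t]            # sensor pattern at time t
                clf = LinearDiscriminantAnalysis()
                acc[p, t] = cross_val_score(clf, X, y, cv=cv).mean()
        return acc

    def model_fit_over_time(acc, model_rdm):
        """Spearman correlation between pairwise decodability and a model's
        predicted pairwise dissimilarities, at each time point.

        acc       : (n_pairs, n_times) decoding accuracies
        model_rdm : (n_pairs,) model-predicted dissimilarity for the same pairs
        """
        return np.array([spearmanr(acc[:, t], model_rdm).correlation
                         for t in range(acc.shape[1])])

Under these assumptions, each candidate model (retinotopic, orientation-based, HMAX layer, or perceptual similarity) would be expressed as a vector of predicted dissimilarities over the same stimulus pairs, and the models would be compared by their correlation time courses.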