Abstract
Non-invasive mapping of brain function and connectivity is a fundamental area of human cognitive neuroscience, including vision research. Direct neuronal activity, with fluctuations on a sub-millisecond time scale, can be measured non-invasively with magnetoencephalography (MEG) and electroencephalography (EEG). In the past decade, there has been rapid development of whole-head MEG and EEG sensor arrays and of algorithms that enable imaging of dynamic brain activity from MEG data, a combination referred to as electromagnetic source imaging (ESI). However, current techniques for functional brain mapping using ESI suffer from important shortcomings. Current uses of MEG and EEG are limited to the localization of responses to simple stimuli, where the responses are presumed to arise from a single source or a very small number of sources. Reliable reconstruction of more complex multi-source activity patterns has proved challenging due to limitations in the source localization algorithms used in ESI. Moreover, the low signal-to-noise ratio of the cortical responses measured by MEG and EEG necessitates averaging of data across multiple trials, leading to lengthy experiments, especially when multiple conditions are studied. Therefore, there is a dire need to reduce imaging time and to develop more automated tools for data analysis. We present recent work from our laboratory on the development of novel and powerful machine learning algorithms for high-fidelity brain imaging with MEG. Some of these advances will be illustrated by reconstructing the spatiotemporal dynamics of cortical networks involved in visually guided reaching, saccadic eye movements, and decision making.
Acknowledgements: R01DC004855, R01DC006435, and R01NS44590.