Abstract
Studies of perceptual learning have traditionally focused on unisensory stimuli. However, multisensory interactions can occur at early stages of visual processing (Watkins et al., 2006, 2007) and might therefore play a role in low-level perceptual learning. Indeed, we recently demonstrated that training on a multisensory motion coherence detection task facilitates visual perceptual learning (Seitz, Kim, & Shams, 2006). Furthermore, this facilitation is not due to a general effect of attention but rather involves processes sensitive to featural relations between the visual and auditory stimuli (Kim et al., 2007). In the current study, we investigated the neural mechanisms underlying these multisensory learning effects. We scanned the brains of six subjects using functional MRI before and after 10 days of training on the congruent audiovisual motion coherence detection task. In the scanner, subjects performed a motion discrimination task involving congruent and incongruent motion stimuli for both the trained motion direction and the opposite direction. Comparing multisensory effects pre- and post-training, we observed robust changes in activation specific to the trained motion direction in a wide range of brain areas, including subcortical structures (cerebellum) and “amodal” association cortices (frontal, anterior temporal, superior parietal, anterior cingulate), as well as areas traditionally known as sites of multisensory integration (inferior parietal lobe, superior temporal sulcus). Of particular interest, multisensory learning effects were also observed in visual and auditory cortices, which are typically considered to be “unisensory”. While many of these brain areas have previously been implicated in multisensory processing, this study demonstrates a substantial degree of plasticity in multisensory processing systems. Furthermore, changes at these multiple processing levels may underlie the enhancement of visual learning observed with multisensory training.