How are different sources of information integrated in the brain while we overtly explore natural multimodal scenes? It is well established that the speed and accuracy of eye movements in performance tasks improve significantly with congruent multimodal stimulation (Arndt & Colonius, 2003; Corneil & Munoz, 1996; Corneil, Van Wanrooij, Munoz, & Van Opstal, 2002). This supports the claim that sensory evidence is integrated before a motor response. Indeed, recent findings indicate that brain areas may interact in many different ways (Driver & Spence, 2000; Macaluso & Driver, 2005). The convergence of unimodal information creates multimodal functionality (Beauchamp, Argall, Bodurka, Duyn, & Martin, 2004; Meredith & Stein, 1986), even in low-level areas traditionally conceived as unimodal (Calvert et al., 1997; Ghazanfar, Maier, Hoffman, & Logothetis, 2005; Macaluso, Frith, & Driver, 2000); evidence is also mounting for early feedforward convergence of unimodal signals (Foxe & Schroeder, 2005; Fu et al., 2003; Kayser, Petkov, Augath, & Logothetis, 2005; Molholm et al., 2002).