Research into multisensory perception has shown many examples of crossmodal interactions where auditory signals can bias visual perception (Alais, Newell, & Mamassian, 2010). This has been demonstrated in a wide range of tasks, including spatial judgments (Alais & Burr, 2004; Battaglia, Jacobs, & Aslin, 2003), temporal judgments (Bertelson & Aschersleben, 2003; Fujisaki, Shimojo, Kashino, & Nishida, 2004; Vroomen, Keetels, de Gelder, & Bertelson, 2004), motion perception (Arrighi, Marini, & Burr, 2009; Hidaka et al., 2009; Teramoto et al., 2012), and perceptual ambiguity (Alais, van Boxtel, Parker, & van Ee, 2010; Holcombe & Seizova-Cajic, 2008; Sekuler, Sekuler, & Lau, 1997; van Ee, van Boxtel, Parker, & Alais, 2009).
Detection is faster and more accurate for spatially and temporally congruent audiovisual targets than for unimodal targets (Bolognini, Frassinetti, Serino, & Làdavas, 2004; Driver & Spence, 1998; Frassinetti, Bolognini, & Làdavas, 2002; Stein, Meredith, Huneycutt, & McDade, 1989), and this behavioral benefit is corroborated by event-related potential (ERP) data showing early crossmodal interactions (Giard & Peronnet, 1999; Molholm et al., 2002). These findings illustrate that visual processing is not isolated from signals in other modalities and that behavioral performance can be improved by interactions between vision and audition.