While vision dominates hearing in the perception of space, hearing dominates vision in the perception of time, a phenomenon termed auditory driving (Fendrich & Corballis, 2001; Gebhard & Mowbray, 1959; Recanzone, 2003; Shipley, 1964; Welch, DuttonHurt, & Warren, 1986) or “temporal ventriloquism” (Aschersleben & Bertelson, 2003; Bertelson & Aschersleben, 2003; Burr, Banks, & Morrone, 2009; Hartcher-O'Brien & Alais, 2007; Morein-Zamir, Soto-Faraco, & Kingstone, 2003). Sounds can also alter the perception of a sequence of visual events, inducing the illusory perception of extra visual stimuli (Shams, Kamitani, & Shimojo, 2000, 2002). Sounds not only alter the perceived timing of visual flashes but can, in some instances, improve visual discrimination, for example by increasing the flashes' perceived temporal separation (Morein-Zamir et al., 2003; Parise & Spence, 2009). Auditory driving can also improve orientation discrimination of visual bars (Berger, Martelli, & Pelli, 2003) by increasing the number of representations of the stimulus, which in turn is known to improve discrimination performance (Verghese & Stone, 1995). That hearing dominates vision in determining perceived event time is qualitatively consistent with the “Bayesian” account of multisensory integration, since auditory temporal cues are much more precise than visual ones (Burr et al., 2009; Morein-Zamir et al., 2003; Recanzone, 2003). However, unlike for spatial integration (Alais & Burr, 2004; Ernst & Banks, 2002), the quantitative predictions of the Bayesian model were found to be less than perfect (Burr et al., 2009).
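To make the precision argument explicit, the reliability-weighted (maximum-likelihood) cue-combination scheme underlying this Bayesian account (Ernst & Banks, 2002; Alais & Burr, 2004) can be sketched as follows; the symbols $\hat{T}_A$, $\hat{T}_V$, $\sigma_A$, and $\sigma_V$ for the unimodal time estimates and their noise are illustrative notation introduced here, not taken from the cited studies:

\[
\hat{T}_{AV} = w_A \hat{T}_A + w_V \hat{T}_V, \qquad
w_A = \frac{1/\sigma_A^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}}, \qquad
w_V = 1 - w_A,
\]
\[
\sigma_{AV}^{2} = \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}} \;\leq\; \min\!\left(\sigma_A^{2}, \sigma_V^{2}\right).
\]

Because auditory temporal noise $\sigma_A$ is typically much smaller than visual temporal noise $\sigma_V$, the auditory weight $w_A$ approaches 1 and the combined estimate is dominated by audition, which is the qualitative prediction referred to above; the quantitative question is whether the measured weights and the reduction in variance match these equations exactly.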