September 2011
Volume 11, Issue 11
Vision Sciences Society Annual Meeting Abstract
The way of multisensory spatial processing with audio-visual speech stimuli differs in single and bilateral visual presentations
Author Affiliations
  • Shoko Kanaya
    The University of Tokyo, Japan
  • Kazuhiko Yokosawa
    The University of Tokyo, Japan
Journal of Vision September 2011, Vol.11, 799. doi:
Ventriloquism is defined as a shift in the perceived location of a sound source toward a synchronized visual stimulus. Previous studies have led to the conclusion that the size of the ventriloquism effect is regulated by physical, not cognitive, factors. However, this conclusion is based on simplified experimental designs that typically entail a single pair of audio and visual stimuli. Such designs do not capture our responses to a multisensory world in which many signals impact various sensory modalities. We examined the hypothesis that cognitive factors, as well as physical ones, modulate the ventriloquism effect in complex situations. We used audio-visual speech stimuli in two experiments involving simplified (Experiment 1) and complex (Experiment 2) designs. Experiment 1 presented one movie of a face together with one voice, whereas Experiment 2 presented two bilateral movies and one voice. In both experiments a cognitive factor, namely the congruency of the speech sounds and visual stimuli, was varied (congruent, incongruent). In both experiments visual stimuli appeared on a central CRT monitor, whereas auditory stimuli were presented from 13 positions created by left (L)-right (R) phase differences. Participants judged whether the location of the auditory source was to the left or right of a central fixation cross (on the monitor). In Experiment 1, we found no differences due to the cognitive factor: consistent with previous findings, the auditory localization bias did not differ as a function of congruency. In Experiment 2, we manipulated the physical salience of the bilateral visual stimuli as a physical factor, as well as the congruency of the audio-visual syllables as a cognitive factor. In this experiment, both visual stimulus salience and audio-visual congruency elicited relatively large auditory localization biases. In conclusion, these experiments show that a cognitive factor affects the way audio-visual spatial information is integrated in more complex, real-world situations.

