August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Audiovisual integration and spatial alignment in azimuth and depth.
Author Affiliations
  • Nathan Van der Stoep
    Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands
  • Tanja Nijboer
    Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands
  • Stefan Van der Stigchel
    Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands
Journal of Vision September 2016, Vol.16, 865. doi:https://doi.org/10.1167/16.12.865
Abstract

Sound and light originating from the same spatial location tend to be integrated into a unified percept, whereas information originating from different locations is less likely to be integrated. This 'principle of spatial alignment' has been demonstrated in several neurophysiological and behavioral studies of audiovisual integration, but only within a single depth plane. The current study investigated how spatial alignment of auditory and visual information in azimuth and in depth modulates audiovisual integration, using a redundant target effect (RTE) task. Participants were instructed to respond as quickly as possible to unimodal visual, unimodal auditory, and audiovisual stimuli presented to the left and right of fixation, but to withhold their response when a stimulus appeared at the central location. Visual stimuli were presented only in near space and varied in azimuth, whereas auditory stimuli were presented in both near and far space and varied in azimuth. On multisensory trials, each visual stimulus was accompanied by an auditory stimulus that was either aligned or misaligned in azimuth, and either aligned or misaligned in depth. All participants were able to localize the auditory stimuli in azimuth and in depth, as indicated by a separate six-alternative forced-choice auditory localization task performed before the RTE task. The amount of multisensory response enhancement (MRE) and race model inequality (RMI) violation was compared across the spatial alignment conditions to assess how audiovisual integration was affected by spatial alignment in azimuth and depth. MRE and RMI violation were significantly modulated by spatial alignment of sound and light in azimuth, but not by spatial alignment in depth. These results indicate that when monitoring spatial alignment in depth is task-irrelevant, spatial alignment in azimuth is sufficient to evoke audiovisual integration.
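The two dependent measures named above, multisensory response enhancement and race model inequality violation, have standard formulations: MRE is the percentage speed-up of the multisensory mean reaction time relative to the fastest unisensory mean, and the RMI (Miller's inequality) states that P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V), with positive deviations indicating integration beyond statistical facilitation. A minimal sketch of both computations, using invented reaction times purely for illustration (not the study's data):

```python
import numpy as np

# Hypothetical reaction times in ms, for illustration only.
rt_a  = np.array([320, 350, 365, 400, 430])   # auditory-only trials
rt_v  = np.array([310, 340, 360, 395, 420])   # visual-only trials
rt_av = np.array([270, 290, 305, 330, 360])   # audiovisual trials

def ecdf(rts, t):
    """Empirical cumulative distribution: P(RT <= t)."""
    return np.mean(rts <= t)

# Multisensory response enhancement (%): speed-up of the multisensory
# mean relative to the fastest unisensory mean.
fastest_uni = min(rt_a.mean(), rt_v.mean())
mre = 100 * (fastest_uni - rt_av.mean()) / fastest_uni

# Race model inequality: P(t | AV) <= P(t | A) + P(t | V), evaluated
# over a grid of time points. Positive differences are violations.
ts = np.arange(250, 450, 10)
violation = np.array(
    [ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t)) for t in ts]
)

print(f"MRE: {mre:.1f}%  max RMI violation: {violation.max():.2f}")
```

In practice the CDFs are estimated per participant from quantized RT distributions and the violation is tested at fixed percentiles; this sketch only shows the shape of the two computations.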

Meeting abstract presented at VSS 2016
