Nathan Van der Stoep, Tanja Nijboer, Stefan Van der Stigchel; Audiovisual integration and spatial alignment in azimuth and depth. Journal of Vision 2016;16(12):865. doi: 10.1167/16.12.865.
© 2017 Association for Research in Vision and Ophthalmology.
Sound and light that originate from the same spatial location tend to be integrated into a unified percept, whereas this information is less likely to be integrated when it originates from different locations. This 'principle of spatial alignment' has been demonstrated in several neurophysiological and behavioral studies of audiovisual integration, but has only been investigated in a single depth plane. The current study investigated how spatial alignment of auditory and visual information in azimuth and in depth modulates audiovisual integration, using a redundant target effect (RTE) task. Participants were instructed to respond as fast as possible to unimodal visual, unimodal auditory, and audiovisual stimuli that appeared to the left and the right of fixation, but to withhold their response when a stimulus was presented at the central location. Visual stimuli were presented in near space only and varied in azimuth, whereas auditory stimuli were presented in both near and far space and also varied in azimuth. On multisensory trials, visual stimuli were accompanied by an auditory stimulus that was aligned or misaligned in azimuth and aligned or misaligned in depth. Each participant was well able to localize the auditory stimuli in azimuth and in depth, as indicated by a separate six-alternative forced-choice auditory localization task that preceded the RTE task. The amount of multisensory response enhancement (MRE) and race model inequality (RMI) violation was compared between the different spatial alignment conditions to assess how audiovisual integration was affected by spatial alignment in azimuth and depth. The amount of MRE and RMI violation was significantly modulated by spatial alignment of sound and light in azimuth, but not by spatial alignment in depth. These results indicate that when monitoring spatial alignment in depth is task-irrelevant, spatial alignment in azimuth is sufficient for evoking audiovisual integration.
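For readers unfamiliar with the two measures named above, MRE is commonly computed as the percentage speed-up of mean audiovisual reaction time relative to the fastest unimodal condition, and RMI violation tests whether the audiovisual RT distribution beats the race-model bound P_A(t) + P_V(t) (Miller, 1982). The sketch below illustrates these standard computations in Python with NumPy; it is a minimal illustration of the general measures, not the authors' actual analysis code, and all function names and synthetic data are assumptions.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical CDF of reaction times evaluated on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, n_points=100):
    """Amount by which the audiovisual CDF exceeds the race-model bound
    min(P_A(t) + P_V(t), 1) across a common time grid (Miller, 1982).
    Positive values indicate race model inequality (RMI) violation."""
    all_rts = np.concatenate([rt_a, rt_v, rt_av])
    t = np.linspace(all_rts.min(), all_rts.max(), n_points)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    violation = ecdf(rt_av, t) - bound
    return t, np.maximum(violation, 0.0)

def mre_percent(rt_a, rt_v, rt_av):
    """Multisensory response enhancement: percentage speed-up of mean
    audiovisual RT relative to the faster of the two unimodal means."""
    fastest_uni = min(np.mean(rt_a), np.mean(rt_v))
    return 100.0 * (fastest_uni - np.mean(rt_av)) / fastest_uni
```

On synthetic data where audiovisual responses are markedly faster than either unimodal condition, `mre_percent` is positive and `race_model_violation` is positive over part of the time grid, the pattern the study reports for azimuth-aligned trials.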
Meeting abstract presented at VSS 2016