Maarten van der Smagt, Irene Buijing, Nathan Van der Stoep; Subjective crossmodal correspondence and audiovisual integration. Journal of Vision 2014;14(10):439. doi: 10.1167/14.10.439.
Research on multisensory integration often uses stochastic (race) models to distinguish a reaction-time (RT) enhancement due to multisensory integration from one due to probability summation. Only when performance on a multisensory task exceeds the race-model prediction is it attributed to multisensory integration. An important factor affecting multisensory integration is the unity assumption, i.e., the degree to which an observer infers that two sensory inputs come from the same source or event. Apart from the obvious spatial and temporal correspondence, other, often more subjective, similarity estimates may play a role as well. Here we investigate how subjective crossmodal correspondence influences multisensory integration. Observers first matched the loudness of a 100 ms white-noise burst to the brightness of a 0.86° light disc (6.25 cd/m²) presented for 100 ms on a darker (4.95 cd/m²) background, using a staircase procedure. In a subsequent speeded detection experiment, observers indicated as quickly and accurately as possible whether an audiovisual, auditory-only, or visual-only target was located to the left or right of fixation. The subjectively matched loudness, as well as loudness values +5 dB and −5 dB relative to it, were used as auditory stimuli. Auditory detection was generally faster than visual detection, and audiovisual detection was generally fastest. However, only when the subjectively matched loudness was used as the auditory stimulus did audiovisual detection exceed the race-model prediction. This result demonstrates the importance of subjective correspondence in multisensory integration, and may explain earlier results that found a surprising lack of integration.
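The race-model test referred to above is commonly operationalized as Miller's race model inequality: integration is inferred where the audiovisual RT distribution exceeds the bound given by the sum of the unisensory RT distributions. The following is a minimal sketch of such a test on simulated data; the function names and the simulated RT values are illustrative assumptions, not from the abstract.

```python
# Hypothetical sketch (not the authors' code): checking the race model
# inequality P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) on empirical CDFs.
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of reaction times, evaluated at time points t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """True where the audiovisual CDF exceeds the race-model bound."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) > bound

# Simulated RTs in ms: audiovisual responses faster than either
# unisensory condition, mimicking a multisensory speed-up.
rng = np.random.default_rng(0)
rt_a = rng.normal(300, 40, 500)   # auditory-only
rt_v = rng.normal(330, 40, 500)   # visual-only
rt_av = rng.normal(250, 30, 500)  # audiovisual
t = np.linspace(150, 450, 61)
print(race_model_violation(rt_av, rt_a, rt_v, t).any())
```

A violation at any point of the time grid is the signature the abstract describes: the audiovisual speed-up is larger than probability summation alone can produce.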
Meeting abstract presented at VSS 2014