Abstract
Research on multisensory integration often makes use of stochastic ('race') models to distinguish performance enhancement in Reaction Times (RTs) due to multisensory integration from enhancement due to statistical facilitation. Only when performance on a multisensory task exceeds the race-model prediction is it attributed to multisensory 'integration'. Previously (VSS 2014) we showed that subjective cross-modal correspondence (i.e., a subjective intensity match) influences the degree to which race-model violations occur. Here we investigate how the resulting inter-individual differences in RTs to unimodal auditory and visual stimuli affect multisensory integration. Observers first matched the loudness of a 100 ms white-noise burst to the brightness of a 0.86° light disc (6.25 cd/m²) presented for 100 ms on a darker (4.95 cd/m²) background, using a staircase procedure. In a subsequent speeded detection experiment, observers were instructed to press a key as soon as an audiovisual, auditory-only, or visual-only target was presented to the left or right of fixation. The subjectively matched loudness, as well as +5 dB and −5 dB loudness values, served as auditory stimuli. Catch trials without stimulation were also included. For each subject and each auditory condition, we calculated the unimodal RT-difference between detecting a visual and an auditory stimulus. In addition, we calculated the Multisensory Response Enhancement (MRE) and whether the race-model predictions were violated (race-model violation, RMV). We then correlated MRE and average RMV with the unimodal RT-differences across observers. Interestingly, the results show a significant negative correlation between MRE magnitude and the unimodal RT-difference, but no correlation between MRE and individual RTs, nor between RMV and the unimodal RT-difference. These results are in line with the model proposed by Otto, Dassy, and Mamassian (Journal of Neuroscience, 33, 7463-7474, 2013) and indicate that unimodal stimuli that yield similar RTs within an individual lead to the largest Multisensory Response Enhancement.
Meeting abstract presented at VSS 2015
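For illustration, a minimal sketch of the two dependent measures is given below. It uses the standard definitions (MRE as the percentage speed-up of the mean audiovisual RT relative to the fastest unimodal mean RT; RMV as a positive excess of the audiovisual CDF over the summed unimodal CDFs, i.e. the Miller race-model inequality). The synthetic RT samples, the time grid, and summarising the violation as an area are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic RT samples (ms) for one observer -- stand-ins for the measured
# auditory-only, visual-only, and audiovisual detection RTs.
rt_a  = 230 + rng.exponential(60, 200)
rt_v  = 260 + rng.exponential(60, 200)
rt_av = 210 + rng.exponential(50, 200)

# Unimodal RT-difference for this observer (visual minus auditory mean RT).
rt_diff = rt_v.mean() - rt_a.mean()

# Multisensory Response Enhancement (MRE): relative speed-up of the mean
# audiovisual RT over the faster of the two unimodal mean RTs.
fastest_unimodal = min(rt_a.mean(), rt_v.mean())
mre = 100.0 * (fastest_unimodal - rt_av.mean()) / fastest_unimodal

# Race-model (Miller) inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
# Any positive excess of the audiovisual CDF over the (bounded) sum of the
# unimodal CDFs counts as a race-model violation (RMV).
def ecdf(samples, grid):
    return (samples[:, None] <= grid[None, :]).mean(axis=0)

grid = np.linspace(150, 800, 131)                       # time grid in ms (assumed range)
excess = ecdf(rt_av, grid) - np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
rmv = np.trapz(np.clip(excess, 0.0, None), grid)        # positive area = violation

print(f"RT-difference: {rt_diff:.1f} ms, MRE: {mre:.1f}%, RMV area: {rmv:.2f}")
```

Computing these quantities per observer and per auditory condition, and then correlating MRE and average RMV with the unimodal RT-differences across observers, corresponds to the analysis described in the abstract.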