Abstract
In the McGurk effect, incongruent auditory and visual syllables are perceived as a third, completely different syllable. Recent evidence suggests that there is a great deal of variability in how often the effect is perceived across different stimuli and different individuals. We describe a new model to characterize these differences based on the framework of Bayesian perceptual modeling. Three types of parameters are used: two for each participant (sensory noise and a fusion threshold) and one for each stimulus (the stimulus fusion strength). By incorporating sensory noise, the model is able to account for variability within individuals across multiple presentations of the same stimulus. Together with the threshold parameter, the sensory noise parameter explains variable responses to the same stimulus across participants; the stimulus strength parameter accounts for variable responses to different stimuli across participants. The model accurately described behavior in a dataset of 165 participants viewing up to 14 different McGurk stimuli. We demonstrate the utility of the model by using it to explain apparently contradictory results in the literature about the prevalence of the McGurk effect in children with autism spectrum disorder. By separately estimating participant and stimulus parameters, the model eliminates the confound of stimulus differences to allow for both prediction and comparison of McGurk effect perception.
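The parameterization described above can be illustrated with a minimal sketch. Here we assume Gaussian sensory noise: on each presentation, a noisy sample of the stimulus fusion strength is compared against the participant's fusion threshold, and the McGurk percept is reported when the sample exceeds it. The function names and specific parameter values are hypothetical, chosen only for illustration.

```python
import random
from math import erf, sqrt


def fusion_probability(strength, noise, threshold):
    """Probability that a Gaussian-noise sample of the stimulus
    fusion strength exceeds the participant's fusion threshold
    (illustrative assumption, not the authors' exact likelihood)."""
    z = (strength - threshold) / noise
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF


def simulate_trials(strength, noise, threshold, n_trials, seed=0):
    """Monte Carlo version: one noisy draw per stimulus presentation,
    producing trial-to-trial variability within a single participant."""
    rng = random.Random(seed)
    fused = sum(
        1 for _ in range(n_trials)
        if rng.gauss(strength, noise) > threshold
    )
    return fused / n_trials
```

Under this sketch, a strong stimulus paired with a low-threshold participant yields frequent fusion, while the same stimulus shown to a high-threshold participant yields little, capturing both participant-level and stimulus-level variability with separate parameters.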
Meeting abstract presented at VSS 2014