Abstract
A fundamental aspect of perception is the rapid and reliable combination of sensory information from multiple modalities. Accurate perception of a multisensory object therefore relies heavily on the ability to analyze and compare the temporal and spatial information of the input from each modality so that their correspondence is correctly computed. Previous studies have shown that playing video games enhances both visual attention and visual perception (e.g., Green & Bavelier, 2003, 2007). However, because video games are typically multisensory in nature, containing both auditory and visual components, their influence seems likely to reach beyond unimodal visual effects and to alter the processing of multisensory information more generally, a possibility that has been little investigated. To address this, we presented subjects with auditory and visual stimuli at varying stimulus onset asynchronies (SOAs), in 50-ms increments, ranging from the auditory stimulus (a tone) occurring 300 ms before the visual stimulus (a checkerboard) to 300 ms after it. Subjects performed a simultaneity judgment task (did the stimuli appear at the same time or at different times?) and a temporal-order judgment task (which stimulus came first?). In the simultaneity judgment task, non-video-game players showed a broader and more asymmetric window of integration: they were more likely than video-game players to report the stimuli as simultaneous when the auditory stimulus followed the visual one. In the temporal-order judgment task, video-game players were more accurate than non-video-game players at the most difficult SOAs (those closest to simultaneity). No between-group differences in response times were observed, although all subjects responded more slowly at the most difficult SOAs. Together, these results suggest that the benefits of playing video games are not confined to the visual modality but extend to the processing of multisensory information, altering both the temporal window and the accuracy of multisensory integration.
This material is based on work supported under a National Science Foundation Graduate Research Fellowship awarded to Sarah E. Donohue.