Christine W J Chee-Ruiter, Patricia A Neil, Christian Scheier, David J Lewkowicz, Shinsuke Shimojo; Development of multimodal spatial integration and orienting behavior in humans. Journal of Vision 2003;3(9):774. doi: 10.1167/3.9.774.
The spatial location of objects and events is often specified by concurrent auditory and visual inputs. Adults of many species, including humans, take advantage of such multimodal redundancy in spatial localization. Previous studies have shown that adults respond more quickly and reliably to multimodal than to unimodal localization cues. The current study investigated for the first time the development of audio-visual integration in spatial localization in infants between 1 and 10 months of age. Infants were presented with a series of unimodal or spatially and temporally coincident bimodal lights and sounds at +/−25 and +/−45 degrees from center, and their head and eye orienting responses were measured frame-by-frame from digital video records. Subjects' data were aggregated into 2-month age bins with 12 infants per group for both unimodal and bimodal experiments. Results showed that infants older than four months responded significantly faster to bimodal stimuli than to either visual or auditory stimuli alone (p < 0.01), whereas younger infants responded uniformly to all stimuli, whether bimodal or unimodal. Both unimodal and bimodal reaction times tended to decrease with age, and characteristic orienting behaviors changed qualitatively with age. Our results are consistent with neurophysiological findings from multimodal sites in the superior colliculus of infant monkeys in showing that multimodal enhancement of responsiveness is not present at birth but emerges during the first months of life.