Robert Sekuler, Victoria Wong; Integration of multimodal cues in temporal segmentation of visual motion. Journal of Vision 2004;4(8):704. doi: 10.1167/4.8.704.
The perceptual world is constructed from a stream of events that play out over space and time. Making sense of this spatio-temporal stream of sensory input requires segmenting it into appropriate, meaningful constituents. But what rules do sensory systems follow in combining the multiple, multimodal segmentation cues that characterize natural conditions? For an answer, we used a bistable form of visual motion to gauge the strength of auditory and visual segmentation cues. In this bistable percept, two objects are seen either as moving toward and through one another, or as colliding and then bouncing off one another. Our dependent measure was the probability that the stimulus would be seen as bouncing (Sekuler, Sekuler & Lau, Nature 1997). Two auditory and two visual segmentation cues were presented singly and in various combinations, and their combined effects were assessed. To minimize individual differences, each cue's physical value was adjusted according to each subject's sensitivity to that cue; additionally, the effects of the separate cues were equated to one another. Using the psychophysical results from various combinations of these segmentation cues, we evaluated a suite of alternative models of sensory integration. These evaluations showed that the perceptual state of the bistable visual motion was governed by a linear sum of the separate cues, but with strongly differential weighting of the individual cues within each modality. A second experiment showed that a cue's effectiveness depends on its perceptual properties, not merely its physical ones.
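The linear-sum account described above can be sketched as a simple computational model: each segmentation cue contributes a weighted strength to a single decision variable, which is mapped onto a bounce probability. This is a minimal illustration, not the authors' fitting procedure; the function names, the logistic link, and all numeric values here are assumptions chosen for the example.

```python
import numpy as np

def bounce_probability(cues, weights, bias=0.0):
    """Probability of a 'bounce' percept as a logistic function of a
    weighted linear sum of segmentation-cue strengths.

    cues    : cue strengths (e.g. two auditory, two visual), each
              normalized to the subject's sensitivity for that cue
    weights : per-cue weights, allowing the strongly differential
              weighting of cues within each modality reported above
    """
    z = bias + np.dot(weights, cues)
    return 1.0 / (1.0 + np.exp(-z))  # logistic link (an assumption)

# Hypothetical example: four cues equated in strength, but weighted
# unequally within each modality. All values are illustrative only.
cues = np.array([1.0, 1.0, 1.0, 1.0])
weights = np.array([0.9, 0.2, 1.1, 0.3])
p = bounce_probability(cues, weights, bias=-1.0)
```

With no cues present (all strengths zero), the model predicts chance-level bouncing at the chosen bias; adding cues shifts the probability monotonically, which is what a linear-sum account requires.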