Waka Fujisaki, Shinsuke Shimojo, Makio Kashino, Shin'ya Nishida; Recalibration of audiovisual simultaneity by adaptation to a constant time lag. Journal of Vision 2003;3(9):34. doi: 10.1167/3.9.34.
Detecting simultaneity between visual and auditory events is a challenging problem for the sensory system, since the two signals differ in both physical transmission time and neural processing time. One strategy the brain might adopt to overcome this difficulty is to adaptively recalibrate the point of simultaneity from daily experience of audiovisual events, rather than relying on a fixed neural circuit, as has generally been believed. If this is true, and if the time constant of the recalibration is short enough, then after adaptation to a fixed temporal lag between visual and auditory events, subjective simultaneity should shift toward the adapted lag.
Audiovisual stimuli consisted of a white ring (5 deg in diameter) flashed for one refresh on an 85-Hz CRT monitor, and a 10-ms tone pip (1800 Hz) presented binaurally through headphones. During adaptation, an audiovisual pair was repeatedly presented at an average interval of 1.5 s. Each pair carried a constant time lag, ranging from −350 ms to +350 ms, with negative values indicating that the tone led the flash. In a test trial, the same audiovisual pair was presented with a lag randomly chosen from 13 values between −410 and +410 ms (method of constant stimuli), and subjects made a yes-no simultaneity judgment. Initial adaptation lasted 3 min, and a top-up adaptation (inserted between test trials) lasted 10 s.
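The analysis implied by the method of constant stimuli can be sketched in code: the proportion of "simultaneous" responses at each test lag is fit with a bell-shaped curve whose peak gives the point of subjective simultaneity (PSS). The sketch below is illustrative only, assuming evenly spaced lags (the abstract gives the range and count but not the exact values), a Gaussian response model (a common but not stated choice), and fabricated noiseless yes-rates in place of real data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed: 13 test lags evenly spaced over the -410..+410 ms range
# (the abstract does not specify the exact spacing).
lags = np.linspace(-410, 410, 13)

# Gaussian model of the simultaneity-judgment curve:
# mu is the point of subjective simultaneity (PSS), sigma the window width.
def simultaneity_curve(lag, amp, mu, sigma):
    return amp * np.exp(-((lag - mu) ** 2) / (2 * sigma ** 2))

# Illustrative (not real) yes-rates: a window centered at +40 ms,
# as if the PSS had shifted toward a positive adapted lag.
p_yes = simultaneity_curve(lags, 0.9, 40.0, 120.0)

# Fit the model to recover the PSS from the yes-rates.
popt, _ = curve_fit(simultaneity_curve, lags, p_yes, p0=[1.0, 0.0, 100.0])
amp_hat, pss_hat, sigma_hat = popt
print(f"estimated PSS: {pss_hat:.1f} ms")
```

In practice the same fit would be run on pre- and post-adaptation data, with the difference in fitted PSS giving the size of the recalibration shift.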
The results indicate that adaptation to an audiovisual lag shifted the point of subjective simultaneity toward the adapted lag. The shift size, taken as the distance between the largest shifts in opposite directions, amounted to 29, 61, and 137 ms for the three subjects tested.
This finding, which we believe is the first demonstration of a cross-modal aftereffect in the temporal domain, is consistent with the hypothesis that the brain adaptively calibrates subjective audiovisual simultaneity to the current environment.