Abstract
When a flash is presented adjacent to a moving stimulus, the flash appears to lag behind. To date, there have been three major explanations of this flash-lag effect: motion extrapolation, positional averaging, and differential latency. To test these models, a new stimulus configuration was introduced. An upright bar (10′ × 130′) served as the moving stimulus and was horizontally displaced to a new random position (within ±75′ of the fovea) every 0.167 s. Another upright rectangle (10′ × 40′) was flashed at a random time and at a random horizontal position (within ±25′ of the fovea), vertically adjacent to the jumping bar. The observer judged whether the flash appeared to the left or right of the jumping bar. A spatiotemporal correlogram was constructed, in which the percentage of "right" responses was plotted as a function of the flash's time and position relative to the current jumping bar. An ideal observer would respond with 100% accuracy to flashes presented simultaneously with the current jumping bar and would perform only at chance otherwise. The actual correlogram, however, showed a temporal shift of correct judgment, as though the observer judged the flash's position relative to the bar presented some 0.1 s after the flash. Thus, the flash-lag effect was found in random motion. The analysis revealed that none of the three models above predicted the data well. Instead, all the results were explained by differential latency combined with the assumption that the latency fluctuates over time according to a Gaussian probability density function, with mean and standard deviation of approximately 60–80 ms and 50–80 ms, respectively. These results suggest that the perceived position of the flash relative to the moving bar results from a comparison between the flash's position and multiple positions along the motion trajectory after the flash's physical onset.
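The differential-latency account with Gaussian latency jitter can be illustrated with a quick Monte Carlo sketch. The simulation below is only a toy reconstruction under assumed parameters (latency mean 70 ms and SD 65 ms, the midpoints of the ranges reported above; all names such as `bar_at` are illustrative, not from the original study): the bar jumps to a random position every 0.167 s, each flash is compared with the bar visible one stochastically delayed moment after flash onset, and the resulting responses agree best with bars presented shortly after the flash, reproducing the temporal shift of the correlogram.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME = 0.167        # bar jumps to a new random position every 0.167 s
LAT_MEAN = 0.070     # assumed latency mean (midpoint of the 60-80 ms estimate)
LAT_SD = 0.065       # assumed latency SD (midpoint of the 50-80 ms estimate)
N_FRAMES = 200_000

# Random-motion trajectory: piecewise-constant horizontal position (arcmin)
bar_pos = rng.uniform(-75, 75, N_FRAMES)

def bar_at(t):
    """Bar position shown on screen at time t (seconds)."""
    idx = np.clip((np.asarray(t) / FRAME).astype(int), 0, N_FRAMES - 1)
    return bar_pos[idx]

# Flashes at random times and random horizontal positions
n = 200_000
flash_t = rng.uniform(10 * FRAME, (N_FRAMES - 10) * FRAME, n)
flash_x = rng.uniform(-25, 25, n)

# Differential latency with Gaussian jitter: the observer compares the
# flash with the bar visible `lat` seconds after the flash's onset
lat = rng.normal(LAT_MEAN, LAT_SD, n)
resp_right = flash_x > bar_at(flash_t + lat)

# Temporal correlogram: how often the simulated left/right response agrees
# with the bar presented k frames relative to the flash
acc = {k: float(np.mean(resp_right == (flash_x > bar_at(flash_t + k * FRAME))))
       for k in range(-2, 4)}
print(acc)  # agreement peaks at k = 0 or k = 1, i.e. bars after the flash
```

Because the mean latency (70 ms) is a fraction of the 167 ms frame, the agreement peak falls on the current and the next bar position rather than on earlier ones, matching the roughly 0.1 s forward shift described in the abstract.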