Abstract
The mechanisms that underlie time perception have been the object of an ongoing debate. Some theories propose a single, centralized mechanism responsible for the encoding of duration, whereas others propose that time perception is the product of a network of distributed mechanisms, with different mechanisms operating at different time scales. In this study, we investigated whether a well-known duration aftereffect induced by adaptation to visual motion in the sub-second range (often referred to as ‘perceptual timing’) also occurs in the supra-second range (‘interval timing’), which is more accessible to cognitive control. In the adaptation phase, participants fixated the centre of the screen while an adaptor (a drifting Gabor) was displayed on one side of the monitor (initially for 32 seconds, with 8-second top-ups). The speed of the adaptor alternated between 5 and 20 °/s to minimize changes in the perceived speed of the targets. In the test phase, participants judged the relative duration of two intervals containing Gabors drifting at 10 °/s, displayed sequentially on either side of a central fixation spot, one in the same location as the adaptor and the other in an unadapted location. We observed that adaptation substantially compressed the apparent duration of a 600 ms interval, whereas it had little effect on a 1200 ms interval. Duration discrimination thresholds after adaptation did not differ between the two durations and were comparable to those observed without adaptation, implying that the observed differences in perceived duration cannot be ascribed to changes in attention or to noisier duration estimates. This pattern of results indicates that adaptation to visual motion can be used as a tool to investigate the mechanisms underlying time perception at different time scales.