Abstract
Heron et al. (2012) argued that the perceived duration of visual events results from processing by duration-selective channels, akin to those in the spatial-frequency domain. This argument was based on the duration aftereffect, in which the perceived duration of a test event is repelled away from the duration of a previously presented (adapted) stimulus. However, the question remains which aspect of duration observers adapt to: the (physical) time elapsed between the onset and offset of a stimulus, or its perceived duration? To dissociate these options, observers were repeatedly presented with a radial pattern (adaptation stimulus, 2 deg in diameter). The adaptation stimulus was either a rotating radial grating (2.1 cycles/s) with a physical duration of 0.3 s and an average perceived duration of 0.57 s (temporal-frequency-induced-time-dilation, or TFITD, condition), a static radial grating with the same physical duration (static-baseline, or SB, condition), or a static grating with a duration matched, for each observer, to the perceived duration of the rotating stimulus (i.e. 0.57 s on average; static-dilation-match, or SDM, condition). After repeated exposure (100 repetitions on the first trial; four on each subsequent trial), observers compared the perceived duration of a visual target stimulus to an auditory reference. The results show that the visual target was perceived as lasting longer in the SB condition than in the SDM condition. Moreover, Bayesian analysis showed that the perceived duration of the visual target in the TFITD condition did not differ from that in the SB condition, but was longer than that in the SDM condition. This indicates that observers adapt to a duration defined by the onset and offset of a stimulus, not to its perceived duration. Our results suggest that channel-based encoding of duration occurs at a processing stage preceding the stages whose processing relates to perceived duration.