Shoko Kanaya, Waka Fujisaki, Shin'ya Nishida, Shigeto Furukawa, Kazuhiko Yokosawa; Comparisons of temporal frequency limits for cross-attribute binding tasks in vision and audition. Journal of Vision 2013;13(9):885. doi: https://doi.org/10.1167/13.9.885.
The speed of temporal binding of sensory signals, processed in parallel, can be psychophysically estimated from a critical temporal frequency beyond which observers cannot discriminate the phase relationship between two oscillating stimulus sequences. Fujisaki and Nishida (2010) showed that the temporal limit for visual cross-attribute binding tasks, as well as cross-modal binding tasks, is about 2.5 Hz regardless of attribute combination. Last year, we examined the temporal limits of two auditory cross-attribute binding tasks and found that the limit of one condition was significantly higher than 2.5 Hz (Kanaya et al., 2012, VSS). However, that experiment did not completely exclude sensory cues produced by peripheral interactions of the two auditory sequences. The present study therefore measured the temporal binding limits within and across three auditory attributes (frequency (FREQ) and amplitude (AMP) of a pure tone, and fundamental frequency (F0) of a band-limited pulse train), using stimulus parameters carefully selected to eliminate signal interactions within peripheral channels. The same participants also performed a visual cross-attribute binding task (color-orientation). Results showed that the temporal limits for auditory within-attribute binding tasks were 3.9 (FREQ-FREQ), 5.4 (AMP-AMP), and 3.4 Hz (F0-F0). The limits for auditory cross-attribute binding tasks were 4.0 (FREQ-AMP), 3.6 (FREQ-F0), and 3.3 Hz (AMP-F0), whereas the limit of the visual cross-attribute binding task remained close to 2.5 Hz. Thus, even under conditions that exclude peripheral interactions, the temporal limit obtained with auditory cross-attribute binding tasks can be higher than that of vision.
Our findings are consistent with the hypothesis that cross-modal and visual cross-attribute binding tasks reflect a high-level, attribute-independent binding mechanism, whereas auditory cross-attribute binding tasks, at least those we have tested, can also reflect sensory processing stages earlier than this high-level mechanism, because neural processing for different auditory attributes is not segregated as clearly as that for different visual attributes.
Meeting abstract presented at VSS 2013