David Alais, Uxía Fernández Folgueiras, Johahn Leung; A common mechanism processes auditory and visual motion. Journal of Vision 2018;18(10):1135. doi: https://doi.org/10.1167/18.10.1135.
Neuroimaging studies suggest human visual area V5, an area specialised for motion processing, responds to movement presented in the visual, auditory or tactile domains. Here we report behavioural findings strongly implying common motion processing for auditory and visual motion. We presented brief translational motion stimuli drifting leftwards or rightwards in either the visual or auditory modality at various speeds. Using the method of single stimuli, observers made a speed discrimination on each trial, comparing the current speed against the average of all presented speeds. Data were compiled into psychometric functions and mean perceived speed was calculated. A sequential dependency analysis was used to analyse the adaptive relationship between consecutive trials. In a vision-only experiment, motion was perceived as faster after a slow preceding motion, and slower after a faster motion. This is a negative serial dependency, consistent with the classic 'repulsive' motion aftereffect (MAE). In an audition-only experiment, we found the same negative serial dependency, showing that auditory motion produces a repulsive MAE in a similar way to visual MAEs. A third experiment interleaved auditory and visual motion, presenting each modality in alternation to test whether sequential adaptation was modality specific. Whether analysing vision preceded by audition, or audition preceded by vision, negative (repulsive) serial dependencies were observed: a slow motion made a subsequent motion seem faster (and vice versa) despite the change of modality. This result shows that the motion adaptation was supramodal as it occurred despite the modality mismatch between adaptor and test. We conclude that a common mechanism processes motion regardless of whether the input is visual or auditory.
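The sequential dependency analysis described above can be sketched as a simple simulation: perceived speed on each trial is repelled away from the previous trial's speed, and the analysis compares perceived speed on trials that followed a slower versus a faster stimulus. This is a minimal illustrative sketch, not the authors' pipeline; the speed values, the repulsion gain, and the noise level are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus speeds per trial (arbitrary units); values are
# illustrative, not taken from the study.
speeds = rng.choice([4.0, 6.0, 8.0, 10.0, 12.0], size=5000)

# Simulate a negative (repulsive) serial dependency: each trial's percept
# is pushed away from the preceding trial's speed, plus observer noise.
repulsion = 0.15  # assumed adaptation gain
perceived = speeds.astype(float).copy()
perceived[1:] += repulsion * (speeds[1:] - speeds[:-1])
perceived += rng.normal(0.0, 0.5, size=speeds.size)

# Sequential dependency analysis: compare perceived-speed bias on trials
# that followed a slower vs. a faster stimulus.
cur, prev, pcv = speeds[1:], speeds[:-1], perceived[1:]
bias_after_slow = (pcv[prev < cur] - cur[prev < cur]).mean()
bias_after_fast = (pcv[prev > cur] - cur[prev > cur]).mean()

print(bias_after_slow, bias_after_fast)
```

Under this negative serial dependency, the bias after a slower stimulus comes out positive (motion seems faster) and the bias after a faster stimulus negative, mirroring the repulsive pattern reported in the abstract; a positive `repulsion` sign would instead produce the attractive dependency seen in many perceptual decision tasks.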
Meeting abstract presented at VSS 2018