Abstract
Introduction: Point-light biological motion figures appear to have less coherent form and may be difficult to perceive when inverted (Johansson, 1973, Percept Psychophys; Sumi, 1984, Perception). A recent study reported that observers perform a motion coherence task better when the local motions are embedded in an upright rather than an inverted point-light walker, suggesting that global form may act as a reference frame for motion perception (Tadin et al., 2002, Nat Neurosci).

Method: We tested whether such an enhancement would also be found for an audiovisual task. Observers viewed upright or inverted biological motion moving at different speeds. In each trial they also heard periodic auditory stimuli (binaurally presented beeps). The task was to judge whether the temporal frequencies of the two stimuli were matched (i.e., whether the walking and the sound were synchronous).

Results: Subjects performed the audiovisual synchrony detection task much more accurately when the visual stimuli constituted upright rather than inverted walkers, even though the local motions and the sounds were identical in the two conditions. These data show that global form effects are not limited to motion or even to purely visual tasks, and that form can enhance performance in crossmodal tasks. While the effect could be due to enhanced motion perception and consequently better matching of sound to motion, it may also have a higher-level attentional source. In the real world, motion is very often accompanied by sound, and by using this more ecologically valid task we may be tapping into object-based or crossmodal attentional mechanisms. Further experiments are needed to tease apart these factors. We discuss the results in the framework of attention-mediated high-level motion patterns, or 'sprites' (Cavanagh et al., 2001, Cognition), and suggest that in certain cases such representations may have auditory components or may enhance performance in associated crossmodal tasks.
We thank Elizabeth Bates and Marty Sereno. This research was supported by NSF BCS 0224321 to M.I. Sereno, and NSF Career grant 0133996 to V.R. de Sa.