Abstract
The integration of multisensory information depends on spatial and temporal coincidence, signal strength, and semantics (Meredith & Stein, 1983; Welch & Warren, 1986). Sounds can also aid visual detection: visual sensitivity to human actions improves when actions are paired with sounds that are both semantically and temporally congruent (e.g., Arrighi et al., 2009). Two studies investigated the roles of meaning, timing, and visual signal strength in visual sensitivity to human actions. Experiment 1 tested whether temporal synchrony is necessary for meaningful sounds to affect visual sensitivity. Participants performed a point-light walker detection task with sounds that were meaningful (footsteps) or neutral (tones) and either synchronous or asynchronous with the point-light footfalls. Results revealed a main effect of sound type, no effect of synchrony, and no interaction: sensitivity with both temporally coincident and temporally random footsteps was significantly greater than sensitivity with either temporally coincident or temporally random tones. This suggests that audiovisual action priming occurs at the level of meaning and that sounds can enhance visual sensitivity in the absence of temporal coincidence (e.g., Schneider et al., 2008). Experiment 2 investigated whether signal strength moderates the effect of meaningful sounds on the priming of visual actions, as predicted by the multisensory rule of inverse effectiveness (IE) (e.g., Collignon et al., 2008). Participants detected a point-light walker in masks of varying density that rendered detection more or less difficult. Results revealed a main effect of sound type but no interaction between sound type and mask density; footsteps improved sensitivity at all levels of mask density. However, when the data were analyzed according to walker detection accuracy in silent displays (Thomas & Shiffrar, 2011), an interaction emerged: footstep sounds improved sensitivity for visually difficult movies. These results are consistent with the IE rule. When an action is difficult to perceive, a semantically related sound can facilitate its visual detection.
Meeting abstract presented at VSS 2012