Lukasz Piwek, Karin Petrini, Frank Pollick; Multimodal integration of the auditory and visual signals in dyadic point-light interactions. Journal of Vision 2010;10(7):788. doi: 10.1167/10.7.788.
Multimodal aspects of non-verbal communication have thus far been examined using displays of a solitary character (e.g. the face-voice and/or body-sound of one actor). We extend this investigation to more socially complex dyadic displays, using point-light displays combined with speech sounds that preserve only prosodic information. Two actors were recorded approaching each other with three different intentions: negative, positive and neutral. The actors' movement was recorded using a Vicon motion capture system. Their speech was simultaneously recorded and subsequently low-pass filtered to obtain an audio signal that retained prosodic information but no intelligible speech. In Experiment 1, displays were presented bimodally (audiovisual) and unimodally (audio-only and visual-only) to examine whether the bimodal audiovisual condition would facilitate perception of the original social intention relative to the unimodal conditions. In Experiment 2, congruent displays (visual and audio signals from the same actor and intention) and incongruent displays (visual and audio signals from different actors and intentions) were used to explore changes in social perception when the sensory signals gave discordant information. Results supported previous findings obtained with solitary characters: the visual signal dominates over the auditory signal, although auditory information can influence the visual signal when the intentions conveyed by the two modalities are discordant. Results also showed that this dominance of the visual over the auditory signal is significant only when the interaction between characters is perceived as socially meaningful, i.e. when positive or negative intentions are present.