Abstract
Action recognition and execution are tightly linked: action execution is modulated by action recognition and vice versa. The involvement of dynamic motor representations in action recognition predicts a critical role for the exact temporal and spatial matching between executed and recognized actions. METHOD: In a VR setup, point-light stimuli of waving actions embedded in a scrambled-dot mask were presented to observers. The motion of the presented point-light stimulus was controlled in real time by the motion-captured movements of the participants. Recognition performance was measured by threshold determination, varying the number of masking dots. An additional control condition required detection of the visual stimulus without a concurrent motor task. The relationship between the visually perceived and the executed movement was modified by introducing temporal delays (280 and 560 ms) between the two movements (experiment 1), and by changing their spatial congruency, showing the observer's arm or its mirrored representation without time delay (experiment 2).
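The two key procedural elements, the delayed visual feedback and the adaptive masking threshold, can be illustrated in code. First, a minimal sketch (Python; all names and the 100 Hz capture rate are assumptions, not details from the abstract) of how a fixed temporal delay such as 280 or 560 ms can be imposed on a real-time motion-capture stream with a ring buffer:

```python
from collections import deque

class DelayedMotionStream:
    """Delays incoming motion-capture frames by a fixed number of frames.

    At an assumed 100 Hz capture rate, 28 frames correspond to ~280 ms and
    56 frames to ~560 ms, matching the delays used in experiment 1.
    """

    def __init__(self, delay_frames: int):
        self.delay_frames = delay_frames
        self.buffer = deque()

    def push(self, frame):
        """Store the newest frame and return the frame from delay_frames ago.

        Until the buffer has filled, the oldest available frame is returned,
        so the display never starves (one of several possible warm-up policies).
        """
        self.buffer.append(frame)
        if len(self.buffer) > self.delay_frames + 1:
            self.buffer.popleft()
        return self.buffer[0]

# Example: 280 ms delay at the assumed 100 Hz capture rate.
delayed = DelayedMotionStream(delay_frames=28)
```

Second, the abstract states only that thresholds were determined by varying the number of masking dots; a common way to do this is an adaptive staircase, sketched below with an assumed 2-down/1-up rule and hypothetical parameters:

```python
def masking_threshold(trial_fn, start_dots=100, step=10,
                      n_reversals=8, max_trials=300):
    """Estimate the masking-dot detection threshold with a 2-down/1-up staircase.

    trial_fn(n_dots) should run one detection trial with n_dots masking dots
    and return True for a correct response. More masking dots make the
    point-light stimulus harder to detect, so two correct responses raise the
    dot count and one error lowers it (converging near 70.7% correct).
    """
    n_dots, streak, last_dir, reversals = start_dots, 0, 0, []
    for _ in range(max_trials):
        if trial_fn(n_dots):
            streak += 1
            if streak < 2:
                continue
            streak, direction = 0, +1        # two correct: make trial harder
        else:
            streak, direction = 0, -1        # one error: make trial easier
        if last_dir and direction != last_dir:
            reversals.append(n_dots)         # staircase changed direction
            if len(reversals) == n_reversals:
                break
        last_dir = direction
        n_dots = max(0, n_dots + direction * step)
    # Threshold estimate: mean of the reversal points.
    return sum(reversals) / len(reversals) if reversals else float("nan")
```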
RESULTS: Compared to the control condition without motor execution, biological motion detection was significantly improved when observed and executed motions were in temporal synchrony (facilitation), whereas performance deteriorated with increasing delays (interference; F(1,14) = 10.66, p < 0.01). Comparison between the identical and the mirror-reflected arm revealed a critical influence of the spatial congruence between the presented and the executing arm (F(1,13) = 13.01, p = 0.003). Facilitation occurred only in the spatially congruent condition.
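As a quick worked check (not part of the original analysis), the significance levels implied by the reported F-values can be recovered from the F-distribution's survival function, here using scipy:

```python
from scipy.stats import f

# One-tailed p-values for the reported F-statistics.
print(f.sf(10.66, dfn=1, dfd=14))  # ~0.0056, i.e. p < 0.01 (delay effect)
print(f.sf(13.01, dfn=1, dfd=13))  # ~0.003, matching the reported p = 0.003
```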
CONCLUSIONS: Timing and spatial matching between observed and self-generated movements are critical factors for action recognition. Simple explanations, such as a matching of rhythms between both actions, can be ruled out given the lack of recognition modulation in the mirrored-arm condition. Ongoing experiments aim to clarify the distinct functional roles of multi-modal congruency (visual and acoustic), proprioceptive information, and dynamic internal motor representations in action perception.
This project was supported by the fortüne program, the EU (FP6) project COBOL and the DFG.