Stephan de la Rosa, Ylva Ferstl, Heinrich Bülthoff; Does the motor system contribute to action recognition in social interactions? Journal of Vision 2016;16(12):268. doi: https://doi.org/10.1167/16.12.268.
© ARVO (1962-2015); The Authors (2016-present)
It has been suggested that the motor system is essential for various social cognitive functions, including the perception of actions in social interactions. Typically, the influence of the motor system on action recognition has been addressed in studies in which participants are merely action observers. This stands in stark contrast to real social interactions, in which humans often execute and observe actions at the same time. To overcome this discrepancy, we investigated the contribution of the motor system to action recognition when participants concurrently observed and executed actions. As a control, participants also observed and executed actions separately (i.e., not concurrently). Specifically, we probed the sensitivity of action recognition mechanisms to motor action information in both unimodal and bimodal motor-visual adaptation conditions. We found that unimodal visual adaptation to an action shifted the percept of a subsequently presented ambiguous action away from the adapted action (adaptation aftereffect). A similar adaptation aftereffect occurred in the unimodal, non-visual motor adaptation condition, confirming that motor action information also contributes to action recognition. In the bimodal adaptation conditions, however, in which participants executed and observed actions at the same time, adaptation aftereffects were governed by the visual rather than the motor action information. Our results demonstrate that the contribution of the motor system to action recognition is small when actions are observed and executed simultaneously. Because humans often concurrently execute and observe actions in social interactions, our results suggest that action recognition in social interaction is based mainly on visual action information.
Meeting abstract presented at VSS 2016