Abstract
The visual recognition of goal-directed movements is crucial for understanding the intentions and goals of others, as well as for imitation learning. So far, it is largely unknown how visual information about the effectors and goal objects of actions is integrated in the brain. Specifically, it is unclear whether robust recognition of goal-directed actions can be accomplished by purely visual processing, or whether it requires reconstructing the three-dimensional geometry of object and effector.
We present a neurophysiologically inspired model for the recognition of goal-directed grasping movements. The model is based on a hierarchical architecture of neural detectors that reproduce the properties of cells in visual cortex. It contains a novel, physiologically plausible mechanism that combines information about object shape with information about effector (hand) shape and movement, implementing the necessary coordinate transformation from a retinal to an object-centered frame of reference. The model was evaluated on real video sequences of human grasping movements, using separate training and test sets, and it reproduces a variety of tuning properties that have been observed in electrophysiological experiments for action-selective neurons in STS and area F5.
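As a minimal sketch of such a coordinate transformation, the code below re-expresses a retinotopic activity map relative to a detected object position. The function name, array layout, and the cyclic-shift mechanism are illustrative assumptions on our part, not the model's actual neural implementation:

```python
import numpy as np

def to_object_centered(retinal_map: np.ndarray,
                       object_pos: tuple[int, int]) -> np.ndarray:
    """Shift a retinotopic activity map so that the detected object
    position lands at the center, i.e. express hand-related responses
    in an object-centered frame of reference.

    Toy stand-in for the remapping the model realizes with neural
    detectors; a neural implementation would use a population of
    shifted receptive fields rather than an explicit array shift.
    """
    h, w = retinal_map.shape
    cy, cx = h // 2, w // 2
    dy, dx = cy - object_pos[0], cx - object_pos[1]
    # Cyclic shift along both axes (illustrative; border handling
    # would differ in a biologically realistic version).
    return np.roll(np.roll(retinal_map, dy, axis=0), dx, axis=1)
```

After this transformation, the same downstream detectors can evaluate the hand's shape and trajectory relative to the object, regardless of where the grasp occurs in the visual field.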
The model shows that the integration of effector and object information can be accomplished by well-established, physiologically plausible principles. Specifically, the proposed model does not compute explicit 3D representations of the object and the action. Instead, it realizes predictions over time based on learned view-dependent representations of sequences of hand shapes. Our results complement those of existing models for the recognition of goal-directed actions and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.
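To make the prediction-over-time idea concrete, the following toy sketch accumulates evidence for an action only when view-dependent hand-shape snapshots occur in the learned temporal order. The Gaussian tuning and asymmetric priming scheme, along with all names and parameters, are our own illustrative assumptions rather than the published implementation:

```python
import numpy as np

def sequence_selectivity(frames, prototypes, sigma=1.0):
    """Score how well a sequence of hand-shape feature vectors matches
    stored view-dependent 'snapshot' prototypes in their learned order.

    Each prototype primes its successor, so activity builds up only for
    the trained temporal order (illustrative sketch, not the model).
    """
    n = len(prototypes)
    activity = np.zeros(n)
    evidence = 0.0
    for x in frames:
        # Similarity of the current frame to each snapshot
        # (Gaussian tuning around each stored prototype).
        sim = np.array([np.exp(-np.sum((x - p) ** 2) / (2 * sigma ** 2))
                        for p in prototypes])
        # Asymmetric recurrent priming: snapshot k-1 facilitates snapshot k.
        primed = np.roll(activity, 1)
        primed[0] = 1.0  # the first snapshot needs no predecessor
        activity = sim * (0.5 + 0.5 * primed)
        evidence += activity.max()
    return evidence / max(len(frames), 1)
```

Played in reverse or in scrambled order, the same frames yield a lower score, mirroring the sequence selectivity of action-selective neurons without any explicit 3D reconstruction.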
Supported by the DFG (SFB 550), the EC, and the Hermann und Lilly Schilling Foundation.