Abstract
Actions are complex cognitive phenomena that can be described at different levels of abstraction, from action intentions to the mechanistic properties of movements. Typically, the kinematic parameters of reach-to-grasp movements are modulated by action intentions. However, when an unexpected change in the visual target goal occurs during reaching execution, it is still unknown whether the action intention changes with the target goal modification and what the temporal structure of the target goal prediction is. Twenty-three naïve volunteers (11 males and 12 females, mean age 22.6 ± 2.3) took part in the study. We characterized the information embedded in the kinematics of reaching movements towards targets located at different directions and depths with respect to the body, in a condition where the targets remained static for the entire duration of the movement and in a condition where the targets shifted to another position during movement execution. We designed our analysis to perform a temporal decoding of the final goals with a recurrent neural network (RNN), using the kinematics of the participants' pointing finger and wrist. We found that, on average, classification performance increased progressively above the defined chance level (16.6%) from movement onset to movement end, both along the direction and depth dimensions and when decoding perturbed visual targets. However, the classification accuracies for targets along the direction and depth dimensions differed in the maximum accuracy reached by the classifier in the final phase of the movement: 0.94 for target direction and 0.77 for target depth. This study provides additional evidence for establishing how human or artificial agents could take advantage of visual cues extracted from observed kinematics in order to optimize interaction strategies, even under perturbed conditions.