December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Kinematics predictions in static and perturbed 3D reaching by recurrent neural networks.
Author Affiliations & Notes
  • Annalisa Bosco
    University of Bologna
  • Matteo Filippini
    University of Bologna
  • Davide Borra
    University of Bologna
  • Claudio Galletti
    University of Bologna
  • Patrizia Fattori
    University of Bologna
  • Footnotes
    Acknowledgements  MAIA project has received funding from the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement No 951910.
Journal of Vision December 2022, Vol.22, 3492. doi:https://doi.org/10.1167/jov.22.14.3492
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Actions are complex cognitive phenomena that can be described at different levels of abstraction, from action intentions to the mechanistic properties of movements. Typically, the kinematic parameters of reach-to-grasp movements are modulated by action intentions. However, when the visual target goal unexpectedly changes during reach execution, it remains unknown whether the action intention changes with the target goal, and what the temporal structure of the target goal prediction is. Twenty-three naïve volunteers (11 males and 12 females, mean age 22.6 ± 2.3) took part in the study. We characterized the information embedded in the kinematics of reaching movements towards targets located at different directions and depths relative to the body, both in a condition where the targets remained static for the entire duration of the movement and in a condition where the targets shifted to another position during movement execution. We performed a temporal decoding of the final target goals with a recurrent neural network (RNN), using the kinematics of each participant's pointing finger and wrist. We found that, on average, classification performance progressively increased above the defined chance level (16.6%) from movement onset to movement end, both for the direction and depth dimensions and when decoding perturbed visual targets. However, classification accuracies along the direction and depth dimensions differed in the maximum accuracy reached by the classifier in the final phase of the movement: 0.94 for target direction versus 0.77 for target depth. This study provides additional evidence for how human or artificial agents could exploit visual cues extracted from observed kinematics to optimize interaction strategies, including under perturbed conditions.
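The temporal decoding scheme described above — an RNN reading kinematic samples step by step and emitting a target-class prediction at every time point, so that accuracy can be tracked from movement onset to movement end — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' trained model: the feature layout (3D wrist + 3D fingertip position), the six-way target classification (giving the 16.6% chance level), the hidden size, and the random, untrained weights are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 6    # assumed target set; chance level = 1/6 ~ 16.6%
N_FEATURES = 6   # assumed: 3D wrist + 3D pointing-finger position per sample
HIDDEN = 16      # arbitrary hidden-state size for the sketch
T = 50           # kinematic samples from movement onset to movement end

# Randomly initialised Elman-style RNN weights. A real decoder would be
# trained on recorded reach trajectories; these weights only illustrate
# the forward pass.
W_in = rng.normal(0.0, 0.1, (HIDDEN, N_FEATURES))
W_rec = rng.normal(0.0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0.0, 0.1, (N_CLASSES, HIDDEN))

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def decode_over_time(kinematics):
    """Return per-timestep class probabilities for one reach.

    kinematics: array of shape (T, N_FEATURES), one row per time sample.
    """
    h = np.zeros(HIDDEN)
    probs = np.zeros((len(kinematics), N_CLASSES))
    for t, x in enumerate(kinematics):
        h = np.tanh(W_in @ x + W_rec @ h)  # recurrent state accumulates history
        probs[t] = softmax(W_out @ h)      # read out the target class at time t
    return probs

# One synthetic trajectory standing in for recorded kinematics.
traj = rng.normal(0.0, 1.0, (T, N_FEATURES))
probs = decode_over_time(traj)
print(probs.shape)  # (50, 6): one class distribution per time sample
```

Evaluating `probs[t].argmax()` against the true target label at every `t`, averaged over trials, yields the kind of accuracy-versus-time curve the abstract reports rising above chance toward movement end.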
