September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Predicting action type from visual perception: a kinematic study.
Author Affiliations & Notes
  • Annalisa Bosco
    Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
    Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI)
  • Elena Aggius Vella
    Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
  • Patrizia Fattori
    Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
    Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI)
  • Footnotes
    Acknowledgements  MAIA project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 951910; work supported by Ministry of University and Research, PRIN2020-20208RB4N9 and by National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006).
Journal of Vision September 2024, Vol.24, 516. doi:https://doi.org/10.1167/jov.24.10.516
© ARVO (1962-2015); The Authors (2016-present)
Abstract

The planning of a movement toward an object influences the visual perception of the object properties relevant for the action. This suggests a bidirectional interaction between the motor and the visual systems. In the present study, we investigated whether this interaction can be decoded during the visual estimation of the object properties, before the onset of the movement. To this end, we tested 15 healthy right-handed participants (males=5, females=10; mean age=21.12) in a task consisting of two subsequent phases: 1) a perceptual phase, in which participants manually estimated the size and orientation of a visual stimulus by extending the index finger and thumb and simultaneously rotating the grip, and 2) an action phase, in which participants performed a grasping or a reaching movement (according to the instruction given at trial onset) towards the same stimulus. A motion capture system recorded the participant’s hand position and movement. To test whether the action type can be predicted during the estimation phase, i.e. whether the type of action requested influences the object estimation, we applied a Random Forest classification model to the perceptual phase. The size and orientation estimations, and the velocities of the index finger and thumb (calculated during the perceptual phase), were used as predictors. We found that the model accuracy in classifying reaching versus grasping was on average 99% on the testing dataset. The corresponding sensitivity (ability to classify true positives) and specificity (ability to classify true negatives) of the model were 99.5% and 100%, respectively. The most informative predictor was the orientation estimation (relative importance 99.94%), followed by the size estimation (78.02%) and the index finger and thumb velocities (1.2% and 0.6%, respectively). These results suggest that action-based perceptual information can be optimally used to extract action intentions well before the onset of the movement.
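For readers unfamiliar with the reported performance measures, the sketch below shows how accuracy, sensitivity, and specificity are computed from a binary classifier's output for the two action classes ("grasp" as the positive class, "reach" as the negative class). This is an illustration only: the label vectors are hypothetical and do not come from the study's data, and the study itself used a Random Forest model rather than the toy predictions shown here.

```python
# Illustrative sketch: accuracy, sensitivity, and specificity for a
# two-class (grasp vs. reach) prediction task. All labels are hypothetical.

def confusion_counts(y_true, y_pred, positive="grasp"):
    """Count true/false positives and negatives for the chosen positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def classification_metrics(y_true, y_pred, positive="grasp"):
    """Return (accuracy, sensitivity, specificity) as fractions in [0, 1]."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred, positive)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return accuracy, sensitivity, specificity

# Hypothetical trial labels and classifier predictions:
y_true = ["grasp", "grasp", "reach", "reach"]
y_pred = ["grasp", "reach", "reach", "reach"]
acc, sens, spec = classification_metrics(y_true, y_pred)
```

With these toy labels, one grasp trial is misclassified as a reach, so sensitivity drops while specificity stays perfect; the near-ceiling values in the abstract correspond to almost no such errors across trials.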
