Abstract
The planning of a movement toward an object influences the visual perception of the object properties relevant to the action. This suggests a bidirectional interaction between the motor and visual systems. In the present study, we investigated whether this interaction can be decoded during the visual estimation of the object properties, before movement onset. To this end, we tested 15 healthy right-handed participants (5 males, 10 females; mean age = 21.12 years) in a task consisting of two subsequent phases: 1) a perceptual phase, in which participants manually estimated the size and orientation of a visual stimulus by extending the index finger and thumb while simultaneously rotating the grip, and 2) an action phase, in which participants performed a grasping or a reaching movement (according to the instruction given at trial onset) toward the same stimulus. A motion capture system recorded the participants’ hand position and movement. To test whether the action type can be predicted during the estimation phase, i.e., whether the type of action requested influences the object estimation, we applied a Random Forest classification model to the perceptual-phase data. The size and orientation estimations and the velocities of the index finger and thumb (computed during the perceptual phase) were used as predictors. The model’s accuracy in classifying reaching versus grasping averaged 99% on the testing dataset. The corresponding sensitivity (ability to classify true positives) and specificity (ability to classify true negatives) were 99.5% and 100%, respectively. The most informative predictor was the orientation estimation (relative importance: 99.94%), followed by the size estimation (78.02%) and the index and thumb velocities (1.2% and 0.6%, respectively). These results suggest that action-based perceptual information can be used to reliably extract action intentions well before movement onset.
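The analysis pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic stand-ins for the four perceptual-phase predictors (orientation estimation, size estimation, index and thumb velocities), the class separation is invented for demonstration, and all parameter choices (e.g. number of trees, train/test split) are assumptions.

```python
# Hypothetical sketch of the classification analysis; the abstract does not
# provide the actual data, preprocessing, or model settings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600  # synthetic "trials", half reaching and half grasping

# Synthetic stand-ins for the four perceptual-phase predictors.
# The two classes are given different means purely for illustration.
orientation_est = np.concatenate([rng.normal(30, 5, n // 2), rng.normal(45, 5, n // 2)])
size_est = np.concatenate([rng.normal(40, 8, n // 2), rng.normal(55, 8, n // 2)])
index_vel = rng.normal(0.2, 0.05, n)
thumb_vel = rng.normal(0.2, 0.05, n)

X = np.column_stack([orientation_est, size_est, index_vel, thumb_vel])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = reaching, 1 = grasping

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Sensitivity and specificity from the test-set confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Predictor contributions, analogous to the importances reported above.
importances = dict(
    zip(["orientation", "size", "index_vel", "thumb_vel"], clf.feature_importances_)
)
print(sensitivity, specificity, importances)
```

With well-separated synthetic classes the classifier approaches ceiling performance, mirroring the near-perfect accuracy reported in the study; `feature_importances_` gives impurity-based importances, one plausible way to rank predictors as done above.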