Abstract
The mapping between current visual information and future motor action is not well understood, especially when task demands require predictive strategies, as when intercepting a fast-moving ball. In this study, we collected hand, head, and gaze movements from ten subjects performing a virtual ball-catching task in which they were instructed to intercept a parabolically moving virtual ball using a badminton paddle. We forced subjects to use prediction by making the virtual ball disappear for a fixed 500 ms duration occurring 300, 400, or 500 ms before it passed the subject. To investigate perceptual contributions to successful visuo-motor behavior, we created a supervised Long Short-Term Memory recurrent neural network (LSTM-RNN) model for each subject, whose abilities differed as indicated by their catch rates, and compared the properties of the models. Each model takes egocentric visual information about the ball, together with head, gaze, and hand position/orientation, as input, and produces the next position/orientation of the gaze, head, and hand. Models trained on more successful subjects anticipate the subject's actions more accurately and further ahead in time, suggesting a stronger relationship between temporally distant visual information and motor output. To investigate the relative influence of particular sources of visual information on the motor output, we conducted an ablation study in which, at each iteration, one visual feature (such as expansion rate) was removed from the model, and we inferred the model's reliance on that feature from the resulting change in performance. The model fit to the more successful group's data was more sensitive to the ablation of ball-related visual features. This suggests that the model fit to data from the more successful group had learned the temporal mapping between the ball's visual features and the output motor action, rather than relying heavily on temporal extrapolation of future motor output from previous timesteps.
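The abstract does not describe the implementation; purely as an illustration, the following minimal PyTorch sketch shows the general shape of such a per-subject model and ablation test: an LSTM mapping a window of egocentric input features (ball, head, gaze, and hand state) to the next gaze/head/hand pose, and a loop that zeroes out one feature group at a time and measures the resulting change in prediction error. All names, dimensions, and the zero-masking ablation scheme here are assumptions, not details from the study.

```python
import torch
import torch.nn as nn

class VisuomotorLSTM(nn.Module):
    """Hypothetical per-subject model: egocentric features -> next pose.

    Assumed inputs per timestep: ball-related visual features (e.g.,
    image position, expansion rate) plus head, gaze, and hand
    position/orientation. Assumed output: gaze/head/hand pose at the
    next timestep.
    """
    def __init__(self, n_features=20, n_outputs=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):            # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])   # pose prediction for the next timestep


def ablation_errors(model, x, y, feature_slices):
    """Zero out one input feature group at a time and report the
    increase in mean-squared prediction error over the intact model
    (a stand-in for the ablation procedure described above)."""
    criterion = nn.MSELoss()
    with torch.no_grad():
        baseline = criterion(model(x), y).item()
        deltas = {}
        for name, sl in feature_slices.items():
            x_abl = x.clone()
            x_abl[..., sl] = 0.0     # remove this feature group
            deltas[name] = criterion(model(x_abl), y).item() - baseline
    return deltas


# Toy usage with random data; shapes and feature indices are illustrative.
model = VisuomotorLSTM()
x = torch.randn(32, 30, 20)          # 32 windows of 30 timesteps
y = torch.randn(32, 12)
slices = {"ball_expansion_rate": slice(0, 1), "gaze": slice(4, 7)}
print(ablation_errors(model, x, y, slices))
```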
Meeting abstract presented at VSS 2018