September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Investigating the Differences in Predictive Oculomotor Strategies using Long Short-Term Memory Recurrent Neural Network Models
Author Affiliations
  • Kamran Binaee
    Rochester Institute of Technology
  • Rakshit Kothari
    Rochester Institute of Technology
  • Jeff Pelz
    Rochester Institute of Technology
  • Gabriel Diaz
    Rochester Institute of Technology
Journal of Vision September 2018, Vol.18, 184. doi:https://doi.org/10.1167/18.10.184
      Kamran Binaee, Rakshit Kothari, Jeff Pelz, Gabriel Diaz; Investigating the Differences in Predictive Oculomotor Strategies using Long Short-Term Memory Recurrent Neural Network Models. Journal of Vision 2018;18(10):184. https://doi.org/10.1167/18.10.184.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The mapping between current visual information and future motor action is not well understood, especially when task demands require predictive strategies, as when intercepting a fast-moving ball. In this study, we collected hand, head, and gaze movements from ten subjects performing a virtual ball-catching task in which they were instructed to intercept a parabolically moving virtual ball with a badminton paddle. We forced subjects to rely on prediction by making the virtual ball disappear for a fixed 500 ms duration beginning 300, 400, or 500 ms before it passed the subject. To investigate perceptual contributions to successful visuo-motor behavior, we created a supervised Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) model for each subject, whose ability was indexed by catching rate, and compared the properties of the models. Each model takes egocentric visual information about the ball, together with head, gaze, and hand position/orientation, as input, and produces the next position/orientation of the gaze, head, and hand. Models trained on more successful subjects anticipate the subject's actions more accurately and further into the future, suggesting a stronger relationship between temporally distant visual information and motor output. To investigate the relative influence of particular sources of visual information on the motor output, we conducted an ablation study: at each iteration, one visual feature (such as expansion rate) was removed from the model, and its importance was inferred from the resulting change in model performance. The model fit to the more successful group's data was more sensitive to the ablation of ball-related visual features. This suggests that the model fit to the more successful group had learned a temporal mapping from ball visual features to motor output, rather than relying heavily on extrapolation of future motor output from previous timesteps.
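The modeling and ablation procedure described above can be sketched in code. The following is a minimal illustrative example, not the authors' actual implementation: all feature counts, dimensions, and function names are assumptions. A small LSTM forward pass maps a window of egocentric input features to the next gaze/head/hand state, and the ablation loop zeroes one input feature at a time and records the resulting change in prediction error.

```python
import numpy as np

# Illustrative sketch (not the authors' code): an LSTM maps a window of
# egocentric features (ball, head, gaze, hand pose) to the next-frame
# gaze/head/hand state; ablation zeroes one feature to gauge its influence.

rng = np.random.default_rng(0)

N_FEATURES = 8   # hypothetical count: ball angles, expansion rate, head/gaze/hand pose
N_HIDDEN = 16
N_OUTPUTS = 6    # hypothetical: next gaze, head, and hand position components


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class TinyLSTM:
    """Single-layer LSTM with a linear readout (forward pass only)."""

    def __init__(self, n_in, n_hidden, n_out, rng):
        s = 0.1
        self.Wx = rng.normal(0, s, (4 * n_hidden, n_in))   # input weights
        self.Wh = rng.normal(0, s, (4 * n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(4 * n_hidden)
        self.Wo = rng.normal(0, s, (n_out, n_hidden))      # readout weights
        self.n_hidden = n_hidden

    def predict(self, seq):
        """seq: (T, n_in) window of past frames -> (n_out,) next-frame state."""
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        H = self.n_hidden
        for x in seq:
            z = self.Wx @ x + self.Wh @ h + self.b
            i = sigmoid(z[:H])           # input gate
            f = sigmoid(z[H:2 * H])      # forget gate
            o = sigmoid(z[2 * H:3 * H])  # output gate
            g = np.tanh(z[3 * H:])       # candidate cell state
            c = f * c + i * g
            h = o * np.tanh(c)
        return self.Wo @ h


def ablation_scores(model, sequences, targets):
    """Zero one input feature at a time; return the rise in mean squared error."""
    def mse(feat_to_zero=None):
        errs = []
        for seq, y in zip(sequences, targets):
            s = seq.copy()
            if feat_to_zero is not None:
                s[:, feat_to_zero] = 0.0
            errs.append(np.mean((model.predict(s) - y) ** 2))
        return float(np.mean(errs))

    base = mse()
    return [mse(j) - base for j in range(sequences[0].shape[1])]


# Synthetic stand-in data (the study used recorded hand/head/gaze streams).
model = TinyLSTM(N_FEATURES, N_HIDDEN, N_OUTPUTS, rng)
seqs = [rng.normal(size=(30, N_FEATURES)) for _ in range(5)]
ys = [rng.normal(size=N_OUTPUTS) for _ in range(5)]
scores = ablation_scores(model, seqs, ys)
print(len(scores))  # one sensitivity score per input feature
```

Under this reading of the abstract, a larger score for a ball-related feature column would indicate that the model leans on that feature rather than on temporal extrapolation of its own recent motor outputs.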

Meeting abstract presented at VSS 2018.
