September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Modeling Hand-Eye Movements in a Virtual Ball Catching Setup using Deep Recurrent Neural Network
Author Affiliations
  • Kamran Binaee
    Rochester Institute of Technology, Chester F. Carlson Center for Imaging Science
  • Anna Starynska
    Rochester Institute of Technology, Chester F. Carlson Center for Imaging Science
  • Rakshit Kothari
    Rochester Institute of Technology, Chester F. Carlson Center for Imaging Science
  • Christopher Kanan
    Rochester Institute of Technology, Chester F. Carlson Center for Imaging Science
  • Jeff Pelz
    Rochester Institute of Technology, Chester F. Carlson Center for Imaging Science
  • Gabriel Diaz
    Rochester Institute of Technology, Chester F. Carlson Center for Imaging Science
Journal of Vision August 2017, Vol.17, 17. doi:https://doi.org/10.1167/17.10.17
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Previous studies show that humans efficiently formulate predictive strategies to make accurate eye/hand movements when intercepting a target moving in their field of view, such as a ball in flight. Nevertheless, it is not clear how these strategies compensate for noisy sensory input, or for how long they remain valid when a ball is occluded mid-flight prior to an attempted catch. To investigate, we used a virtual reality ball-catching paradigm to record the 3D gaze of ten subjects, along with their head and hand movements. Subjects were instructed to intercept a virtual ball in flight while wearing a head-mounted display that was tracked by a motion capture system. Midway through its parabolic trajectory, the ball was made invisible for a blank duration of 500 ms. We created nine different ball trajectories by combining three pre-blank durations (300, 400, 500 ms) with three post-blank durations (600, 800, 1000 ms); the ball's launch position and angle were randomized. During the blank, the average angular displacement of the ball was 11 degrees of visual angle, and subjects were able to track the ball successfully using combined head and eye pursuit. In successful trials, subjects showed higher smooth-pursuit gain during the blank, combined with a sequence of saccades in the direction of the ball's trajectory toward the end of the trial. Approximately 200 ms before the catch frame, the angular gaze-ball tracking error in elevation forecast the subject's success or failure. We used this dataset to train a deep recurrent neural network (RNN) that models human hand-eye movements. Given previous input sequences, the RNN predicts the angular gaze vector and hand position for a short duration into the future. Consistent with studies of human behavior, the model's accuracy decreases when the prediction window extends beyond 120 ms.

Meeting abstract presented at VSS 2017
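The prediction scheme the abstract describes (a recurrent network that consumes a window of past sensor frames and emits the angular gaze vector plus hand position a short horizon ahead) can be sketched as follows. This is a minimal, illustrative sketch only: the layer sizes, the single Elman-style recurrent layer, the feature count, and the 75 Hz frame-rate assumption are all hypothetical and are not taken from the authors' actual model.

```python
import numpy as np

# Illustrative sketch: an Elman-style RNN rolls over a (T, N_IN) window of
# past head/gaze/hand features; a linear readout predicts 5 values
# [gaze azimuth, gaze elevation, hand x, hand y, hand z] a short
# horizon into the future. Sizes and architecture are assumptions.

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 9, 32, 5  # input features; hidden units; outputs

W_in = rng.standard_normal((N_HID, N_IN)) * 0.1   # input weights
W_rec = rng.standard_normal((N_HID, N_HID)) * 0.1  # recurrent weights
W_out = rng.standard_normal((N_OUT, N_HID)) * 0.1  # readout weights
b_h = np.zeros(N_HID)
b_o = np.zeros(N_OUT)

def predict(seq):
    """Run the recurrence over a (T, N_IN) sequence; return (N_OUT,) prediction."""
    h = np.zeros(N_HID)
    for x in seq:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)  # update hidden state
    return W_out @ h + b_o                       # linear readout

# Example: at an assumed 75 Hz tracking rate, a 120 ms window is 9 frames.
window = rng.standard_normal((9, N_IN))
y = predict(window)
print(y.shape)  # (5,)
```

In this framing, the abstract's finding that accuracy degrades beyond a 120 ms horizon would correspond to training separate readouts (or target offsets) for increasing horizons and observing the error grow with the offset.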
