September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Closed-loop vs predictive control characterized by inverse reinforcement learning of visuomotor behavior during target interception
Author Affiliations & Notes
  • Kamran Binaee
    Rochester Institute of Technology
  • Rakshit S Kothari
    Rochester Institute of Technology
  • Gabriel J Diaz
    Rochester Institute of Technology
Journal of Vision September 2019, Vol.19, 276a.
Kamran Binaee, Rakshit S Kothari, Gabriel J Diaz; Closed-loop vs predictive control characterized by inverse reinforcement learning of visuomotor behavior during target interception. Journal of Vision 2019;19(10):276a.
      © ARVO (1962-2015); The Authors (2016-present)

An attempt to intercept a quickly moving object in flight is preceded by adjustments to gaze and hand position that can be modeled as an online coupling to visual sources of information about the target’s trajectory. However, accurate tracking also requires short-term predictive mechanisms to compensate for visuomotor delay and brief occlusions. In this study, we ask what factors might contribute to the transition from online to predictive control strategies in visually guided actions. Subjects were immersed in a virtual reality ball-catching simulation. To vary the spatiotemporal constraints on visual tracking and manual interception, balls approached along parabolic trajectories that varied in approach speed (fast vs slow). To investigate the accuracy of early vs late estimates, an occluder was placed along either the early or late portion of the ball trajectory (early, late, or no occlusion). All 23 subjects missed the ball most often in the early-occlusion condition and on fast ball trajectories. Visual tracking and hand positioning varied systematically with both occlusion timing and the temporal demands of the task. Although online and predictive control are often characterized as two separate modes of control, intermediate/hybrid solutions also exist. To explore these intermediate modes of control, the data were used to train an inverse reinforcement learning (IRL) model that captures the full spectrum of strategies from closed-loop to predictive control. Independent submodules characterized the position and velocity of the gaze vector relative to the ball. A comparison of the time-varying recovered reward values between occlusion and no-occlusion conditions revealed a transition between online and predictive control strategies within a single trial. This suggests an IRL-based model of predictive vs online control characterized by independent oculomotor reward functions.
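To make the IRL formulation concrete, the sketch below illustrates one standard way such reward weights can be recovered: a reward assumed linear in two gaze-tracking features (position error and velocity error between gaze and ball), fit by matching expert feature expectations against a softmax mixture over candidate behaviors. This is a minimal synthetic illustration of the general technique, not the authors' implementation; all function names, data, and parameters are hypothetical.

```python
import numpy as np

# Hedged sketch: linear reward r = w . phi over two gaze-tracking features,
# recovered by a max-entropy-style feature-matching update. All data below
# is synthetic and illustrative.

rng = np.random.default_rng(0)

def gaze_features(gaze_pos, gaze_vel, ball_pos, ball_vel):
    """Per-timestep features; errors are negated so larger = better tracking."""
    pos_err = np.linalg.norm(gaze_pos - ball_pos, axis=-1)
    vel_err = np.linalg.norm(gaze_vel - ball_vel, axis=-1)
    return np.stack([-pos_err, -vel_err], axis=-1)

def recover_reward_weights(expert_feats, candidate_feats, lr=0.1, iters=200):
    """Gradient ascent on w so expert feature expectations match those of a
    softmax distribution over candidate behaviors given the current reward."""
    w = np.zeros(expert_feats.shape[-1])
    mu_expert = expert_feats.mean(axis=0)
    for _ in range(iters):
        scores = candidate_feats @ w
        p = np.exp(scores - scores.max())      # stable softmax over candidates
        p /= p.sum()
        mu_policy = p @ candidate_feats        # expected features under w
        w += lr * (mu_expert - mu_policy)      # push toward expert statistics
    return w

# Synthetic ball trajectory; the "expert" gaze tracks it tightly (online
# control), while candidate behaviors range from tight tracking to drift.
T = 50
ball = np.cumsum(rng.normal(size=(T, 2)), axis=0)
ball_vel = np.gradient(ball, axis=0)
expert_gaze = ball + rng.normal(scale=0.05, size=(T, 2))
expert_feats = gaze_features(expert_gaze, np.gradient(expert_gaze, axis=0),
                             ball, ball_vel)

candidates = np.stack([
    gaze_features(ball + rng.normal(scale=s, size=(T, 2)),
                  ball_vel + rng.normal(scale=s, size=(T, 2)),
                  ball, ball_vel).mean(axis=0)
    for s in (0.1, 0.5, 1.0, 2.0)
])

w = recover_reward_weights(expert_feats, candidates)
print(w)  # positive weights: small position/velocity errors are rewarded
```

In the abstract's setting, fitting such weights per time window (rather than once per trial) is what would expose a within-trial shift between reward placed on moment-to-moment tracking (online control) and reward consistent with extrapolating the occluded trajectory (predictive control).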

