June 2007, Volume 7, Issue 9
Vision Sciences Society Annual Meeting Abstract
A reinforcement learning model of visually guided braking
Author Affiliations
  • Chris R. Sims
    Department of Cognitive Science, Rensselaer Polytechnic Institute
  • Brett R. Fajen
    Department of Cognitive Science, Rensselaer Polytechnic Institute
Journal of Vision June 2007, Vol.7, 151. doi:https://doi.org/10.1167/7.9.151

Models of continuously controlled, visually guided action (VGA) consist of laws of control that describe how informational variables map onto action variables. These models suffer from at least three problems. First, they are far too rigid to capture the flexibility that humans exhibit when adapting to changes in the environment, to the dynamics of the controlled system, and to the costs associated with making different kinds of errors. Second, existing models tend to ignore the inherent limitations of human perceptual and motor systems. Third, there is no compelling account of how laws of control are learned through experience. Reinforcement learning (RL) provides a potentially powerful framework for developing models of VGA that address these weaknesses. We developed an RL model of visually guided braking that simulates how an agent might learn a behavioral policy that maximizes performance, defined as stopping within a small radius of a target. Although RL is widely used to find optimal behavior in discrete tasks, visually guided action poses a significant obstacle: its state and action spaces are continuous. Our model represents continuous perceptual input (distance to target and velocity) and motor output (brake pressure) using tile coding for function approximation. This enables the model to achieve near-optimal task performance while greatly speeding learning, which is driven by the Q-learning update rule. Further, our RL model is designed to explore biologically realistic limitations on performance (e.g., perceptual noise, stimulus discriminability thresholds, and motor variability), as well as variations in reward structure. In contrast to the potentially arbitrary constraints of control-law models, reinforcement learning adapts behavior optimally to only the constraints of the model's physical embodiment and the reward structure of the task.
The model will be evaluated by comparing simulated data with data from experiments with human subjects performing a simulated braking task.
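The abstract does not report implementation details, so the following is a minimal Python sketch of the general mechanism it names: tile coding over a continuous (distance, velocity) state space with a tabular Q-learning update applied to the active tiles. The state ranges, number of tilings, discretized brake-pressure action set, and learning parameters below are all hypothetical placeholders, not the authors' values.

```python
import numpy as np

# Hypothetical parameters; the paper's actual tilings, ranges,
# and reward structure are not given in the abstract.
N_TILINGS = 8
TILES_PER_DIM = 10
D_MAX, V_MAX = 50.0, 20.0           # assumed distance (m) and speed (m/s) ranges
ACTIONS = np.linspace(0.0, 1.0, 5)  # brake pressure, discretized for the sketch

rng = np.random.default_rng(0)
offsets = rng.uniform(0, 1, size=(N_TILINGS, 2))  # random per-tiling offsets
# One weight per (tiling, distance tile, velocity tile, action).
w = np.zeros((N_TILINGS, TILES_PER_DIM + 1, TILES_PER_DIM + 1, len(ACTIONS)))

def active_tiles(d, v):
    """Yield the index of the single active tile in each offset tiling."""
    sd = d / D_MAX * TILES_PER_DIM
    sv = v / V_MAX * TILES_PER_DIM
    for t in range(N_TILINGS):
        i = int(np.clip(sd + offsets[t, 0], 0, TILES_PER_DIM))
        j = int(np.clip(sv + offsets[t, 1], 0, TILES_PER_DIM))
        yield t, i, j

def q_values(d, v):
    """Q(s, a) for every action: the sum of weights over active tiles."""
    q = np.zeros(len(ACTIONS))
    for t, i, j in active_tiles(d, v):
        q += w[t, i, j]
    return q

def q_update(d, v, a, r, d2, v2, done, alpha=0.1 / N_TILINGS, gamma=1.0):
    """One Q-learning step on the tile-coded approximator."""
    target = r if done else r + gamma * q_values(d2, v2).max()
    delta = target - q_values(d, v)[a]
    for t, i, j in active_tiles(d, v):
        w[t, i, j, a] += alpha * delta  # gradient is 1 for each active tile
```

Because each tiling contributes one active tile per state, a single update generalizes to nearby (distance, velocity) states that share tiles, which is what lets the model handle continuous input while learning quickly.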

Sims, C. R., &amp; Fajen, B. R. (2007). A reinforcement learning model of visually guided braking [Abstract]. Journal of Vision, 7(9):151, 151a, http://journalofvision.org/7/9/151/, doi:10.1167/7.9.151.
Supported by NSF 0236734.
