Vision Sciences Society Annual Meeting Abstract | September 2021
Open Access
Modeling associative motor learning through capacity-limited reinforcement learning
Author Affiliations
  • Rachel Lerch
    University of Texas at Austin
  • Chris R. Sims
Journal of Vision September 2021, Vol. 21, 2782. doi: https://doi.org/10.1167/jov.21.9.2782
Citation: Rachel Lerch, Chris R. Sims; Modeling associative motor learning through capacity-limited reinforcement learning. Journal of Vision 2021;21(9):2782. https://doi.org/10.1167/jov.21.9.2782.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Perception fundamentally supports the achievement of behavioral objectives; however, human perceptual processing, like any physical information channel, is capacity-limited. These processing limits influence decision outcomes at multiple levels, as the response selection necessary for intelligent behavior is constrained at the motor, perceptual, and cognitive levels. When task demands are high, such as when the appropriate action(s) are associated with multiple visual cues, perceptual and cognitive constraints (e.g., working memory) are especially important determinants of performance. In such circumstances, the individual must balance a trade-off between the computational goals of achieving good performance and minimizing the complexity of the behavioral policy (the mapping between perceptual cues and actions). In the present work we examine how people balance this trade-off in a motor learning paradigm. The experiment required participants to learn a mapping between visual cues and simple motor responses, where pushing a target with the appropriate amount of contact force earned points. In addition, the task was designed to manipulate information processing demands by varying the number of stimulus-action pairings (set size). In general, performance increased monotonically with policy complexity and was lower in larger set-size conditions. We use the formal mathematics of rate-distortion theory, a branch of information theory (Shannon, 1948), to develop a model of optimal task performance subject to information processing constraints that complements the empirical results. We then extend the rate-distortion objective to the reinforcement learning (RL) framework. This approach treats the agent's behavioral policy as a capacity-limited information channel that is unable to represent cue-action mappings with perfect fidelity. Compared with standard RL models, the capacity-limited RL model captured the qualitative differences between conditions. Together, this work highlights the importance of methods that consider resource constraints in modeling performance.
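The abstract does not state the objective function explicitly. As a minimal sketch, a standard rate-distortion formulation of the reward-complexity trade-off it describes (the notation below is ours, not the authors') is

\[ \max_{\pi} \; \mathbb{E}_{\pi}\!\left[ R(s, a) \right] \quad \text{subject to} \quad I(S; A) \le C, \]

or, in Lagrangian form,

\[ \pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ R(s, a) \right] - \lambda \, I(S; A), \]

where \( I(S; A) \) is the mutual information between visual cues and actions (the policy complexity), \( C \) is the channel capacity, and \( \lambda \) controls the trade-off between reward and complexity. The optimizer satisfies the self-consistent, Blahut-Arimoto-style fixed point

\[ \pi^{*}(a \mid s) \propto p(a) \exp\!\left( R(s, a) / \lambda \right), \qquad p(a) = \sum_{s} p(s) \, \pi^{*}(a \mid s). \]

Below is a minimal numerical sketch of this fixed point in Python; the function name, toy reward matrix, and parameter values are hypothetical illustrations, not the authors' implementation.

import numpy as np

def rate_distortion_policy(R, p_s, lam, n_iter=200):
    # Blahut-Arimoto-style fixed-point iteration for the
    # reward-complexity trade-off: maximize E[R] - lam * I(S;A).
    # R   : (n_states, n_actions) reward matrix (hypothetical task values)
    # p_s : (n_states,) stimulus distribution
    # lam : trade-off parameter; larger lam yields a simpler policy
    n_states, n_actions = R.shape
    p_a = np.full(n_actions, 1.0 / n_actions)  # initial action marginal
    for _ in range(n_iter):
        # Optimal channel given the current marginal:
        # pi(a|s) proportional to p(a) * exp(R(s,a) / lam)
        logits = R / lam + np.log(p_a + 1e-12)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)
        # Update the action marginal to be consistent with the channel
        p_a = p_s @ pi
    return pi, p_a

# Toy example: four cues, four force levels, correct response earns 1 point.
R = np.eye(4)
pi, p_a = rate_distortion_policy(R, p_s=np.ones(4) / 4, lam=0.5)

Sweeping \( \lambda \) traces out the optimal reward-complexity frontier against which, on our reading, empirical performance in each set-size condition can be compared; in the RL extension the abstract describes, a learned value estimate would stand in for the known reward matrix R.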
