Journal of Vision
September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2024
Bayesian inference of perceptual uncertainty, behavioral costs, and prior beliefs for continuous perception-action tasks
Author Affiliations & Notes
  • Tobias F. Niehues
    Institute of Psychology, Technische Universität Darmstadt
    Centre for Cognitive Science, Technische Universität Darmstadt
  • Dominik Straub
    Institute of Psychology, Technische Universität Darmstadt
    Centre for Cognitive Science, Technische Universität Darmstadt
  • Constantin A. Rothkopf
    Institute of Psychology, Technische Universität Darmstadt
    Centre for Cognitive Science, Technische Universität Darmstadt
  • Footnotes
    Acknowledgements: This research was supported by the European Research Council (ERC; Consolidator Award 'ACTOR', project number ERC-CoG-101045783).
Journal of Vision September 2024, Vol.24, 376. doi:https://doi.org/10.1167/jov.24.10.376
Tobias F. Niehues, Dominik Straub, Constantin A. Rothkopf; Bayesian inference of perceptual uncertainty, behavioral costs, and prior beliefs for continuous perception-action tasks. Journal of Vision 2024;24(10):376. https://doi.org/10.1167/jov.24.10.376.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Bayesian observer and actor models have provided normative explanations for behavior in many perception-action tasks, including discrimination, cue combination, and sensorimotor control, by attributing behavioral variability and biases to factors such as perceptual and motor uncertainty, prior beliefs, and behavioral costs. However, it is unclear how to extend these models to more complex tasks such as continuous production and reproduction tasks, because inferring behavioral parameters is often difficult due to analytical intractability. Here, we overcome this limitation by approximating Bayesian actor models using neural networks. Because Bayesian actor models are analytically tractable only for a very limited set of probability distributions (e.g., Gaussians) and cost functions (e.g., quadratic), one typically resorts to numerical methods, which makes inference of their parameters computationally difficult. To address this, we approximate the optimal actor using a neural network trained on a wide range of parameter settings. The pre-trained neural network is then used to efficiently perform sampling-based inference of the Bayesian actor model’s parameters, with performance gains of up to three orders of magnitude compared to numerical solution methods. We validated the proposed method on synthetic data, showing that recovery of sensorimotor parameters is feasible. Importantly, individual behavioral differences can be attributed to differences in perceptual uncertainty, motor variability, and internal costs. Finally, we analyzed real data from a task in which participants had to throw beanbags towards targets at different distances. Behaviorally, subjects differed in how strongly they undershot or overshot different targets and in whether they showed a regression to the mean over trials. We could attribute these complex behavioral patterns to changes in priors due to learning, and the undershoots and overshoots to behavioral costs and motor variability. Taken together, we present an analysis method applicable to continuous production and reproduction tasks.
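
The core idea described in the abstract, amortizing the numerically expensive Bayesian actor with a pre-trained neural network and then running sampling-based inference over its parameters, can be illustrated with a short sketch. The following is not the authors' implementation: the network architecture, the four-parameter layout (perceptual uncertainty, motor variability, cost asymmetry, prior mean), the Gaussian response likelihood around the predicted optimal action, the random-walk Metropolis sampler, and the placeholder `solve_actor_numerically` are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of amortizing a Bayesian actor model
# with a neural network and using it for sampling-based parameter inference.
# ActorNet, solve_actor_numerically, the 4-parameter layout, and the Gaussian
# response likelihood are illustrative assumptions.

import torch
import torch.nn as nn

class ActorNet(nn.Module):
    """Surrogate mapping (stimulus, perceptual sd, motor sd, cost asymmetry,
    prior mean) to the Bayesian actor's optimal mean action."""
    def __init__(self, n_inputs=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_surrogate(solve_actor_numerically, n_samples=10_000, epochs=500):
    """Offline step: fit the surrogate to numerically solved optimal actions
    over a wide range of stimulus/parameter settings (the slow part, done once)."""
    inputs = torch.rand(n_samples, 5)              # sampled stimulus/parameter settings
    targets = solve_actor_numerically(inputs)      # numerical solutions, shape (n_samples, 1)
    model = ActorNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        opt.step()
    return model

def log_likelihood(model, theta, stimuli, responses):
    """Gaussian response likelihood around the surrogate's predicted optimal
    action; theta = (perceptual sd, motor sd, cost asymmetry, prior mean)."""
    x = torch.cat([stimuli.unsqueeze(1), theta.expand(len(stimuli), -1)], dim=1)
    mean_action = model(x).squeeze(1)
    return torch.distributions.Normal(mean_action, theta[1]).log_prob(responses).sum()

@torch.no_grad()
def metropolis(model, stimuli, responses, n_steps=5_000, step=0.05):
    """Random-walk Metropolis over theta; each step needs only a cheap forward
    pass through the surrogate instead of a fresh numerical solve."""
    theta = torch.full((4,), 0.5)
    logp = log_likelihood(model, theta, stimuli, responses)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step * torch.randn(4)
        if (proposal[:2] > 0).all():               # keep the noise sds positive
            logp_prop = log_likelihood(model, proposal, stimuli, responses)
            if torch.rand(()) < torch.exp(logp_prop - logp):
                theta, logp = proposal, logp_prop
        samples.append(theta.clone())
    return torch.stack(samples)
```

The point of the amortization is that every likelihood evaluation inside the sampler is a single forward pass through the surrogate rather than a fresh numerical optimization, which is where a speed-up of the order reported in the abstract would come from.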
