July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract  |   July 2013
Learning and optimal inference in a novel spatial localization task
Author Affiliations
  • Vikranth R. Bejjanki
    Center for Visual Science and Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
    Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544
  • David C. Knill
    Center for Visual Science and Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
  • Richard N. Aslin
    Center for Visual Science and Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
Journal of Vision July 2013, Vol.13, 746. doi:https://doi.org/10.1167/13.9.746
Vikranth R. Bejjanki, David C. Knill, Richard N. Aslin; Learning and optimal inference in a novel spatial localization task. Journal of Vision 2013;13(9):746. https://doi.org/10.1167/13.9.746.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

To estimate the location of a visual target, an ideal observer must combine what is seen (corrupted by sensory noise) with what is known (from prior experience). Bayesian inference provides a principled method for optimally accomplishing this. Here we provide evidence from two experiments that observers combine sensory information with complex prior knowledge in a Bayes-optimal manner as they estimate the locations of targets on a touch screen. On each trial, the x-y location of a target was drawn from one of two underlying distributions (the "priors"), centered on mean positions on the left and right of the display and with different variances. The observer, however, saw only a cluster of dots normally distributed around that location (the "likelihood"). Across 1200 trials, the variance of the dot cluster was manipulated to provide three levels of reliability for the likelihood. Feedback on observers' accuracy was provided after each touch, with dots marking the touched location and the true target location on the screen. In Experiment 1, consistent with the Bayesian model, observers not only relied less on the likelihoods (and more on the priors) as the cluster of dots increased in variance, but also assigned greater weight to the more reliable prior. In Experiment 2, we obtained a direct estimate of observers' priors by additionally having them localize the target in the absence of any sensory information. We found that within a few hundred trials, observers reliably learned the true means of both prior distributions, but it took them much longer to learn the relative reliabilities of the two distributions. In sum, human observers optimized their performance in a novel spatial localization task by learning the relevant environmental statistics (the two distributions of target locations) and optimally integrating these statistics with sensory information.
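The Bayes-optimal combination described above can be illustrated with a minimal sketch (not the authors' actual model code). Assuming a Gaussian prior and a Gaussian likelihood along one dimension, the posterior mean is a reliability-weighted average of the sensed location and the prior mean, so a noisier dot cluster pulls the estimate toward the prior; the function name and example values below are hypothetical:

```python
def bayes_optimal_estimate(sensed_x, sigma_likelihood, mu_prior, sigma_prior):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood.

    Each cue is weighted by its reliability (inverse variance), so a larger
    sigma_likelihood (a more variable dot cluster) shifts the estimate away
    from the sensed location and toward the prior mean.
    """
    w_likelihood = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w_likelihood * sensed_x + (1.0 - w_likelihood) * mu_prior

# Low sensory noise: estimate stays near the sensed location.
print(bayes_optimal_estimate(10.0, 1.0, 0.0, 3.0))  # 9.0
# High sensory noise: estimate is pulled toward the prior mean.
print(bayes_optimal_estimate(10.0, 3.0, 0.0, 3.0))  # 5.0
```

This weighting predicts the qualitative pattern reported in Experiment 1: as cluster variance grows, the likelihood weight shrinks and reliance on the (learned) prior increases.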

Meeting abstract presented at VSS 2013
