Abstract
To estimate the location of a visual target, an ideal observer must combine what is seen (corrupted by sensory noise) with what is known (from prior experience). Bayesian inference provides a principled method for accomplishing this optimally. Here we provide evidence from two experiments that observers combine sensory information with complex prior knowledge in a Bayes-optimal manner as they estimate the locations of targets on a touch screen. On each trial, the x-y location of a target was drawn from one of two underlying distributions (the "priors"), centered on mean positions on the left and right of the display and differing in variance. The observer, however, saw only a cluster of dots normally distributed around that location (the "likelihood"). Across 1200 trials, the variance of the dot cluster was manipulated to provide three levels of likelihood reliability. After each touch, feedback on accuracy was provided by displaying dots marking the touched location and the true target location on the screen. In Experiment 1, consistent with the Bayesian model, observers not only relied less on the likelihood (and more on the priors) as the variance of the dot cluster increased, but also assigned greater weight to the more reliable prior. In Experiment 2, we obtained a direct estimate of observers' priors by additionally having them localize the target in the absence of any sensory information. We found that within a few hundred trials observers reliably learned the true means of both prior distributions, but that learning the relative reliabilities of the two distributions took much longer. In sum, human observers optimized their performance in a novel spatial localization task by learning the relevant environmental statistics (the two distributions of target locations) and optimally integrating those statistics with sensory information.
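For reference, the Gaussian form of this integration can be sketched as follows; this is a minimal illustration assuming a single Gaussian prior and a Gaussian likelihood, and the symbols are our labels rather than notation from the study. Writing the prior as \(\mathcal{N}(\mu_p, \sigma_p^2)\) and the sensory estimate as \(x_s\) with likelihood variance \(\sigma_s^2\), the Bayes-optimal estimate is the reliability-weighted average

\[
\hat{x} \;=\; w\,x_s \;+\; (1 - w)\,\mu_p,
\qquad
w \;=\; \frac{1/\sigma_s^2}{1/\sigma_s^2 + 1/\sigma_p^2}.
\]

As the dot cluster becomes more variable (larger \(\sigma_s^2\)), the weight \(w\) on the sensory estimate falls and the estimate shifts toward the prior mean; a tighter prior (smaller \(\sigma_p^2\)) likewise pulls the estimate more strongly, which is the qualitative pattern reported in Experiment 1.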
Meeting abstract presented at VSS 2013