Vision Sciences Society Annual Meeting Abstract  |   September 2011
Uncertainty in scene segmentation: Statistically optimal effects on learning visual representations
Author Affiliations
  • József Fiser
    Department of Psychology and the Neuroscience Program, Brandeis University, Waltham, MA 02453, USA
    Volen Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA
  • Gergö Orbán
    Volen Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA
    Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, United Kingdom
  • Máté Lengyel
    Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, United Kingdom
Journal of Vision September 2011, Vol. 11, 994. doi: https://doi.org/10.1167/11.11.994
Abstract

A number of recent psychophysical studies have argued that human behavioral processing of sensory inputs is best captured by probabilistic computations. Because their cues conflict, real scenes are ambiguous and support multiple hypotheses about how they should be interpreted, which requires handling uncertainty. The effects of this inherent perceptual uncertainty on immediate perceptual decisions have been well characterized, but its effects on learning (beyond a non-specific slowing) have not been studied. Although statistically optimal learning is known to require combining evidence from all alternative hypotheses, weighted by their respective certainties, it remains an open question whether humans learn this way. In this study, we tested whether human observers can learn about, and make inferences in, situations where multiple interpretations compete for each stimulus. We used an unsupervised visual learning paradigm in which ecologically relevant but conflicting cues gave rise to alternative hypotheses about how unknown, complex, multi-shape visual scenes should be segmented. The strengths of the conflicting segmentation cues ("high-level" statistically learned chunks and "low-level" grouping features of the input based on connectedness) were systematically manipulated in a series of experiments, and human performance was compared to Bayesian model averaging. We found that humans weighted and combined alternative hypotheses about scene description according to their reliability, demonstrating an optimal treatment of uncertainty in learning. These results capture not only the way adults learn to segment new visual scenes, but also the qualitative shift in learning performance from 8-month-old infants to adults. Our results suggest that perceptual learning models based on point estimates, which evaluate only the single hypothesis with the "best explanatory power" rather than averaging over models, are not sufficient to characterize human visual learning of complex sensory inputs.
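As an illustrative sketch of the contrast drawn above (the notation is ours, not the authors'): a model-averaging learner bases a judgment y about a new scene x on all candidate segmentation hypotheses h, each weighted by its posterior probability given the previously observed scenes D, whereas a point-estimate learner commits to the single best hypothesis:

\[
p(y \mid x, D) = \sum_{h} p(y \mid x, h)\, p(h \mid D)
\qquad \text{vs.} \qquad
p(y \mid x, h^{*}), \quad h^{*} = \operatorname*{arg\,max}_{h} p(h \mid D).
\]

In the experiments described here, h would range over alternative segmentations of a scene (for example, groupings suggested by statistically learned chunks versus by connectedness), and the comparison is between weighting these alternatives by their reliability and keeping only the single most probable one.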

Funding: Swartz Foundation, NIH, Wellcome Trust, EU-FP7-Marie Curie.