Joseph L. Austerweil, Thomas L. Griffiths; Understanding how people learn the features of objects as Bayesian inference. Journal of Vision 2010;10(7):1107. doi: 10.1167/10.7.1107.
Research in perceptual learning has demonstrated that human feature representations can change with experience (Goldstone, 1998). However, previous computational models of feature representation learning have either presupposed the number of features (Goldstone, 2003) or assumed that complex basic units are known a priori (Orban et al., 2008). We propose a nonparametric Bayesian framework that infers feature representations for observed stimuli from raw sensory information without specifying the number of features a priori (Austerweil & Griffiths, 2008). This approach captures two main phenomena from the perceptual learning literature: differentiation (Pevtzow & Goldstone, 1994) and unitization (Shiffrin & Lightfoot, 1997). Additionally, our approach makes a novel prediction about how people learn features: people should infer whole objects as features when the parts composing the objects strongly co-vary across objects, and infer the parts themselves as features when the parts occur largely independently. In our first experiment, one group of participants observed objects whose parts co-varied and did not generalize to unseen combinations of those parts (Austerweil & Griffiths, 2009), whereas the other group observed parts occurring independently and did generalize to unseen combinations of parts. We demonstrate that the following pre-existing psychological frameworks and models cannot explain these results: exemplar models (Nosofsky, 1986), prototype models (Reed, 1972), changes of concavity (Hoffman & Richards, 1985), and recognition-by-components (Biederman, 1987). This suggests that participants used distributional information to infer the features on which they based their generalization judgments, as our model suggests. In a second experiment, we replicate this effect with a set of rendered 3-D objects, showing that the effect holds across two very different types of stimuli.
As our computational framework suggests, part correlation is an important cue that people use to infer feature representations.
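The distributional cue driving the prediction can be sketched in a few lines. In this hypothetical illustration (not the authors' actual model or stimuli), objects are built from two "slots", each filled by one of two parts (A or B in slot 1, C or D in slot 2); the co-varying group sees only part combinations AC and BD, while the independent group sees all four combinations. A simple cross-slot correlation then separates the two training conditions:

```python
import numpy as np

# Each row is one observed object; columns mark presence of parts
# A, B (slot 1) and C, D (slot 2). The part names and two-slot setup
# are illustrative assumptions, not the experiment's actual stimuli.
covarying = np.array([[1, 0, 1, 0],    # AC
                      [0, 1, 0, 1]])   # BD

independent = np.array([[1, 0, 1, 0],  # AC
                        [1, 0, 0, 1],  # AD
                        [0, 1, 1, 0],  # BC
                        [0, 1, 0, 1]]) # BD

def cross_slot_corr(X):
    """Correlation between part A (column 0) and part C (column 2)
    across the observed objects."""
    return float(np.corrcoef(X[:, 0], X[:, 2])[0, 1])

print(cross_slot_corr(covarying))    # high: whole objects plausible as features
print(cross_slot_corr(independent))  # near zero: parts plausible as features
```

On this toy data the co-varying condition yields a cross-slot correlation of 1 and the independent condition a correlation of 0, matching the qualitative prediction that strong part co-variation favors whole-object features while independence favors part-based features.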