Abstract
Prediction and extrapolation are key problems in many perceptual tasks, as exemplified by tracking object motion through occlusion: an object moves along a variable path before disappearing, and the observer must predict where it will reemerge at a specified distance beyond the point of occlusion. In general, predicting the trajectory of an object during occlusion requires an internal model of the object's motion to extrapolate future positions from the observed trajectory. In recent work (Fulvio, Maloney & Schrater, VSS2009), we showed that, in the absence of feedback (i.e., no learning), people naturally adopt one of two generic motion extrapolation models: a constant acceleration model (producing quadratic extrapolation) or a constant velocity model (producing linear extrapolation). How such predictive models are learned is an open question. To address this question, we had subjects extrapolate the motion of a swarm of sample points generated by random walks from two different families of dynamics, one periodic and one quadratic. For both motion models, the ideal observer is a Kalman filter, and we compute normative learning predictions with a Bayesian ideal learner. Simulations of the ideal learner predict that learning of motion models will depend on several factors, including how strongly the motion models differ in their predictions, the consistency of the motion type across trials, and limited noise. To test these predictions, subjects performed a motion extrapolation task in which they positioned a “bucket” with a mouse to capture the object as it emerged from occlusion, and feedback was given at the end of each trial. While subjects' performance was less than ideal, we provide clear evidence that they adapt their internal motion models toward the generative process in a manner consistent with statistical learning.
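
The two generic extrapolation models and the normative model comparison can be made concrete with a small simulation. The sketch below is illustrative only, not the study's implementation: it assumes 1-D position samples, hand-picked process and observation noise, and a simple equal-prior likelihood comparison between a constant-velocity and a constant-acceleration Kalman filter; all specific parameter values and the sample trajectory are invented for the example.

```python
import numpy as np

def kalman_filter_extrapolate(zs, F, H, Q, R, n_extrap):
    """Filter the visible trajectory under dynamics F, accumulating the data
    log-likelihood, then predict forward open-loop through the occluder."""
    d = F.shape[0]
    x = np.zeros(d)                    # state estimate
    P = np.eye(d) * 1e3                # diffuse prior covariance
    loglik = 0.0
    for z in zs:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # innovation and its covariance
        resid = z - H @ x
        S = H @ P @ H.T + R
        loglik += -0.5 * (np.log(2 * np.pi * np.linalg.det(S))
                          + resid @ np.linalg.solve(S, resid))
        # update step
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ resid
        P = (np.eye(d) - K @ H) @ P
    # open-loop extrapolation during occlusion (no observations)
    preds = []
    for _ in range(n_extrap):
        x = F @ x
        preds.append((H @ x)[0])
    return loglik, np.array(preds)

dt = 1.0
# Constant-velocity dynamics: state [pos, vel] -> linear extrapolation
F_cv = np.array([[1.0, dt],
                 [0.0, 1.0]])
H_cv = np.array([[1.0, 0.0]])
Q_cv = 0.01 * np.eye(2)
# Constant-acceleration dynamics: state [pos, vel, acc] -> quadratic extrapolation
F_ca = np.array([[1.0, dt, 0.5 * dt ** 2],
                 [0.0, 1.0, dt],
                 [0.0, 0.0, 1.0]])
H_ca = np.array([[1.0, 0.0, 0.0]])
Q_ca = 0.01 * np.eye(3)
R = np.array([[0.25]])                 # observation noise (illustrative)

# Noisy quadratic pre-occlusion samples (illustrative stand-in for the stimuli)
t = np.arange(12, dtype=float)
zs = (0.2 * t ** 2 + t + 0.5 * np.random.randn(t.size)).reshape(-1, 1)

ll_cv, pred_cv = kalman_filter_extrapolate(zs, F_cv, H_cv, Q_cv, R, n_extrap=5)
ll_ca, pred_ca = kalman_filter_extrapolate(zs, F_ca, H_ca, Q_ca, R, n_extrap=5)

# Toy model comparison: posterior probability of the quadratic dynamics under
# equal priors, standing in for the Bayesian ideal learner
post_ca = 1.0 / (1.0 + np.exp(ll_cv - ll_ca))
print("P(quadratic | data) =", post_ca)
print("linear extrapolation:   ", pred_cv)
print("quadratic extrapolation:", pred_ca)
```

On a quadratic trajectory such as the one above, the constant-acceleration filter typically accumulates a higher data log-likelihood, so the posterior favors the quadratic dynamics; how quickly that posterior sharpens depends on how strongly the two models' predictions differ and on the noise level, which is the intuition behind the normative learning predictions described in the abstract.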