Abstract
When the head is free to move, subjects frequently engage in coordinated head and eye movements to bring a target object onto the fovea. Freedman and Sparks (1997) found that the relative contributions of head and eye movements to the total gaze shift are a non-linear function of initial eye position and total gaze displacement. Freedman (2001) and Wang et al. (2002) have recently proposed descriptive mathematical models for the decomposition of the total gaze shift into head and eye movements. It remains an open question, however, (a) whether and how this decomposition can be seen as resulting from an optimality principle, (b) whether this decomposition strategy is learned during development, and (c) if so, what learning mechanisms are responsible for its acquisition. We propose a model for the simultaneous learning of the calibration of goal-directed head/eye movements and of the optimal gaze shift decomposition, based on a reinforcement learning mechanism (Schultz et al., 1997). We show that the rather complex, behaviorally observed gaze decomposition can be understood as the result of optimizing a simple cost function. In our model, the cerebellum plays a key role in learning a gaze shift decomposition that accurately brings the desired target to the fovea while minimizing this cost function. Our model is consistent with the known anatomy and physiology of the oculomotor control system. The model efficiently learns the gaze shift decompositions observed experimentally and makes a number of testable predictions. The model has also been implemented and tested on an anthropomorphic robot head that autonomously learns the optimal gaze decomposition.
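To illustrate the general idea of cost-based gaze decomposition referred to above, the following sketch splits a desired gaze shift into head and eye components by brute-force minimization of a simple cost. The particular cost (a penalty on final eye eccentricity beyond a comfort zone plus a head-effort term), the weights, the comfort-zone threshold, and the oculomotor range limit are illustrative assumptions and are not taken from the abstract's model, which learns the decomposition via reinforcement learning rather than solving it analytically:

# Minimal sketch (illustrative assumptions, not the abstract's actual cost function).
import numpy as np

EYE_RANGE = 45.0          # assumed oculomotor range limit (deg)
COMFORT = 20.0            # assumed comfort zone: eye eccentricity penalized only beyond this (deg)
W_EYE, W_HEAD = 1.0, 0.2  # assumed penalty weights

def decompose_gaze_shift(gaze_shift, eye_pos0):
    """Return (head_move, eye_move) summing to gaze_shift, at minimum cost."""
    head_moves = np.linspace(-90.0, 90.0, 3601)   # candidate head contributions (deg)
    eye_moves = gaze_shift - head_moves           # eye completes the remaining shift
    eye_final = eye_pos0 + eye_moves              # final eye-in-head position
    ecc = np.maximum(0.0, np.abs(eye_final) - COMFORT)
    cost = W_EYE * ecc**2 + W_HEAD * head_moves**2
    cost[np.abs(eye_final) > EYE_RANGE] = np.inf  # keep the eye within its range
    best = int(np.argmin(cost))
    return head_moves[best], eye_moves[best]

for g in (10.0, 40.0, 80.0):
    h, e = decompose_gaze_shift(g, eye_pos0=0.0)
    print(f"gaze {g:5.1f} deg -> head {h:6.2f} deg + eye {e:6.2f} deg")

In this toy setting, small gaze shifts are carried almost entirely by the eyes while larger shifts recruit progressively larger head contributions, qualitatively similar to the dependence on total gaze displacement reported by Freedman and Sparks (1997).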
References:
Freedman EG, Sparks DL (1997). J. Neurophysiol., 77:2328-2348.
Freedman EG (2001). Biol. Cybern., 84:453-462.
Schultz W, Dayan P, Montague PR (1997). Science, 275:1593-1599.
Wang X, Jin J, Jabri M (2002). Neural Networks, 15:811-832.