Abstract
A growing body of evidence indicates that the brain uses eye-centered coordinates to store target locations. Accordingly, to remember target locations across eye movements, their retinal stimulation sites must be remapped (updated) to the new retinal locations. Furthermore, a recent study has shown that this remapping process correctly accounts for the non-commutative aspects of 3-D eye rotations during saccades with the head fixed. However, during gaze shifts with the head free to move naturally, this retinotopic updating process must be considerably more complex. Clearly, the contribution of 3-D head rotation to the gaze shift must be accounted for. Moreover, such head movements also cause the eye to translate in space, which must be taken into account in a manner that depends on target distance. To explore the ramifications of this for the neurophysiology of retinotopic updating during head-free gaze shifts, we modeled the updating of visual space across 3-D eye and head movements, accounting for the 3-D, non-commutative aspects of eye and head rotations as well as for target depth and translational information. In the model, internal estimates of target locations relative to the eye are instantaneously updated using physiologically realistic feedback signals coding eye and head motion as well as target depth. The model predicts correct updating for horizontal, vertical, and torsional rotations of the eyes and head. The simulations also show correct translational updating, which has the remarkable implication that targets at different depths shift at different speeds along the retina.
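The geometric core of such an updating scheme can be sketched numerically. The snippet below is a minimal illustration, not the neural model described in the paper: it assumes that the eye-in-head and head-in-space rotations are available as 3-D rotation matrices (built here with Rodrigues' formula), that the eye's translation is known in the initial eye frame, that the initial eye and head frames are aligned, and that the remembered target is stored as a 3-D eye-centered position (direction times depth). The angles and the translation vector are arbitrary example values. The printout illustrates the final point of the abstract: after the same gaze shift, the retinal direction of a near target changes more than that of a far one.

```python
# Minimal sketch (not the authors' implementation): eye-centered updating of a
# remembered target across a head-free gaze shift, assuming rotations are given
# as 3x3 matrices and the eye translation is known in the initial eye frame.
import numpy as np

def rot(axis, angle_deg):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    a = np.deg2rad(angle_deg)
    k = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def update_target(p_old, R_eye_in_head, R_head_in_space, t_eye):
    """Return the remembered target expressed in the new eye frame.

    p_old : target position in the old eye frame (m)
    R_eye_in_head, R_head_in_space : 3-D rotations of eye and head
    t_eye : displacement of the eye center, in the old eye frame (m)
    """
    # Full eye-in-space rotation; matrix order matters (non-commutative).
    R = R_head_in_space @ R_eye_in_head
    # Rigid-body frame change: subtract the eye translation, then rotate back.
    return R.T @ (p_old - t_eye)

# Example: 20 deg head yaw plus 10 deg eye-in-head pitch, with a hypothetical
# ~9 cm eye translation caused by the head rotating about the neck axis.
R_head = rot([0, 0, 1], 20.0)          # head rotation in space
R_eye  = rot([0, 1, 0], 10.0)          # eye rotation in the head
t_eye  = np.array([0.02, 0.09, 0.0])   # example eye translation (m)

for depth in (0.3, 3.0):               # near vs. far target, straight ahead
    p_old = np.array([depth, 0.0, 0.0])
    p_new = update_target(p_old, R_eye, R_head, t_eye)
    # Retinal direction = unit vector; its change depends on target depth.
    print(depth, p_new / np.linalg.norm(p_new))
```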