To understand how neural saccade signals account for initial eye position when converting visual signals to motor commands, and for the non-commutativity of rotations when updating target locations across saccades, we trained three-layer neural networks, using back-propagation learning and physiologically and geometrically realistic inputs and outputs, to perform the target-position updating associated with the two-saccade paradigm to within 1° accuracy. All networks were supplied with a visual target (T2) input, a 3-D efference copy of the first saccade to target T1, and a 3-D copy of eye position. In the first network, both input and output layers topographically encoded target location in retinal coordinates, the output being the updated position of T2 after the first saccade. The hidden-layer visual receptive fields showed a bifurcation into “on” and “off” regions, with maximal activation in the former and minimal in the latter, modulated by the first-saccade input through “shift fields” that displaced the boundary between the two regions in line with the first-saccade vector. The second network's output was a topographical representation of the 2-D second-saccade motor error in head coordinates, requiring an additional reference-frame transformation. This network solved the task similarly to the first, but with larger contributions from the initial eye-position input. The third network's output was a vectorial representation of the 3-D motor error in “brainstem coordinates,” requiring a further spatial-to-rate-code transformation. Its hidden-layer units showed broadly tuned, gradually varying receptive fields whose magnitudes were modulated by the eye-position and first-saccade inputs in the manner of classic gain fields. These findings show that the brain can potentially use several different mechanisms for target updating and reference-frame transformation during saccades; the actual neural mechanism may resemble a hybrid of what we observed in networks 1 and 2.
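The abstract does not reproduce the simulation code, but the geometric problem the networks were trained on can be made concrete with a minimal Python sketch. The coordinate convention, the 20° target eccentricities, the shortest-arc (Listing's-law-like) model of the first saccade, and the helper names gaze_dir and az_el below are illustrative assumptions, not values or code from the study; the sketch only contrasts the correct 3-D rotational update of T2 with a naive 2-D retinal-vector subtraction, and checks that 3-D rotations do not commute.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def gaze_dir(azimuth_deg, elevation_deg):
        """Unit gaze direction in head-fixed coordinates (x = straight ahead,
        y = left, z = up) for a target at the given eccentricities."""
        az, el = np.radians([azimuth_deg, elevation_deg])
        return np.array([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])

    def az_el(v):
        """Azimuth/elevation (deg) of a unit direction vector."""
        return np.degrees([np.arctan2(v[1], v[0]), np.arcsin(v[2])])

    # Hypothetical target eccentricities (degrees), not taken from the study.
    t1 = gaze_dir(20.0, 20.0)   # first saccade goal: 20 deg left, 20 deg up
    t2 = gaze_dir(20.0, 0.0)    # second target, flashed before saccade 1

    # First saccade from primary position to T1, modelled as the shortest-arc
    # rotation (axis in Listing's plane) -- the 3-D "efference copy" signal.
    ahead = gaze_dir(0.0, 0.0)
    axis = np.cross(ahead, t1)
    angle = np.arccos(np.clip(ahead @ t1, -1.0, 1.0))
    saccade1 = R.from_rotvec(angle * axis / np.linalg.norm(axis))

    # Correct 3-D update: express T2 in the rotated (eye-fixed) frame by
    # applying the inverse of the first-saccade rotation.
    t2_updated = saccade1.inv().apply(t2)

    # Naive 2-D update: subtract retinal vectors as if rotations commuted.
    naive = np.array([20.0, 0.0]) - np.array([20.0, 20.0])

    print("Correct updated retinal location (deg):", np.round(az_el(t2_updated), 1))
    print("Naive retinal-vector difference  (deg):", naive)

    # Non-commutativity of 3-D rotations: horizontal-then-vertical is not the
    # same eye orientation as vertical-then-horizontal.
    rot_h = R.from_rotvec(np.radians(20.0) * np.array([0.0, 0.0, 1.0]))
    rot_v = R.from_rotvec(np.radians(20.0) * np.array([0.0, 1.0, 0.0]))
    print("Rotations commute?",
          np.allclose((rot_h * rot_v).as_matrix(), (rot_v * rot_h).as_matrix()))

In this illustrative configuration the naive 2-D subtraction gives (0°, 20° down), whereas the 3-D update yields roughly (1.3°, 20° down), a discrepancy larger than the 1° training criterion; this is the kind of geometric error that the networks' 3-D efference-copy and eye-position inputs allow them to correct.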
Funding provided by CIHR, NSERC and OGS Canada