Abstract
Most models of eye-movement control during reading assume that saccadic behavior primarily reflects ongoing word-identification processing. Here we show, in contrast with this view, that an image-based model of saccade programming in the superior colliculus (SC) can predict the highly stereotyped saccadic behavior observed during reading, simply by averaging early visual signals. Twenty-nine native French speakers read 316 French sentences presented one at a time on a computer screen while their eye movements were recorded. Images of the sentences were input to the model. Like the participants, the model initially fixated the beginning of each sentence. On each fixation, it first applied gaze-contingent blurring to the sentence image to reflect visual-acuity limitations. A luminance-contrast saliency map was then computed on the blurred image and projected onto the fovea-magnified space of the SC, where neural population activity was averaged, first over the visual map and then over the motor map. Averaging over the most active motor population determined the subsequent saccade vector. The new fixation location was in turn inhibited to prevent later oculomotor return. Results showed that the model, like the participants, mainly made left-to-right, forward saccades, with relatively few regressive saccades (21% and 20%, respectively). The model also captured benchmark word-based eye-movement patterns, which were replicated here: a greater likelihood of skipping shorter and nearer words, a preferred landing position near the center of words, a linear relationship between a saccade's launch site and its landing site, a greater likelihood of refixating a word when the initial fixation deviated from the word's center, and more regressions following word skipping. Thus, eye movements during reading primarily reflect fundamental visuo-motor principles rather than ongoing language-related processes. The proof is that a model of the SC, which treats sentences as meaningless visual stimuli, reproduces readers' eye-movement patterns despite being unable to recognize words!
Meeting abstract presented at VSS 2016
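
To make the processing steps concrete, the following is a minimal Python sketch of the pipeline the abstract describes, under loose assumptions: a difference-of-Gaussians contrast map stands in for the saliency computation, a one-dimensional eccentricity weighting stands in for the fovea-magnified SC projection, and Gaussian pooling stands in for population averaging. All function names and parameter values are hypothetical; this is not the authors' implementation.

# Illustrative sketch (not the authors' code): gaze-contingent blurring,
# luminance-contrast saliency, fovea-weighted pooling, a saccade target taken
# as the centre of gravity of the most active region, and inhibition of return.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_contingent_blur(image, fix_x, base_sigma=0.5, slope=0.05):
    """Blur each column more strongly with horizontal eccentricity (assumed linear falloff)."""
    img = image.astype(float)
    blurred = np.empty_like(img)
    for x in range(img.shape[1]):
        sigma = base_sigma + slope * abs(x - fix_x)
        blurred[:, x] = gaussian_filter(img, sigma)[:, x]
    return blurred

def luminance_contrast_saliency(image, sigma_center=1.0, sigma_surround=5.0):
    """Centre-surround (difference-of-Gaussians) contrast as a stand-in saliency map."""
    return np.abs(gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround))

def select_saccade_target(saliency, fix_x, inhibition, magnification=0.02, pool_sigma=4.0):
    """Pool fovea-weighted activity and average over the most active region."""
    cols = np.arange(saliency.shape[1])
    # Assumed foveal magnification: down-weight activity with horizontal eccentricity.
    weights = 1.0 / (1.0 + magnification * np.abs(cols - fix_x))
    activity = gaussian_filter(saliency * weights[None, :] * inhibition, pool_sigma)
    # Centre of gravity of cells above 90% of the peak, standing in for
    # averaging over the most active motor population.
    mask = activity >= 0.9 * activity.max()
    x_target = (activity * mask * cols[None, :]).sum() / (activity * mask).sum()
    return int(round(x_target))

def read_sentence(image, n_fixations=10, ior_sigma=6.0):
    """Simulate a horizontal scanpath over a sentence image (illustrative only)."""
    fix_x = 0                                   # start at the beginning of the sentence
    inhibition = np.ones_like(image, dtype=float)
    cols = np.arange(image.shape[1])
    scanpath = [fix_x]
    for _ in range(n_fixations):
        blurred = gaze_contingent_blur(image, fix_x)
        saliency = luminance_contrast_saliency(blurred)
        fix_x = select_saccade_target(saliency, fix_x, inhibition)
        # Inhibition of return: suppress the newly fixated region.
        inhibition *= 1.0 - np.exp(-((cols - fix_x) ** 2) / (2 * ior_sigma ** 2))[None, :]
        scanpath.append(fix_x)
    return scanpath

Given a grayscale sentence image, read_sentence returns a sequence of horizontal fixation positions that could, in principle, be compared qualitatively against the word-skipping, landing-position, refixation, and regression measures listed above.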