Abstract
Models of saccade programming in the superior colliculus (SC) have been limited in the inputs they can accept, typically isolated coordinates in visual space (e.g., Ottes et al., 1986, Vision Research [1]). We addressed this problem by integrating [1] with an image-based model of visual search (Zelinsky, 2008, Psychological Review). The new model takes an image as input, which it correlates with target features to create a map of target evidence in visual space. Using the equations from [1], it then projects this distribution of visual activity onto a map of the collicular surface, where each neuron's activity is a Gaussian-weighted sum of its inputs. We then compute across the SC a Gaussian-weighted average of the population activity within an averaging window. The location of the maximum averaged activation determines the landing position of the saccade in visual space, following the inverse transformation from [1]. We tested this model against human behavioral data from a saccade-targeting task (Casteau & Vitu, 2009, ECEM15; 2011, ECEM16 [2]) in which the separation between the target and a less eccentric distractor, displayed at variable eccentricities, was systematically varied. These data showed that the likelihood of saccades landing at intermediate target/distractor locations decreased as inter-stimulus distance increased, and that this averaging distance increased with eccentricity in visual space but remained relatively constant when expressed in collicular space. By varying the sigma of our model's averaging window, we were able to fit the data from [2], reproducing their evidence for saccade averaging at small target/distractor separations, a breakdown of averaging at large target/distractor separations (saccades directed to one or the other item), and an interaction between inter-stimulus distance and item eccentricity. This demonstrates the efficacy of our model, but its main contribution is that predictions of saccade programming, and of the collicular activation underlying this programming, can now be made for arbitrarily complex objects and scenes.
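For concreteness, a minimal Python sketch of the pipeline described above follows. The constants A = 3.0 deg, Bu = 1.4 mm, and Bv = 1.8 mm are the standard values of the anisotropic log-polar mapping reported in [1]; everything else (the grid resolution, the width of the point-image blobs, the sigma of the averaging window, and the function names) is an illustrative assumption, not the authors' implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

# Anisotropic log-polar mapping constants from Ottes et al. (1986):
# A in deg of visual angle, Bu and Bv in mm of collicular surface.
A, BU, BV = 3.0, 1.4, 1.8

def visual_to_collicular(R, phi):
    """Map eccentricity R (deg) and direction phi (rad) to SC coordinates (u, v) in mm."""
    u = BU * np.log(np.sqrt(R**2 + 2.0 * A * R * np.cos(phi) + A**2) / A)
    v = BV * np.arctan2(R * np.sin(phi), R * np.cos(phi) + A)
    return u, v

def collicular_to_visual(u, v):
    """Inverse mapping: SC coordinates (u, v) in mm back to (R, phi) in visual space."""
    e = np.exp(u / BU)
    R = A * np.sqrt(e**2 - 2.0 * e * np.cos(v / BV) + 1.0)
    phi = np.arctan2(e * np.sin(v / BV), e * np.cos(v / BV) - 1.0)
    return R, phi

def predicted_landing_position(sc_activity, u_grid, v_grid, sigma_mm):
    """Gaussian-weighted average of the SC population activity within an
    averaging window of width sigma_mm; the location of the maximum of the
    averaged map, passed through the inverse mapping, gives the predicted
    saccade endpoint (R in deg, phi in rad)."""
    du = abs(u_grid[0, 1] - u_grid[0, 0])              # grid spacing in mm
    averaged = gaussian_filter(sc_activity, sigma=sigma_mm / du)
    i, j = np.unravel_index(np.argmax(averaged), averaged.shape)
    return collicular_to_visual(u_grid[i, j], v_grid[i, j])

# Toy usage (assumed configuration): a target at 6 deg and a less-eccentric
# distractor at 4 deg on the horizontal meridian. For point-like inputs, the
# Gaussian-weighted sum of inputs reduces to a Gaussian blob around each
# item's collicular projection.
u_grid, v_grid = np.meshgrid(np.linspace(0.0, 5.0, 201),
                             np.linspace(-2.5, 2.5, 201))
ut, vt = visual_to_collicular(6.0, 0.0)
ud, vd = visual_to_collicular(4.0, 0.0)
blob = lambda u0, v0: np.exp(-((u_grid - u0)**2 + (v_grid - v0)**2) / (2 * 0.3**2))
sc_activity = blob(ut, vt) + blob(ud, vd)
R_land, phi_land = predicted_landing_position(sc_activity, u_grid, v_grid, sigma_mm=0.6)

In this toy configuration the peak of the averaged map falls between the two collicular projections, so the inverse mapping returns an intermediate (averaged) landing position; with a sufficiently wide separation or a smaller sigma_mm the averaged map becomes bimodal and the predicted endpoint falls on one item or the other, qualitatively matching the breakdown of averaging described above.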
Meeting abstract presented at VSS 2014