Abstract
At the recent Psychonomics meeting, I introduced a computational model capable of describing the spatial coordinates of eye movements made during visual search. This model uses filter-based image processing techniques to represent real-world targets and search displays, then compares these representations to derive a search saliency map. The target of a simulated saccade is determined by the weighted centroid of activity on this map, with this centroid changing over time as a moving threshold removes those saliency map points offering the least evidence for the target. As the moving threshold prunes points from the saliency map, a sequence of eye movements is produced that brings simulated gaze to the map's "hotspot". Saccade programming is further constrained by a simulated fovea that retinally transforms the search display as the model's "eye" converges on the target. The model terminates with a target-present response if any point on the saliency map exceeds a target-present threshold. The model terminates with a target-absent response if all saliency map activity is removed by the moving threshold without a target being detected. Current work extends this model to include an account of individual fixation durations. Temporal dynamics are produced by preventing the model from making an eye movement until a criterion distance is reached between the current fixation point and the centroid. Fixation duration is defined by the number of threshold movements needed to achieve this criterion distance. I test these model assumptions by monitoring the eye movements of observers viewing the same search displays that are input to the model, then comparing the simulated sequences of saccades and fixations to the behavioral data. Preliminary findings reveal considerable spatio-temporal agreement between these gaze patterns, both at an aggregate level (e.g., general tradeoffs between saccade latency and accuracy) and in the behavior of individual observers.
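To make the moving-threshold centroid dynamics concrete, the Python sketch below gives one possible reading of the procedure described above. The function name simulate_search, the parameter names (present_threshold, step, criterion_distance), and all numeric defaults are illustrative assumptions rather than the model's actual implementation; the sketch simply prunes the weakest saliency points, recomputes the weighted centroid, triggers a saccade once the centroid has moved a criterion distance from the current fixation, counts the number of threshold movements as the fixation duration, and applies the target-present and target-absent stopping rules.

```python
import numpy as np

def simulate_search(saliency_map, start_fixation, present_threshold=0.95,
                    step=0.01, criterion_distance=2.0, max_steps=10000):
    """Sketch of moving-threshold centroid dynamics (illustrative parameters only)."""
    ys, xs = np.indices(saliency_map.shape)
    active = saliency_map.astype(float).copy()   # points still contributing evidence
    threshold = float(active.min())              # moving threshold starts at the weakest point
    fixation = np.asarray(start_fixation, dtype=float)
    scanpath = [tuple(fixation)]
    durations = []
    ticks = 0                                    # threshold movements since the last saccade

    for _ in range(max_steps):
        # Target-present decision: some point exceeds the detection threshold.
        if active.max() >= present_threshold:
            return scanpath, durations, "present"

        # Prune the points offering the least evidence, then raise the threshold.
        active[active < threshold] = 0.0
        threshold += step
        ticks += 1

        if active.sum() == 0:
            # All activity removed without detection: target-absent response.
            return scanpath, durations, "absent"

        # Weighted centroid of the remaining saliency activity.
        centroid = np.array([(ys * active).sum() / active.sum(),
                             (xs * active).sum() / active.sum()])

        # A saccade is programmed only once the centroid lies a criterion
        # distance from fixation; the tick count defines fixation duration.
        if np.linalg.norm(centroid - fixation) >= criterion_distance:
            fixation = centroid
            scanpath.append(tuple(fixation))
            durations.append(ticks)
            ticks = 0
            # (The retinal transformation by the simulated fovea would be
            #  re-applied to the display here; omitted in this sketch.)

    return scanpath, durations, "timeout"
```

Under these assumptions, slower-rising thresholds (smaller step values) yield more threshold movements per saccade and hence longer simulated fixation durations, which is the kind of latency-accuracy tradeoff the abstract compares against the behavioral data.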