Fixation patterns in visual search tasks are complex and depend on the information collected across the visual field during search, as well as on the observer's prior knowledge of the task and stimuli (for a review, see Findlay & Gilchrist,
2003). For example, (1) if a target is highly visible, then observers tend to make a saccade directly toward it (Eckstein, Beutter, & Stone,
2001; Findlay,
1997), but under more difficult conditions, observers may fixate some average location within a group of possible target locations (Findlay,
1997; He & Kowler,
1989; Zelinsky, Rao, Hayhoe, & Ballard,
1997); (2) the duration of fixations during visual search tends to increase as the discriminability of the target from the background decreases (Hooge & Erkelens,
1999; Jacobs & O'Regan,
1987); (3) the “classification image” technique (Beard & Ahumada,
1998) applied to visual search for targets in noise shows that the eye is attracted (at least some of the time) to features in the noise that match features of the target (Rajashekar, Cormack, & Bovik,
2002). The clear implication of these and other studies (e.g., Engel,
1977; Geisler & Chou,
1995; Motter & Belky,
1998; Zelinsky,
1996) is that if we are to understand multiple-fixation visual search, then we need to understand what information is being extracted from the periphery during search and how it is being used to guide the sequence of eye movements.
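The classification-image logic behind point (3) can be illustrated with a minimal simulation. The sketch below assumes a linear template-matching observer and a 1-D "target" profile purely for illustration; the cited studies used 2-D image noise and real eye-movement data, and none of the specific values here come from them. The idea is the same: average the noise samples that attracted a response minus those that did not, and the difference image recovers the features the observer was matching.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D target profile; real studies used 2-D targets in image noise.
target = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
n_trials = 20000
sigma = 2.0  # standard deviation of the external noise

# Target-absent trials: the stimulus is pure noise.
noise = rng.normal(0.0, sigma, size=(n_trials, target.size))

# Simulated observer: "fixates" a noise sample when its match to the
# internal template exceeds a criterion (here, zero).
score = noise @ target
fixated = score > 0

# Classification image: mean noise on fixated trials minus mean noise on
# non-fixated trials. For a linear observer it converges to a scaled copy
# of the template, revealing which noise features attracted the eye.
cimg = noise[fixated].mean(axis=0) - noise[~fixated].mean(axis=0)
```

With enough trials, `cimg` correlates strongly with `target`, which is how such analyses infer that fixations are drawn to target-like noise features.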