Eye movements play a central role in the efficient completion of almost all goal-directed human activity. When performing everyday tasks, such as making a cup of tea, the eyes are directed sequentially to individual objects (the kettle, faucet, teabags, and so on) immediately before each object is needed (Land & Hayhoe, 2001); task performance depends on a series of visual search operations, each triggered when a new object becomes relevant. This observed coupling of eye movements to task structure demonstrates that oculomotor selection is subject to strong top-down control (Malcolm & Henderson, 2010; Yarbus, 1967) and cannot be driven solely by low-level stimulus salience (Itti & Koch, 2000). Several forms of top-down guidance have been identified (for a review, see Hollingworth, 2012a), including knowledge of the typical locations of objects in scenes (Henderson, Weeks, & Hollingworth, 1999; Neider & Zelinsky, 2006; Torralba, Oliva, Castelhano, & Henderson, 2006), memory for the particular environment in which the search occurs (Brockmole, Castelhano, & Henderson, 2006; Castelhano & Henderson, 2007; Chun & Jiang, 1998; Hollingworth, 2009, 2012b; Võ & Wolfe, 2013), and memory for the visual properties of the currently relevant object, allowing the formation of a target template (Bravo & Farid, 2009; Malcolm & Henderson, 2009; Vickery, King, & Jiang, 2005; Wolfe, Horowitz, Kenner, Hyle, & Vasan, 2004; Yang & Zelinsky, 2009).