Full comprehension of a complex visual scene requires scanning it with multiple eye movements, which allow extraction of behaviorally important details using the high-resolution fovea. In his seminal study, Yarbus showed that observers direct their gaze towards the most important aspects of a visual scene, and that these fixation targets change according to task demands (Yarbus, 1967). This type of behavior makes sense from an evolutionary perspective: for example, it allows better assessment of the condition of potential prey or predators.
Both explicit and implicit knowledge could be used by the gaze control mechanisms. But while we can clearly direct our gaze according to an explicit instruction, we are generally unaware that we shift our gaze about three times a second. It is therefore reasonable to assume that gaze selection typically relies on implicit rather than explicit representations of the visual world, especially since an online explicit visual representation of the world is likely to be fragmentary and short-lived (Chun & Nakayama, 2000; Hayhoe, Shrivastava, Mruczek, & Pelz, 2003; Neisser, 1967). Surprisingly little is known about the nature of these implicit representations. Several studies of visual search have proposed a set of “implicit memory” mechanisms suggested to guide attention and ensure its efficient deployment (Chun & Nakayama,
2000). Such mechanisms are not necessarily under conscious control, nor does the observer need explicit access to the underlying content of the visual representations. For example, a study of priming effects in pop-out search showed that presentation of a target in one trial automatically draws attention towards its features in the following trial, without effortful, conscious decision making (Maljkovic & Nakayama,
1994,
1996). Other studies have shown that implicit information about the layout of prior scenes may also guide attention, an effect termed contextual cueing (Chun & Jiang,
1998). Finally, it has been demonstrated that a briefly presented masked word (matching the target object) facilitates later change detection of the same target (Walter & Dassonville,
2005).
These studies, however, provided only indirect evidence for the guidance of attention, as they measured only manual response times. Here, we exploit the fact that under natural viewing conditions eye movements and attention are tightly coupled (Deubel & Schneider, 1996; Henderson, 2003); gaze position is therefore a more direct measure of the deployment of attention.
We designed an experiment in which eye movements were measured during a “change detection” search task. Other studies have used similar designs to investigate the relation between exact eye position and successful detection (Henderson, Brockmole, & Gajewski,
2008; Hollingworth & Henderson,
2002; Hollingworth, Schrock, & Henderson,
2001; Hollingworth, Williams, & Henderson,
2001). In contrast to these studies, our goals were twofold: first, to assess the degree to which the scanning pattern prior to detection is influenced by implicit priming; and second, to determine which levels of representation are accessible to the gaze control processes.