Visual search typically involves attention (and gaze) being serially
guided to locations having the greatest match between the visual features from the search display and the top-down features from a target template (the features of the search target held in memory after they are either provided by a cue or preview or learned throughout the experiment; e.g., Wolfe,
1994). When filling in makes distractors look more like the target object, participants will be more likely to attend to those distractors, thus slowing target detection. In general, search is fast when distractors are dissimilar to the target, and slower and less guided when they are similar to it (Alexander & Zelinsky,
2011,
2012; Duncan & Humphreys,
1989; Treisman,
1991). These effects are graded, with better matches between the target template and the target as it appears in the search display resulting in targets being more quickly fixated and more likely to be the first objects fixated (Schmidt & Zelinsky,
2009; Vickery, King, & Jiang,
2005; Wolfe & Horowitz,
2004). Because these better matches—and the resulting improvements in the direction of eye movements to the target—are thought to be the result of a stronger guidance signal (Wolfe,
1994; Wolfe, Cave, & Franzel,
1989), for the remainder of this article we will refer to two measures as reflecting increases in the strength of the guidance signal: increases in the rate at which the target is the first object fixated, and faster times to fixate the target.
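For concreteness, these two measures can be computed from a trial's time-ordered fixation sequence roughly as follows. This is a minimal sketch in Python; the data format and the helper names (first_fixated_rate, mean_time_to_fixate) are hypothetical illustrations, not the scoring routines of any particular eye-tracking pipeline.

    # Sketch: the two guidance-strength measures, from time-ordered fixations.
    # Each trial is assumed to be a list of (object_id, onset_ms) pairs giving
    # which display object each fixation landed on and when (hypothetical format).

    def first_fixated_rate(trials, target_id="target"):
        # Proportion of trials on which the target is the first object fixated.
        firsts = [trial[0][0] == target_id for trial in trials if trial]
        return sum(firsts) / len(firsts)

    def mean_time_to_fixate(trials, target_id="target"):
        # Mean onset (ms) of the first fixation that lands on the target.
        times = []
        for trial in trials:
            for object_id, onset_ms in trial:
                if object_id == target_id:
                    times.append(onset_ms)
                    break
        return sum(times) / len(times) if times else None

    trials = [[("distractor_3", 180), ("target", 420)],  # target fixated second
              [("target", 230)]]                         # target fixated first
    print(first_fixated_rate(trials))   # 0.5
    print(mean_time_to_fixate(trials))  # 325.0

On this scoring, a stronger guidance signal appears as a higher first-fixation rate and a shorter mean time to fixate the target.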
Surprisingly little is known about the target template, despite decades of research. Which features can guide search remains underspecified (Wolfe & Horowitz,
2004,
2017), and it is still unclear precisely how those features are then consolidated into a target template. The target template is thought to reside in visual working memory (Woodman, Luck, & Schall,
2007), where load is known to affect amodal completion (H. Lee & Vecera,
2005), which suggests that target templates might use at least some filled-in information. To the extent that the target template includes restored features, templates are formed at least partly from higher level information rather than just the pixels presented, and they include features beyond the basic visual features that have traditionally been considered to form the template.
In two experiments, the present study examines whether restoration occurs during search tasks, both at preview and in parallel across visual search displays. In doing so, we extend the literature relating amodal completion and visual search to a different, higher level filling-in mechanism, and we extend the stimulus class previously used to study filling-in processes in visual search contexts to real-world objects. We rule out possible roles of other filling-in processes in our data in Experiment 1 by placing occluders so that they cover half of each object, with no part of the object extending across the occluder, and in Experiment 2 by removing half of the object rather than using a visible occluder. In both cases, there are no fragments that can be linked across occluders, so low-level amodal completion should not engage (as in Figure 1D). These cases do, however, create a situation in which higher level restoration mechanisms could supply valuable information to the visual system and are therefore likely to be used.
In the present work, we measured completion in terms of how directly gaze was guided to targets that appeared either occluded or unoccluded in the search display. We chose this approach, rather than creating displays where completion could make the target more similar to distractors, because it allows the same distractors to be counterbalanced across conditions (removing item-specific confounds that different distractors could create) and provides a direct measure of guidance (eye movements) rather than relying solely on inferential response-time measures. There is substantial evidence that attention and eye movements are actively directed or guided toward objects that are more similar to target objects (for reviews on guidance of, respectively, eye movements and attention, see Chen & Zelinsky,
2006; Wolfe,
1994).
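To make the similarity-based guidance idea concrete: under the simplifying assumption that each object can be summarized as a feature vector and that match is measured by cosine similarity (our assumptions for illustration, not the actual metric of the Guided Search model), the predicted fixation order falls out of ranking display items by their match to the template.

    import math

    # Sketch: guidance as template-to-item similarity (illustrative only).
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norms if norms else 0.0

    def predicted_fixation_order(template, display_items):
        # Rank display objects by template match: better match, earlier fixation.
        scores = {name: cosine(template, feats)
                  for name, feats in display_items.items()}
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical feature vectors: a filled-in template that happens to resemble
    # a distractor more than the (occluded) target pulls gaze to that distractor.
    template = [0.9, 0.1, 0.5]
    display = {"target": [0.8, 0.2, 0.5],
               "distractor_A": [0.9, 0.1, 0.6],
               "distractor_B": [0.1, 0.9, 0.2]}
    print(predicted_fixation_order(template, display))
    # ['distractor_A', 'target', 'distractor_B']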
Unlike previous studies of occlusion during visual search tasks, in which participants searched for the same target across blocks of trials (He & Nakayama,
1992; Rauschenberger & Yantis,
2001; Rensink & Enns,
1998; Wolfe et al.,
2011), the present study manipulated occlusion of the target at two phases of the search task: during the preview and during the search display. Occlusion during the preview, at the time of target encoding, has not previously been manipulated independently of occlusion during the actual search display. This manipulation may provide insight into the character of the target template while simultaneously avoiding a potential confound. Specifically, if the preview is not explicitly manipulated, it is unclear what representation participants are using. If the task is to search for a square occluded by a circle with no spatial separation, what would participants search for? They might use the unique feature (the notch), which would speed search, or they might use a filled-in target template (a square behind a circle), which would result in inefficient search because of similarity to the distractors. The resulting inefficient search would be due not to preattentive filling in (or to any completion in the search display) but to the use of a different target definition. The present design both avoids this potential confound and, importantly, tests whether restored features can inform the target template. Do participants use a restored description of a target when it is occluded at preview?
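In sketch form, crossing the two manipulations yields the four conditions of interest (the condition labels below are ours, and a full 2 x 2 crossing of preview and search-display occlusion is assumed from the description above):

    from itertools import product

    # Sketch: occlusion manipulated independently at two phases of the task.
    PHASES = {"preview": ("unoccluded", "occluded"),
              "search_display": ("unoccluded", "occluded")}

    conditions = [dict(zip(PHASES, combo)) for combo in product(*PHASES.values())]
    for condition in conditions:
        print(condition)
    # Four conditions; the same distractors can be counterbalanced across them,
    # so any guidance differences reflect the target's occlusion status.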