Abstract
Search studies typically present displays immediately following a target cue, but real-world search often occurs after long delays between target designation and the start of search. How does search guidance change with this delay? We hypothesized that guidance to pictorially previewed targets may decrease over time as details fade from visual WM, whereas guidance to semantically-defined targets may increase with delay as subjects build a more detailed target representation. We presented either a pictorial or a semantically-defined target cue (e.g., a picture of a green apple or the text “Green Apple”), followed by a 0ms, 600ms, 3000ms, or 9000ms delay period, then a search display depicting five realistic objects. Consistent with previous work (Schmidt & Zelinsky, 2007), we found overall greater guidance to pictorial than to semantically-defined targets. Unexpectedly, however, we found stronger guidance in the delay conditions than in the no-delay condition for both pictorial and semantically-defined targets. Specifically, subjects in the delay conditions fixated the target sooner (565ms vs. 609ms), required fewer fixations to reach the target (3.41 vs. 3.53), and made a greater percentage of their initial saccades to the target (46% vs. 38%). These initial targeting saccades also had shorter latencies (184ms vs. 216ms), suggesting a genuine benefit rather than a speed-accuracy tradeoff. We interpret these data as suggesting that a consolidation period is needed to form a visual WM representation capable of mediating search guidance. For pictorial cues, this may involve extracting the most salient features from the pictorial preview, thereby creating a more compact representation that can reside in visual WM. For semantically-defined targets, a visual WM representation would have to be constructed from the information provided in the text cue.
We speculate that both processes occur most efficiently in the absence of newly arriving visual information, which would explain why a delay following target designation benefits search guidance.