Abstract
Introduction: A growing literature shows how contextual cues guide and facilitate visual search (Chun & Jiang, 1998; Chen & Zelinsky, 2006; Eckstein et al., 2006; Torralba et al., 2006). However, all of these studies used 2-D images and a limited field of view. Here, we investigate the effects of contextual cues on search times and eye movements in a real 3-D scene.

Methods: Observers were instructed to search for low-visibility objects (e.g., a straw, a knife) placed on one of four elongated tables. Additional distractor objects cluttered the tables to increase the difficulty of the task. For each observer, half of the target objects were placed next to highly visible contextual cues (contextual condition; e.g., the straw next to a red cup, the knife next to a plate), while the other half were placed at other table locations surrounded by unrelated items (non-contextual condition). Retinal eccentricity and local salience of the target against the background were matched for each object across conditions. Eye movements were recorded with an Applied Science Laboratories (ASL) mobile eye tracker that sampled the position of the right eye at an effective rate of 30 Hz.

Results: Mean search times to fixate the target were shorter when the object co-occurred with a highly visible contextual cue than when it appeared elsewhere (1.97 vs. 3.8 seconds, p ...).

Conclusions: The results extend previous work with 2-D images, showing that contextual cues also aid search in a more ecologically valid 3-D environment.