Abstract
Contextual cueing experiments have shown that subjects learn the relationship between a target object and its context when the context is predictive of target location. Implicit learning of context is thought to facilitate search for objects embedded in complex displays. However, it is unclear whether cueing effects generalize to natural situations. We therefore performed a visual search task in an immersive virtual apartment with two rooms linked by a corridor. Participants searched for a series of geometric target objects while their eye movements were recorded. On each trial an object was presented on a screen placed in the corridor, and participants explored the two rooms until they located it. Context was manipulated by presenting three kinds of target objects. To evaluate the role of global context, we compared Stable objects (which always appeared at the same location) with Random objects (which appeared at a new location on each trial). To examine the role of local context, we presented Paired objects in close proximity: one object of the pair was the target during the first phase of the experiment, while the other became the target later on. Objects that were not current search targets were invisible, to avoid incidental learning of future targets. We found that search time and the number of fixations needed to locate the target decreased with repeated search episodes for all objects, but more so for Stable objects, indicating that memory for the spatial location of the object is more important than memory for global context. However, for the Paired objects there was little advantage of prior experience in locating the neighboring object, which suggests that local context is not encoded in memory during earlier search trials. Thus, the role of contextual cueing in search in naturalistic environments appears to be weak relative to the role of spatial memory for previous search targets.
Meeting abstract presented at VSS 2014