Abstract
We often search for co-occurring objects in a specified relationship, such as when we are told that X is above Y when shopping at a grocery store. In this situation, do we search for: just X, Y then X, X and Y simultaneously, X and Y and a vertical orientation, or specifically X above Y? The goal of this study was to determine how relational information is combined with pictorial information to generate a more effective target template. Search displays consisted of six pairs of real-world objects (three oriented horizontally and three vertically); the subject's task was to find a specific object pair. Target cues consisted of one or both objects from the target pair, presented either as they would appear in the search display, flipped across the pair's horizontal or vertical axis, or in a different orientation. These pictorial cues were accompanied by varying degrees of relational information, thereby creating the potential for subjects to mentally rearrange the pictorial information from the cue into a more accurate guiding representation. The relational manipulations included: a no-information condition (subjects just saw the target pictures), an orientation condition (indicating whether the target pair would appear horizontally or vertically oriented), an exact alignment condition (indicating the orientation plus the left/right/top/bottom alignment of one of the objects), and an identical condition (in which the pictures from the target cue exactly matched their appearance in the search display). We found stronger search guidance when both objects from the target pair were previewed compared to only one, and guidance generally increased with the level of relational specificity provided about the target objects. These patterns suggest that a target's guiding representation is more elaborate than just a picture; when available, spatial relationships between objects can be used to refine this representation and improve search guidance.
This work was supported by NIMH grant 2 RO1 MH063748.