Abstract
We search for objects in environments that provide multiple cues to guide our attention. Contextual information facilitates object recognition and guides eye movement behavior (Biederman, 1972; Loftus & Mackworth, 1978; but see Henderson & Hollingworth, 1999). Current research either does not precisely define different types of contextual information (Greene, 2013) or focuses on a single type of context (e.g., object co-occurrence: Mack & Eckstein, 2011; scene gist: Torralba et al., 2006). In this work we define three types of contextual information (object co-occurrence, multiple object configuration, and background category) and assess their independent contributions to eye movements and behavioral performance during visual search. Eye-tracked participants (n = 160) completed a yes/no task after searching for a target object in 48 realistic, computer-rendered scenes that contained all, none, or any combination of the three types of contextual information. Retinal eccentricity and local contrast of the target and background were controlled across conditions. The type of contextual information had a significant effect on the detectability of the target (F = 9.1, p < .001) and on the distance of the closest fixation to the target location (F = 46.24, p < .001). Object co-occurrence and multiple object configuration each contributed to these performance effects (p < .05 vs. the no-context condition), whereas background category did not affect performance in any comparison (p > .05). Analysis of the difference in sensitivity (d’) between conditions with individual and combined contextual information failed to reject the hypothesis that the independent contributions of each type of context are additive. Our results suggest that multiple types of contextual information contribute independently to visual search performance and that they may do so additively. This suggests that future research will benefit from a taxonomy of scene context and from more precise definitions of contextual information.
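Here sensitivity is the standard signal detection measure, d’ = z(hit rate) − z(false-alarm rate). One plausible reading of the additivity hypothesis (the exact formalization below is illustrative and is not specified in the abstract) is that the sensitivity gain of the combined-context condition over the no-context condition equals the sum of the gains produced by each context type presented alone:

Δd’(combined) ≈ Δd’(object co-occurrence) + Δd’(multiple object configuration) + Δd’(background category)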
Meeting abstract presented at VSS 2015