Abstract
People have a remarkable ability to identify objects and scenes within a single glance. How automatic is this scene and object identification? Are scene and object semantics, let alone their semantic congruity, processed to a degree that modulates ongoing gaze behavior even when they are irrelevant to the current task? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than consistent controls. In Experiment 1, we overlaid a letter T on photographs of indoor scenes and instructed fourteen participants to search for it. Some of the background images contained scene-incongruent objects, but these provided no information about the target position. Despite their irrelevance to the search, participants looked at incongruent objects longer and more often than at congruent objects in the same position of the scene. In Experiment 2, we replicated these findings and, to better understand how aware participants had been of the objects, subsequently tested their memory for the critical objects in both a free-recall and a 2AFC task. Participants did not remember more incongruent than congruent objects, and their 2AFC performance for incongruent objects was at chance, suggesting that although their gaze had been "stuck" on these objects, observers retained no explicit memory of them. In Experiment 3, attempting to diminish this object-congruency effect, we overlaid a grid of search elements on the background images. Although the effects were no longer statistically significant, the same pattern of results emerged for each measure of gaze behavior. Based on these results, we argue that when we view natural scenes, scene and object identities are processed automatically. Moreover, even when irrelevant to our current search, a semantic mismatch between scene and object identity can modulate ongoing eye-movement behavior.
Meeting abstract presented at VSS 2016