Abstract
Real-world scenes follow certain rules, known as scene grammar, which allow for extremely efficient visual search. In the current work, we seek to understand what role objects, specifically anchor objects, play during visual search in 3D-rendered scenes. Anchors are typically large and diagnostic of the scene in which they are found. What distinguishes anchors from other objects, however, is the specific spatial information they carry about other objects. Our lab previously showed that participants have a precise notion of where objects belong relative to anchors but not relative to other objects (Boettcher & Vo, 2016). In two eye-tracking experiments we tested what role anchor objects occupy during visual search. In Experiment 1, participants searched scenes for an object cued at the beginning of each trial. Critically, in half of the scenes a target-relevant anchor was swapped for an irrelevant, albeit semantically consistent, anchor. The presence of the congruent anchor led to marginally faster reaction times and times to first fixation on the target. Additionally, participants covered significantly less of the scene when the anchor was present than when it was swapped. These marginal effects might underestimate the role of anchors, owing to the sheer speed of the search, which was driven in part by the guidance available from the physical features of the target. Therefore, in Experiment 2 participants were briefly shown a target-absent scene before the target cue, and search was then restricted to a gaze-contingent window. Participants were now significantly faster to respond, and the area of the scene they covered was significantly smaller on trials with congruent anchors than on trials with swapped anchors. Moreover, observers were marginally faster to fixate the target on anchor-present trials. Taken together, anchor objects seem to play a critical role in scene grammar, and specifically in executing it during visual search within scenes.
Meeting abstract presented at VSS 2017