Effie Pereira, Monica Castelhano; Guidance during Visual Search in Real-World Scenes: Scene Context vs. Object Content. Journal of Vision 2011;11(11):1320. doi: 10.1167/11.11.1320.
A number of past studies have shown that when searching for an object in a scene, eye movements are guided towards the target based on scene context, selecting the most relevant areas. Other studies have shown that fixations are directed to high spatial frequency information, corresponding to objects in the scene (van Diepen & Wampers, 1998). In a previous study, Castelhano and Henderson (2007) showed that when no immediate visual information is available (via a moving-window paradigm), scene context can dominate search strategies. In the present study, we examined whether search strategies are equally affected when information regarding scene context and the placement of object content is immediately available and juxtaposed. Participants searched for a target using a gaze-contingent moving-window paradigm. The original search scene was shown foveally (inside the window), while the scene information was manipulated extra-foveally across four conditions: (1) Full Scene: the search scene excluding the target; (2) Empty Scene: the search scene with all objects removed; (3) Fractioned Scene: the search scene with only a small number of objects; and (4) No Scene: a black-screen control. Thus, the Empty Scene provided scene context information alone, while the Fractioned Scene provided additional information about object content that did not overlap with the target. While results showed that search was best in the Full Scene condition and worst in the No Scene condition, across a number of eye movement measures we found no difference between the Fractioned and Empty Scene searches. This pattern was seen in the latency to first target fixation, the number of fixations before the first target fixation, and reaction time. The picture that emerges seems to support previous studies indicating that scene context information may be more useful than object-based features in guiding eye movements during search.