Richard Yao, Daniel J. Simons, John E. Hummel; The Scene Superiority Effect. Journal of Vision 2010;10(7):1264. doi: https://doi.org/10.1167/10.7.1264.
In the word superiority effect, two letters are easier to discriminate when presented in the context of a real word, even when the rest of the word is not predictive of the target letter. For instance, people can better discriminate “word” from “work” than they can discriminate “d” from “k.” The effect disappears when the letters appearing with the target form a non-word letter string (e.g., “orwk” and “orwd”). We explored whether this context effect for letters in words would generalize to objects in scenes. Subjects identified rapidly presented objects drawn from a single semantic category (i.e., “offices”). We used an adaptive staircase algorithm (QUEST) to set object detectability at 40% accuracy when objects were viewed against a phase-scrambled scene background. Subjects then performed the detection task with objects superimposed on scene backgrounds that varied in semantic consistency (offices or beaches) and orientation (upright or inverted). As in the word superiority effect paradigm, the background was irrelevant to the object detection task and was not predictive of which object appeared on any given trial. Consistent with the word superiority effect, subjects identified target objects more accurately when the objects were displayed on semantically consistent backgrounds. Consistent with subjects' reports that they were able to ignore the scene entirely as the experiment progressed, the effect disappeared after approximately 100 trials. Together, these results suggest that scene context can facilitate object identification, but only when the scene semantics are processed.