Abstract
Objects hardly ever appear in isolation; they are usually embedded in a larger scene context. This context, determined for example by the co-occurrence of other objects or by the semantics of the scene as a whole, has a large impact on the processing of each and every object. Here I will present a series of eye-tracking and EEG studies from our lab that 1) make use of the known time course and neuronal signature of scene-semantic processing to test whether seemingly meaningless scene textures are sufficient to modulate semantic object processing, and 2) raise the question of whether object-scene integration occurs automatically. For instance, we have previously shown that semantically inconsistent objects trigger an N400 ERP response similar to the one known from language processing. Moreover, an additional but earlier N300 response signals perceptual processing difficulties, in line with classic findings of impeded object identification from the 1980s. We have since used this neuronal signature to investigate scene context effects on object processing and recently found that a scene's mere summary statistics, visualized as seemingly meaningless textures, elicit a very similar N400 response. Further, we have shown that observers searching for target letters superimposed on scenes fixated task-irrelevant, semantically inconsistent objects embedded in those scenes to a greater degree than their consistent counterparts, and did so without explicit memory for these objects. Manipulating the number of superimposed letters reduced this effect but did not eliminate it. As part of this symposium, we will discuss the implications of these findings for the question of whether object-scene integration requires attention.
Meeting abstract presented at VSS 2017