Melissa Võ, Tim Cornelissen, Sabine Oehlschlaeger; When scenes and words collide: Irrelevant background scenes modulate neural responses during lexical decisions. Journal of Vision 2015;15(12):574. doi: 10.1167/15.12.574.
Linguistic and visual-perceptual operations are usually studied separately in domain-specific experimental paradigms. However, there is evidence that language and image processing interact behaviorally even in a purely linguistic task (Võ & Wolfe, VSS 2014). Here we analyzed EEG signals to test whether brain responses during lexical decisions are modulated by an irrelevant visual background scene. Participants were presented with a background scene and a location cue before a string of letters appeared at the pre-cued scene location. The sole task was to decide whether the letter string formed a word or a non-word. Words could be congruent with the scene (‘SOAP’ on a sink), semantically incongruent (‘EGG’ on a sink), syntactically incongruent (‘SOAP’ on a towel rack, i.e., semantically congruent but in an improbable relative location), or doubly incongruent (‘EGG’ on a towel rack). Words that were semantically incongruent with the background scene triggered a negative deflection relative to congruent words about 400 ms after word onset. In the language domain, this N400 response is known to signal difficulties in the semantic integration of a word with its sentence context. Interestingly, this scene-word incongruity effect was more pronounced over left- than right-hemispheric electrodes, possibly reflecting the involvement of linguistic processes. Semantically congruent words presented at improbable scene locations, on the other hand, did not significantly affect brain responses: the syntactic placement of a word on an irrelevant background scene did not modulate neural responses to the extent that semantically incongruent words did. We conclude that a brief visual scene preview, even when task-irrelevant, automatically interacts with linguistic operations at least at the level of semantic processing.
Therefore, language and visual scene processing may share common parsing mechanisms that are efficiently integrated to function as a unitary whole.
Meeting abstract presented at VSS 2015