Florian Goller, Soonja Choi, Ulrich Ansorge; Whereof one cannot speak: How language and capture of visual attention interact. Journal of Vision 2018;18(10):472. doi: 10.1167/18.10.472.
Using a contingent capture paradigm, we examined not only whether but also how deeply language influences visual perception. We tested native speakers of Korean and German, two languages that semantically categorize spatial relations in fundamentally different ways: German (similar to English) categorizes spatial relations based on containment (in) and support (auf), whereas Korean categorizes by, and thus semantically distinguishes between, tight fit (kkita) and loose fit (nehta, nohta). We investigated whether participants' native language makes them more or less sensitive to features of visual stimuli that resemble tight fit or loose fit. We let Korean and German speakers search for a predefined colour target among distractors. Unbeknownst to the participants, targets were also implicitly signalled by features from a different semantic domain, namely spatial relations of tight fit or loose fit. We found that only the Korean speakers spontaneously picked up on this implicit feature of spatial fit (tight or loose) and used it to aid their search for targets. As these spatial concepts are not grammaticalised in German, our results demonstrate an influence of the language-specific semantics of the native language on very basic processes of visual attention.
Meeting abstract presented at VSS 2018