September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
What we don’t see shapes what we see: peripheral word semantics gates visual awareness
Author Affiliations & Notes
  • Shao-Min (Sean) Hung
    Waseda Institute for Advanced Study, Waseda University
    Faculty of Science and Engineering, Waseda University
  • Sotaro Taniguchi
    Faculty of Science and Engineering, Waseda University
  • Akira Sarodo
    Faculty of Science and Engineering, Waseda University
  • Katsumi Watanabe
    Faculty of Science and Engineering, Waseda University
  • Footnotes
    Acknowledgements  This work was supported by a sub-award under the Aligning Consciousness Research with US Funding Mechanisms by Templeton World Charity Foundation (TWCF: 0495) and by Waseda University Grants for Special Research Projects.
Journal of Vision September 2024, Vol.24, 276. doi:https://doi.org/10.1167/jov.24.10.276
Abstract

Empirical findings in vision science indicate that language constrains perception, most notably by showing a categorical benefit through which semantics help construct our visual experience. In the periphery, however, visual acuity drops dramatically, and extracting semantic information through word recognition becomes inevitably difficult. The current study addressed this issue directly by examining whether peripheral word semantics can influence vision. We leveraged a peripheral sound-induced flash illusion, in which the number of flashes perceived is often dominated by the number of auditory beeps delivered. In each trial, two or three Mandarin characters were flashed briefly from left to right in the periphery, accompanied by number-congruent or number-incongruent beeps. We first successfully replicated the original illusions: incongruent audiovisual presentations led to auditory dominance. For example, when three characters were presented together with two beeps, observers often reported perceiving only two characters; conversely, an additional beep induced an illusory visual percept. Crucially, we found that when the three characters formed a word, the lack of a concurrent beep (i.e., three characters with two beeps) suppressed awareness of an existing character to a greater extent. Intriguingly, participants' successful recognition of the word was not necessary: a separate experiment replicated the effect in participants who were unable to recognize the words, corroborating the implicit nature of the effect. When the conventional reading direction was disrupted by reversing the presentation order, the effect disappeared. Furthermore, using Japanese, a language with both logographic (kanji) and phonetic (hiragana and katakana) writing systems, we showed that the effect was specific to the logographic system.
These findings demonstrate the capacity of our visual system to extract peripheral semantic information without explicit word recognition, which in turn regulates our visual awareness.
