Vision Sciences Society Annual Meeting Abstract  |   August 2023
Do Audiovisual Semantic Congruency Effects Exist Without Visual Awareness?
Author Affiliations & Notes
  • Kun Zhou
    School of Information Science, Yunnan University, 650091 Kunming, China
  • Jan Drewes
    Institute of Brain and Psychological Science, Sichuan Normal University, 610066 Chengdu, China
  • Weina Zhu
    School of Information Science, Yunnan University, 650091 Kunming, China
  • Footnotes
    Acknowledgements: Supported by the National Natural Science Foundation of China (61263042, 61563056).
Journal of Vision August 2023, Vol.23, 4987. doi:https://doi.org/10.1167/jov.23.9.4987
Abstract

Humans can categorize visual pictures faster when they hear a sound that is semantically congruent with the picture (Chen & Spence, 2018). At the same time, numerous studies have found that some high-level visual processing occurs even without awareness. We investigated whether, and how, the cross-modal semantic congruency effects that naturalistic sounds and spoken words exert on the processing of visual pictures persist without visual awareness. To examine the time course and categorical specificity of these effects in the absence of awareness, auditory cues were presented at five stimulus onset asynchronies (SOAs: -1000, -750, -500, -250, and 0 ms) relative to the picture, and participants made speeded categorization judgments (living vs. nonliving) in two-alternative forced-choice (2AFC) and continuous flash suppression (CFS) paradigms. Sounds and pictures (e.g., a cat) formed four congruency relationships: congruent (cat sound), related (dog sound), incongruent (guitar sound), and white noise; a no-sound baseline was also included. In the aware condition, responses to congruent trials (838 ms) were faster than to related (878 ms), incongruent (880 ms), and white-noise (891 ms) trials. In the unaware condition, responses to congruent trials (2451 ms) were likewise faster than to related (2510 ms), incongruent (2532 ms), and white-noise (2548 ms) trials. In both the aware and unaware conditions, responses to naturalistic sounds (866 ms, 2503 ms) were faster than to spoken words (877 ms, 2514 ms), and the difference between congruent and incongruent trials showed the same tendency, although it was larger for spoken words (54 ms, 110 ms) than for naturalistic sounds (31 ms, 55 ms). There was no main effect of SOA. For both naturalistic sounds and spoken words, congruency effects were significant in almost all conditions, both with and without awareness. For naturalistic sounds, however, the congruency effect emerged at SOAs of -250 and 0 ms in the aware condition but already at an SOA of -1000 ms in the unaware condition, and the congruency x SOA interaction was significant. We conclude that the cross-modal semantic congruency effects found in the aware condition similarly exist without visual awareness.
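For readers who want the congruency-effect arithmetic made explicit, the short Python sketch below recomputes the incongruent-minus-congruent response-time differences from the mean RTs reported in the abstract. It is purely illustrative: the values are transcribed from this abstract, and the variable names are ours, not the authors' analysis code.

```python
# Illustrative sketch only; mean RTs in ms, as (aware, unaware) pairs,
# transcribed from the abstract above (not from the authors' data files).
mean_rt = {
    "congruent":   (838, 2451),
    "related":     (878, 2510),
    "incongruent": (880, 2532),
    "white_noise": (891, 2548),
}

# The congruency effect is the RT cost of an incongruent sound
# relative to a congruent one.
for i, condition in enumerate(["aware", "unaware"]):
    effect = mean_rt["incongruent"][i] - mean_rt["congruent"][i]
    print(f"{condition}: incongruent - congruent = {effect} ms")

# Expected output:
# aware: incongruent - congruent = 42 ms
# unaware: incongruent - congruent = 81 ms
```

The 42 ms (aware) and 81 ms (unaware) overall effects sit, as expected, between the sound-type-specific differences the abstract reports (31/54 ms aware, 55/110 ms unaware for naturalistic sounds and spoken words, respectively).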
