September 2019, Volume 19, Issue 10 | Open Access
Vision Sciences Society Annual Meeting Abstract
Auditory modulations on visual perception and metacognition
Author Affiliations & Notes
  • Da Li
    Department of Psychology, National Taiwan University
  • Yi-Chuan Chen
    Department of Medicine, Mackay Medical College
  • Su-Ling Yeh
    Department of Psychology, National Taiwan University
    Graduate Institute of Brain and Mind Sciences, National Taiwan University
    Neurobiology and Cognitive Science Center, National Taiwan University
    Center for Artificial Intelligence and Advanced Robotics, National Taiwan University
Journal of Vision September 2019, Vol.19, 273d. doi:https://doi.org/10.1167/19.10.273d
Citation: Da Li, Yi-Chuan Chen, Su-Ling Yeh; Auditory modulations on visual perception and metacognition. Journal of Vision 2019;19(10):273d. https://doi.org/10.1167/19.10.273d.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

The visual sensitivity (d’) to a picture can be enhanced crossmodally by the presentation of an auditory cue that is semantically congruent, rather than incongruent, with the picture. Nevertheless, whether the metacognitive sensitivity (meta-d’) of picture processing (i.e., the ability to discriminate whether one’s own perceptual judgment is correct) can be modulated by crossmodal semantic congruency remains unclear. We examined this issue by measuring d’ and meta-d’ in a picture detection task following an auditory cue; their quotient (meta-d’/d’, the M-ratio) serves as an index of metacognitive efficiency that controls for the influence of task difficulty. On each trial, either an object picture or a scrambled picture was presented briefly, sandwiched between two random-dot masks, and an auditory cue was presented before or simultaneously with the picture. Participants had to detect the presence of an object picture and then rate their confidence in the detection judgment. The auditory cue and the object picture were either congruent (e.g., a dog barking and a dog picture) or incongruent (e.g., a piano note and a dog picture). When a naturalistic sound was presented 350 ms before or simultaneously with the picture, and when a spoken word was presented 1000 ms before or simultaneously with the picture, both d’ and meta-d’ were higher in the congruent than in the incongruent condition. Interestingly, the M-ratio was higher in the congruent than in the incongruent condition only when the spoken word and the picture were presented simultaneously; no such difference was observed in the other three conditions. Hence, hearing a semantically congruent (as compared to incongruent) auditory cue can facilitate not only the d’ but also the meta-d’ of visual processing. Seeing an object while hearing its name is unique in that the visual processing is highly efficient from the perceptual up to the metacognitive level.
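As a minimal illustration of the measures named above: d’ is the standard signal-detection sensitivity index, z(hit rate) − z(false-alarm rate), and the M-ratio is simply meta-d’ divided by d’. The sketch below computes both; the hit/false-alarm rates and the meta-d’ value are made-up numbers for illustration only (in practice meta-d’ is fit from the confidence-rating data, not chosen by hand), and the function names are ours, not from the study.

```python
from statistics import NormalDist

def dprime(hit_rate: float, fa_rate: float) -> float:
    """Type-1 sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

def m_ratio(meta_d: float, d: float) -> float:
    """Metacognitive efficiency meta-d'/d'; controls for task difficulty."""
    return meta_d / d

# Hypothetical values, not the study's data:
d = dprime(hit_rate=0.80, fa_rate=0.30)  # from the detection responses
meta_d = 1.10                            # would be fit from confidence ratings
print(f"d' = {d:.2f}, M-ratio = {m_ratio(meta_d, d):.2f}")
```

An M-ratio of 1 indicates that confidence ratings track accuracy as well as the perceptual evidence allows; values below 1 indicate metacognitive inefficiency.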

Acknowledgement: Ministry of Science and Technology in Taiwan (MOST 107-2410-H-715-001-MY2 and MOST 107-2410-H-002-129-MY3) 