September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract
Cross-modal facilitation driven by metacognition
Author Affiliations
  • Derek Arnold
    Perception Lab, School of Psychology, The University of Queensland
  • Cailem Murray
    Perception Lab, School of Psychology, The University of Queensland
  • Alan Johnston
    Psychology, University College London
Journal of Vision September 2015, Vol.15, 715.

      Derek Arnold, Cailem Murray, Alan Johnston; Cross-modal facilitation driven by metacognition. Journal of Vision 2015;15(12):715.


      © ARVO (1962-2015); The Authors (2016-present)


Some properties can be signaled by multiple sensory modalities. This creates an opportunity to benefit, in terms of heightened sensitivity, from redundancy. Here we show, for audio-visual rate perception, that the benefits of cross-modal redundancy involve metacognition. People judged which of two intervals had contained a more rapidly changing stimulus, defined by luminance flicker, auditory flutter, or both. People then rated decisional confidence as high or low. Overall, people were more sensitive on audio-visual trials than on either auditory or visual trials. This advantage was not, however, apparent for trials involving equal levels of confidence. High-confidence audio-visual performance was equivalent to high-confidence performance with the best uni-modal signal for that participant. Low-confidence audio-visual performance was equivalent to low-confidence uni-modal performance, averaged across presentation modality. As performance and confidence were highly correlated, these data suggest cross-modal facilitation was based on metacognitive processes: accurate and reportable trial-by-trial estimates of the precision with which rate had been encoded in either modality. This would be advantageous overall, as cross-modal presentation would enhance the probability that a disproportionately precise rate estimate had been encoded in one of the two modalities. This advantage would be lost for comparisons of high-confidence trials, as audio-visual performance is compared against uni-modal trials marked by similarly high levels of confidence and performance. Cross-modal benefits would also be lost for comparisons of low-confidence trials, as low confidence signals that no precise estimate was encoded in either modality during audio-visual presentation. We suggest that cross-modal facilitation in other contexts will also involve metacognitive processes.
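The probabilistic argument above can be illustrated with a minimal simulation. The sketch below is not the authors' model; it assumes, for illustration only, that each modality's rate estimate on a given trial has a randomly varying noise level, and that in bimodal trials the observer relies on whichever modality happened to encode rate most precisely (the metacognitive reading of confidence as tracking precision). All parameter values (the rate difference, the noise range, the trial counts) are arbitrary assumptions.

```python
import random

def trial_correct(sigmas, delta=1.0):
    """One 2AFC trial: intervals differ in rate by `delta`.

    The observer uses the estimate from the modality with the best
    (lowest) encoding noise on this trial, per the assumed
    precision-selection account, and picks the interval whose
    estimate is larger.
    """
    s = min(sigmas)
    est_slow = random.gauss(0.0, s)
    est_fast = random.gauss(delta, s)
    return est_fast > est_slow

def accuracy(n_trials, n_modalities):
    """Proportion correct when `n_modalities` redundant signals are shown.

    Encoding precision fluctuates trial by trial: each modality's noise
    sigma is drawn uniformly from [0.5, 2.0] (an arbitrary range).
    """
    correct = 0
    for _ in range(n_trials):
        sigmas = [random.uniform(0.5, 2.0) for _ in range(n_modalities)]
        correct += trial_correct(sigmas)
    return correct / n_trials

random.seed(1)
uni = accuracy(20000, 1)  # uni-modal: one noisy estimate per trial
bi = accuracy(20000, 2)   # bimodal: best of two independent precisions
print(f"uni-modal accuracy: {uni:.3f}, bimodal accuracy: {bi:.3f}")
```

Because the minimum of two independently drawn noise levels is smaller on average than a single draw, bimodal accuracy exceeds uni-modal accuracy overall, even though no trial ever combines the two estimates, mirroring the abstract's point that an advantage can arise purely from a higher chance of at least one precise estimate.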

Meeting abstract presented at VSS 2015

