Vision Sciences Society Annual Meeting Abstract  |   September 2011
Multimodal integration for estimating event rates
Author Affiliations
  • Paul Schrater
    Departments of Psychology and Computer Science & Engineering, University of Minnesota, USA
  • Anne Churchland
    Cold Spring Harbor Laboratories, USA
Journal of Vision September 2011, Vol.11, 773. doi:
      Paul Schrater, Anne Churchland; Multimodal integration for estimating event rates. Journal of Vision 2011;11(11):773.

      © ARVO (1962-2015); The Authors (2016-present)

Separate lines of research have revealed that perceptual decisions about unreliable sensory information are driven by processes that integrate evidence across time or across modalities. Here we investigate the conditions under which subjects will integrate sensory information across both time and modalities. We presented subjects with multimodal event streams consisting of a series of noise-masked tones and/or flashes of light. Subjects judged whether the event rate was high or low. Combining across modalities could improve performance in two ways: by improving the detectability of congruent auditory and visual events, or, more abstractly, by combining rate estimates that are generated separately within each modality. Performance improved when stimuli were presented in both modalities (cue-combination condition) compared to when stimuli were presented in a single modality. Importantly, this improvement was evident both when the auditory and visual event streams were played synchronously and when they were played asynchronously. The enhancement of rate estimates we observed for asynchronous streams could not have resulted from improved detection of individual events, which argues strongly that subjects combined estimates of overall rates computed separately for auditory and visual inputs. Moreover, we show that subjects' performance agrees with a Bayesian statistical observer that optimally combines separate rate estimates for auditory and visual inputs.
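The Bayesian observer referenced above can be illustrated with the standard precision-weighted combination rule: if each modality produces a rate estimate with (approximately Gaussian) uncertainty, the optimal combined estimate weights each estimate by its inverse variance. The following sketch is illustrative only; the function name, variable names, and numeric values are assumptions, not the authors' implementation.

```python
def combine_rate_estimates(rate_a, var_a, rate_v, var_v):
    """Optimally combine auditory and visual rate estimates
    via inverse-variance (precision) weighting."""
    w_a = 1.0 / var_a  # precision (reliability) of the auditory estimate
    w_v = 1.0 / var_v  # precision of the visual estimate
    combined_rate = (w_a * rate_a + w_v * rate_v) / (w_a + w_v)
    combined_var = 1.0 / (w_a + w_v)  # never exceeds either input variance
    return combined_rate, combined_var

# Illustrative numbers: an auditory estimate of 10 events/s (variance 4)
# and a visual estimate of 12 events/s (variance 2). The combined estimate
# lies nearer the more reliable (visual) cue, with reduced variance.
rate, var = combine_rate_estimates(10.0, 4.0, 12.0, 2.0)
```

The key behavioral signature is that the combined variance is smaller than either single-modality variance, which predicts the improved performance in the cue-combination condition.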

