August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract
The effects of selective and divided attention on sensory integration
Author Affiliations
  • Brian Odegaard
    University of California‐Los Angeles, Department of Psychology
  • David R. Wozny
    Carnegie Mellon University, Department of Psychology
  • Ladan Shams
    University of California‐Los Angeles, Department of Psychology
Journal of Vision August 2012, Vol. 12(9), 658.
      Brian Odegaard, David R. Wozny, Ladan Shams; The effects of selective and divided attention on sensory integration. Journal of Vision 2012;12(9):658.


      © ARVO (1962-2015); The Authors (2016-present)


While top‐down attention and multisensory integration have been studied extensively as independent processes, little is known about how they interact, and how these interactions are affected by attentional load. We investigated this question in two experiments. In the first, participants localized auditory, visual, and audiovisual stimuli under single‐response conditions requiring attention to one sensory modality and dual‐response conditions requiring attention to two modalities. In the second, participants performed the same spatial localization task while attending either to one modality or to both, but a secondary detection task was added to every condition to equate attentional load across conditions. To analyze data from both experiments, we used a Bayesian causal inference model (Wozny et al., 2010) to computationally characterize the effects of selective and divided attention by comparing priors and likelihoods across attention conditions. Results from Experiment 1 indicate that selective attention to one modality yields more precise sensory representations than divided attention (to both modalities) in both the visual and auditory domains (demonstrated by reduced variance in the likelihood distributions), and leads to a stronger tendency to integrate stimuli across modalities (as evidenced by an increased prior bias toward perceiving a common cause). Results from the second experiment showed no significant differences between conditions in which attention was divided within versus across modalities. If each modality had its own separate attentional resources, attending to two tasks across modalities should have been less disruptive (i.e., smaller likelihood variance) than attending to two tasks within the same modality. Thus, the absence of differences between crossmodal and within‐modality divided attention suggests that the auditory and visual modalities do not have independent processing resources, at least in a spatial localization task.
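The Bayesian causal inference model referenced above can be sketched as follows. This is an illustrative reimplementation in the style of Körding et al. (2007) and Wozny et al. (2010), not the authors' code; all parameter values (the spatial prior, the common-cause prior, and the example stimulus locations) are hypothetical. Under this framing, the reported attention effects map onto model parameters: sharper likelihoods correspond to smaller `sigma_a`/`sigma_v`, and a stronger integration tendency to a larger `p_common`.

```python
import numpy as np

def causal_inference_estimate(x_a, x_v, sigma_a, sigma_v,
                              sigma_p=20.0, mu_p=0.0, p_common=0.5):
    """Model-averaged auditory location estimate given noisy sensory samples.

    x_a, x_v        : auditory and visual sensory samples (deg)
    sigma_a, sigma_v: likelihood SDs (smaller = more precise representation)
    sigma_p, mu_p   : SD and mean of the Gaussian spatial prior
    p_common        : prior probability of a common cause (integration bias)
    """
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of both samples under a common cause (C=1),
    # with the shared source location integrated out analytically.
    var_c1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * var_p
                             + (x_a - mu_p)**2 * var_v
                             + (x_v - mu_p)**2 * var_a) / var_c1) \
              / (2 * np.pi * np.sqrt(var_c1))

    # Likelihood under independent causes (C=2): product of marginals.
    like_c2 = (np.exp(-0.5 * (x_a - mu_p)**2 / (var_a + var_p))
               / np.sqrt(2 * np.pi * (var_a + var_p))) \
            * (np.exp(-0.5 * (x_v - mu_p)**2 / (var_v + var_p))
               / np.sqrt(2 * np.pi * (var_v + var_p)))

    # Posterior probability of a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common
                                    + like_c2 * (1 - p_common))

    # Optimal auditory estimates under each causal structure
    # (reliability-weighted averages).
    s_c1 = (x_a / var_a + x_v / var_v + mu_p / var_p) \
           / (1 / var_a + 1 / var_v + 1 / var_p)
    s_a_c2 = (x_a / var_a + mu_p / var_p) / (1 / var_a + 1 / var_p)

    # Model averaging: weight each estimate by its causal posterior.
    return post_c1 * s_c1 + (1 - post_c1) * s_a_c2

# Discrepant stimuli: integration pulls the auditory estimate toward vision.
print(causal_inference_estimate(x_a=10.0, x_v=0.0, sigma_a=8.0, sigma_v=2.0))
```

Comparing the fitted `sigma` and `p_common` values across attention conditions is one way to express the abstract's claims: selective attention would appear as reduced likelihood variance, and the stronger integration tendency as a larger common-cause prior.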

Meeting abstract presented at VSS 2012

