Vision Sciences Society Annual Meeting Abstract  |  July 2013
Journal of Vision, Volume 13, Issue 9
Arbitrary sounds facilitate visual search for congruent objects
Author Affiliations
  • L. Jacob Zweig
    Department of Psychology, Northwestern University
  • Satoru Suzuki
    Department of Psychology and Interdepartmental Neuroscience Program, Northwestern University
  • Marcia Grabowecky
    Department of Psychology and Interdepartmental Neuroscience Program, Northwestern University
Journal of Vision July 2013, Vol.13, 1083. doi:10.1167/13.9.1083
L. Jacob Zweig, Satoru Suzuki, Marcia Grabowecky; Arbitrary sounds facilitate visual search for congruent objects. Journal of Vision 2013;13(9):1083. doi:10.1167/13.9.1083.

© ARVO (1962-2015); The Authors (2016-present)

Multisensory correspondences are thought to form through Hebbian-type learning, whereby repeated exposure to coincident auditory and visual signals promotes the formation of auditory-visual associations. Previous research has demonstrated that feature-based multisensory correspondences influence perceptual processing (Iordanescu et al., 2010). For example, when searching for keys among clutter, participants localize the keys faster when the characteristic sound of keys jingling is played at the onset of search. It remains unclear, however, whether such facilitation relies on natural correspondences or whether newly learned associations facilitate processing in the same way. In the present study, we demonstrate that learned arbitrary sounds facilitate visual processing of an associated object, despite the absence of spatial information in the auditory signal. We trained participants to associate each of two pairs of visual stimuli with a single arbitrary sound. The visual stimuli were kaleidoscope images that varied in spatial frequency, shape, size, and color. Learning was verified with a two-alternative task in which participants had to correctly match a presented sound to the associated kaleidoscope image. Following training, participants performed a visual search for target kaleidoscopes from one of the previously learned sets while a target-congruent or target-incongruent sound was presented simultaneously. A spatially uninformative target-congruent sound sped visual search for the associated target object. Accuracy did not differ significantly between the two sound-congruency conditions, and there was no evidence of a speed-accuracy trade-off. These results suggest that audiovisual integration may facilitate visual processing and detection by increasing the salience of target objects.
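The reported pattern (faster search with target-congruent sounds, no accuracy difference) rests on a paired comparison of reaction times across congruency conditions. A minimal sketch of that comparison is below; the reaction times, effect size, and trial count are hypothetical values for illustration, not data from the study:

```python
import random
import statistics

random.seed(0)

# Hypothetical per-trial search times (seconds). Congruent-sound
# trials are simulated as slightly faster, mimicking the reported
# direction of the effect.
congruent = [random.gauss(1.10, 0.15) for _ in range(40)]
incongruent = [random.gauss(1.25, 0.15) for _ in range(40)]

# Paired-differences t statistic (one-sample t on the per-trial
# differences), computed with the standard library only.
diffs = [i - c for c, i in zip(congruent, incongruent)]
mean_d = statistics.mean(diffs)
se_d = statistics.stdev(diffs) / len(diffs) ** 0.5
t_stat = mean_d / se_d

print(f"mean congruent RT:   {statistics.mean(congruent):.3f} s")
print(f"mean incongruent RT: {statistics.mean(incongruent):.3f} s")
print(f"paired t (df={len(diffs) - 1}): {t_stat:.2f}")
```

A parallel comparison on per-condition accuracy, with no reliable difference, would support the abstract's claim that the RT benefit is not a speed-accuracy trade-off.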

Meeting abstract presented at VSS 2013
