Vision Sciences Society Annual Meeting Abstract  |   July 2013
Multisensory Integration in Visual Pattern Recognition: Music Training Matters
Author Affiliations
  • Avi Aizenman
    Volen Center for Complex Systems & Department of Psychology, Brandeis University, Waltham MA
  • Jason Gold
    Department of Psychological and Brain Sciences, Indiana University, Bloomington IN
  • Robert Sekuler
    Volen Center for Complex Systems & Department of Psychology, Brandeis University, Waltham MA
Journal of Vision July 2013, Vol.13, 1082. doi:10.1167/13.9.1082
Avi Aizenman, Jason Gold, Robert Sekuler; Multisensory Integration in Visual Pattern Recognition: Music Training Matters. Journal of Vision 2013;13(9):1082. doi: 10.1167/13.9.1082.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Humans’ talent for pattern recognition requires collaboration between perception and memory. Recently, Michalka et al. (2012) showed that a task involving rapid presentation of visual stimuli can activate cortical regions normally implicated in auditory attention. Because music training is known to fine-tune the human auditory system (Kraus & Chandrasekaran, 2010), we hypothesized that such training might improve performance on rapid temporal tasks, not only for auditory stimuli but also for visual and multisensory stimuli. To test this hypothesis, we adapted a paradigm introduced at VSS 2012 (Aizenman et al., 2012). Subjects received randomly generated eight-item sequences of luminances, tones, or both together, presented at 8 Hz. Subjects judged whether the second four items in a sequence were an identical repetition of the first four. Performance was evaluated as d′. Four trial types were presented in separate blocks: Auditory alone, Visual alone, AV-congruent (luminance sequences accompanied by tones whose frequencies were cross-modally matched to the luminances), and AV-incongruent (luminance sequences accompanied by randomly generated, incongruent tones). On both types of AV trials, subjects were instructed to base their judgments on the luminances alone, ignoring the tones when deciding whether the luminance sequence repeated. Fifteen subjects with music training (6-15 years) and fifteen subjects with minimal music training (<3 years) were tested. Overall, music-trained subjects outperformed non-music-trained subjects (p<.01). For both groups, performance was significantly better on AV-congruent trials than on all other trial types: when auditory and visual sequences are in perceptual correspondence, subjects can exploit that correspondence to enhance judgments that are nominally visual. Our results are consistent with the hypothesis that music training improves performance with rapidly presented stimulus sequences, even visual ones.
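For readers unfamiliar with the sensitivity measure used above: the abstract evaluates performance as d′, the signal-detection index for a yes/no task such as this repeat/no-repeat judgment. A minimal sketch of the standard computation, d′ = z(H) − z(FA), is given below; the trial counts and the log-linear correction for extreme rates are illustrative assumptions, not the authors' actual analysis code.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' = z(H) - z(FA) from raw trial counts for a yes/no task.

    A log-linear correction (adding 0.5 to each cell) keeps the hit and
    false-alarm rates away from exactly 0 or 1, where z is undefined.
    The choice of correction is an assumption for this sketch.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)                    # hit rate
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)  # false-alarm rate
    z = NormalDist().inv_cdf                                     # inverse standard-normal CDF
    return z(h) - z(fa)

# Hypothetical example: 40 "repeat" trials (35 hits, 5 misses) and
# 40 "no-repeat" trials (10 false alarms, 30 correct rejections).
print(round(d_prime(35, 5, 10, 30), 2))
```

Chance performance (hit rate equal to false-alarm rate) yields d′ = 0; higher values indicate better discrimination of repeating from non-repeating sequences.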

Meeting abstract presented at VSS 2013
