October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract | October 2020
Statistical learning of concurrent auditory signals
Author Affiliations & Notes
  • Jozsef Fiser
    Central European University, Budapest, Hungary
    Center for Cognitive Computation, Budapest, Hungary
  • Tunde Szabo
    Central European University, Budapest, Hungary
    Center for Cognitive Computation, Budapest, Hungary
  • Benjamin Markus
    Central European University, Budapest, Hungary
    Center for Cognitive Computation, Budapest, Hungary
  • Marton Nagy
    Central European University, Budapest, Hungary
    Center for Cognitive Computation, Budapest, Hungary
    Eötvös Loránd University, Budapest, Hungary
  • Footnotes
    Acknowledgements: This work was supported by grant ONRG-NICOP-N62909-19-1-2029.
Journal of Vision October 2020, Vol.20, 444. doi:https://doi.org/10.1167/jov.20.11.444
Abstract

Due to the highly sequential nature of auditory information and its close link to speech in humans, auditory statistical learning (SL) has been viewed predominantly as a specialized form of learning related to segmentation in language development. Meanwhile, in other modalities, SL has been conceptualized as a general-purpose ability to learn from information presented in parallel, which is crucial for developing the internal representations used in everyday behavior. To resolve this discrepancy, we investigated whether exposure to brief auditory stimuli presented concurrently, without any sequential structure across trials, would lead to the same kind of automatic statistical learning as reported earlier with complex spatial patterns in the visual modality. Eight unique sound segments were created by modifying everyday sound patterns, such as rolling marbles or dropping objects, and were grouped into four sound pairs. Following the standard SL paradigm, familiarization auditory "scenes" were created by randomly combining two of the pairs per scene, so that the elements of a pair never appeared without each other during familiarization, while each pair was combined with every other pair equally often. Thirty-six participants (Exp 1: n=14; Exp 2: n=22) listened to a sequence of 360 scenes in random order, with all four segments of each scene presented together for 1.5 s, with no instruction beyond being asked to pay attention. In the subsequent test session, participants chose which of two sound pairs (a true pair vs. a random combination) sounded more familiar. Exp 1 tested sensitivity to the joint probabilities of the sounds, and Exp 2 tested sensitivity to their conditional probabilities. In both experiments, participants showed a significantly above-chance preference (p<0.001) for the pairs with the higher joint/conditional probability, fully replicating earlier results obtained in the visual domain.
This suggests that, rather than being specifically language-related, auditory information is exploited by general-purpose SL to shape internal representations in the same way as in other modalities.
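To make the design concrete, the familiarization stream described above can be sketched as follows. This is a minimal illustration, not the authors' stimulus code: the sound labels are placeholders (the abstract names only rolling marbles and dropping objects), and the 60-repetitions-per-combination split is an assumption that follows from balancing C(4, 2) = 6 pair combinations over 360 scenes.

```python
import itertools
import random
from collections import Counter

# Hypothetical labels for the eight sound segments.
sounds = ["marble", "drop", "bell", "click", "whoosh", "thud", "ring", "scrape"]

# Four fixed "true" pairs: elements of a pair never occur apart.
pairs = [(sounds[i], sounds[i + 1]) for i in range(0, 8, 2)]

# Each scene combines two different pairs. With C(4, 2) = 6 combinations,
# 60 repetitions of each give the 360 familiarization scenes, so every
# pair co-occurs with every other pair equally often.
combos = list(itertools.combinations(pairs, 2))
scenes = [a + b for a, b in combos for _ in range(60)]
random.shuffle(scenes)

# Sanity check: count how often each of the 6 pair combinations appears.
cooccurrence = Counter(
    frozenset(p for p in pairs if p[0] in scene) for scene in scenes
)
```

Under this construction, joint probabilities distinguish true pairs from random combinations (pair elements always co-occur), while cross-pair co-occurrences are uniform, matching the constraint stated in the abstract.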
