Jozsef Fiser, Tunde Szabo, Benjamin Markus, Marton Nagy; Statistical learning of concurrent auditory signals. Journal of Vision 2020;20(11):444. doi: https://doi.org/10.1167/jov.20.11.444.
Due to the highly sequential nature of auditory information and its close link to speech in humans, auditory statistical learning (SL) has been viewed predominantly as a specialized form of learning tied to segmentation in language development. In other modalities, meanwhile, SL has been conceptualized as a general-purpose ability to learn from information presented in parallel, which is crucial for developing the internal representations used in everyday behavior. To resolve this discrepancy, we investigated whether exposure to brief auditory stimuli presented concurrently, without any sequential structure across trials, would lead to the same sort of automatic statistical learning reported earlier with complex spatial patterns in the visual modality. Eight unique sound segments were created by modifying everyday sound patterns (e.g., rolling marble balls, dropping objects) and were grouped into four sound pairs. Following the standard SL paradigm, familiarization auditory "scenes" were created by randomly combining two of the pairs per scene, so that elements of a pair never appeared without each other during familiarization, while each pair co-occurred with all other sounds equally often. Thirty-six participants (Exp1=14, Exp2=22) listened to a sequence of 360 scenes in random order, with all four segments of each scene presented together for 1.5 sec, with no instruction beyond being asked to pay attention. In the subsequent test session, participants chose which of two sound pairs (a true pair vs. a random combination) sounded more familiar. Exp1 tested sensitivity to the joint probabilities of sounds; Exp2 tested sensitivity to their conditional probabilities. In both experiments, participants showed a significantly above-chance preference (p<0.001) for the pairs with the higher joint/conditional probability, fully replicating earlier results obtained in the visual domain.
This suggests that, rather than being specifically language-related, auditory information is used by general-purpose SL to shape internal representations in the same way as in other modalities.
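The familiarization design described above (four fixed pairs, scenes built from two pairs so that pair members always co-occur and cross-pair co-occurrences are balanced) can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' actual stimulus-generation procedure; all names (`segments`, `make_scenes`) are hypothetical.

```python
import random
from itertools import combinations

# Hypothetical labels for the eight sound segments; grouped into four fixed pairs.
segments = [f"sound_{i}" for i in range(8)]
pairs = [tuple(segments[i:i + 2]) for i in range(0, 8, 2)]  # four fixed pairs

def make_scenes(pairs, n_scenes=360, seed=0):
    """Build familiarization scenes: each scene combines two different pairs,
    so pair members never appear apart, and every combination of two pairs
    occurs equally often (360 scenes / 6 combinations = 60 each)."""
    rng = random.Random(seed)
    combos = list(combinations(range(len(pairs)), 2))  # 6 two-pair combinations
    reps = n_scenes // len(combos)                     # equal co-occurrence counts
    scenes = [pairs[a] + pairs[b] for a, b in combos for _ in range(reps)]
    rng.shuffle(scenes)                                # random presentation order
    return scenes

scenes = make_scenes(pairs)
```

Balancing the two-pair combinations (rather than sampling them at random per scene) is what guarantees the abstract's constraint that each pair appears with all other sounds equally often, so only the within-pair joint probability distinguishes true pairs from random combinations at test.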