Abstract
Tsui and Tseng (2011) found that bimodal presentation of a grammar-like rule (AAB) helps 8- to 10-month-old preverbal infants learn a rule they otherwise fail to acquire when it is presented visually alone (happy-upset-happy emotional cartoon faces) or auditorily alone (corresponding emotional sounds: laughing-crying-laughing). However, this bimodal facilitation did not occur when geometric shapes were accompanied by recorded syllables. We investigated what underlies this difference in two experiments.
In Experiment 1, we tested whether audio-visual congruency was critical by habituating 15 infants to the same AAB rule with emotional cartoon faces (e.g., a happy face) paired with incongruent emotional sounds (e.g., a crying sound). At dishabituation, we found no difference in looking time between novel and learned rules, providing no evidence of learning.
In Experiment 2, we tested whether emotional content is essential for bimodal facilitation by using an emotionless cartoon face speaking syllables to habituate seventeen 8- to 10-month-olds to the same AAB rule. At dishabituation, infants looked significantly longer at novel rules (ABB and ABA), demonstrating successful acquisition of the habituated rule.
Our results indicate that both the relevancy and the congruency of the audio-visual pairing matter for facilitating infants' abstract rule learning. Syllables are learned better when associated with a human face than with arbitrary visual shapes, and the learning effect is compromised when the emotional congruence of the audio and visual stimuli is reduced. This suggests that object-based cross-sensory integration occurs before the abstract rule is extracted from the bimodal presentation.
Meeting abstract presented at VSS 2013