Abstract
Dividing attention across sensory modalities has been shown to impair performance (e.g. Ciaramitaro et al., 2017), suggesting that attention is a limited resource shared across the senses. Musical training often involves the flexible, concurrent use of two or more senses (e.g. reading musical scores while listening to sounds) and has been shown to reduce the cost of unimodal dual-task performance (Moradzadeh et al., 2015). Yet little is known about how musical experience might reduce the cost of crossmodal divided attention. Here we combined psychophysics and pupillometry to compare the cost of crossmodal divided attention in a dual task between musicians and non-musicians. Fifteen amateur musicians (8 female, 10+ years of musical training) and 17 non-musicians (10 female, 0–5 years of musical training) participated. Each trial contained two intervals, each presenting a binaural white-noise sound together with a rapid serial visual presentation (RSVP) stream of letters at fixation. A Tobii eye tracker monitored eye position and pupil diameter. Participants reported which interval contained an amplitude-modulated sound, with modulation depth varying across trials. Concurrently, they performed one of two visual tasks: in the easy condition, reporting which interval contained white letters (color detection); in the hard condition, reporting which interval contained more 'A's (quantity discrimination). To quantify the cost of crossmodal divided attention, we compared visual task accuracy, auditory thresholds, and mean baseline pupil diameter (−500 to 0 ms relative to stimulus onset) across the easy and hard conditions. We expected musicians to show a smaller cost on auditory performance than non-musicians when attending to the harder versus the easier visual task. As expected, we found a smaller cost for musicians than for non-musicians, but only in male, not female, participants. We found no difference in baseline pupil diameter across tasks or groups, suggesting that participants were equally aroused and engaged in the experiment. Our results highlight the role of musical expertise in crossmodal attention.
Meeting abstract presented at VSS 2018
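
For illustration only, the sketch below shows one way a binaural amplitude-modulated white-noise interval of the kind described in the abstract could be generated; the sample rate, duration, modulation rate, and modulation depth are assumed values for the example, not parameters reported in the study.

    # Illustrative sketch (not the authors' code): a binaural white-noise
    # interval with sinusoidal amplitude modulation of a given depth.
    # All parameter values here are assumptions for demonstration.
    import numpy as np

    def am_white_noise(duration_s=1.0, fs=44100, mod_rate_hz=8.0, mod_depth=0.5):
        """Stereo white noise with amplitude modulation.
        mod_depth = 0 gives an unmodulated interval; 1 gives full modulation."""
        t = np.arange(int(duration_s * fs)) / fs
        envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t)
        noise = np.random.randn(t.size)
        signal = envelope * noise
        signal /= np.max(np.abs(signal))          # normalize to avoid clipping
        return np.column_stack([signal, signal])  # identical waveform in both ears

    # In a two-interval trial, the standard interval would use mod_depth=0 and the
    # target interval a nonzero depth, varied across trials to estimate threshold.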