Jun Kawahara; Cross-modal contextual cueing: Auditory and visual association guides spatial attention. Journal of Vision 2007;7(9):1059. doi: 10.1167/7.9.1059.
Under incidental learning conditions, a spatial layout can be acquired implicitly and facilitate visual search (the contextual cueing effect). Recently, Ono, Jiang, and Kawahara (2005) found that the spatial context acquired in one trial influences visual search in the next trial, and proposed the ubiquitous statistical learning account, in which the visual system is sensitive to all kinds of statistical consistency. The present study examined whether a contextual cueing effect can develop from an association between auditory events and visual target locations. In the training phase, participants heard a meaningless auditory stimulus for 2 sec and then performed a visual search in which they searched for a T among Ls. In every trial, the target location could be reliably predicted from the preceding auditory stimulus. In the testing phase, the auditory/visual pairings were disrupted, so that the initial auditory stimulus no longer predicted the target location. We examined how search performance (reaction time) improved during the training phase and whether the improvement disappeared when the association was removed in the testing phase. If the ubiquitous statistical learning account holds as a governing rule, search performance should improve in the training phase and be impaired in the testing phase. In a control condition, the association between auditory and visual stimuli was maintained in the testing phase. The results indicate that visual search performance was impaired in the testing phase in the experimental condition only. None of the participants noticed the auditory/visual association. Experiment 2, in which the auditory stimuli were presented 1 sec before the visual display, replicated the cueing effect. These results suggest that visual attention can be guided implicitly by cross-modal association, and they extend the ubiquitous statistical learning account by showing that our cognitive system can acquire statistical consistency across multiple modalities.