Jason S. Chan, Corrina Maguinness, Simon Dobbyn, Paul McDonald, Henry J. Rice, Carol O'Sullivan, Fiona N. Newell; Aurally aided visual search in depth using ‘virtual’ crowds of people. Journal of Vision 2010;10(7):886. doi: 10.1167/10.7.886.
It is well known that a sound can improve visual target detection when both stimuli are presented from the same location along the horizontal plane (Perrott, Cisneros, McKinley, & D'Angelo, 1996; Spence & Driver, 1996). However, in those studies the auditory and visual stimuli were always congruent along the depth plane. In previous experiments, we demonstrated that it is not enough for an auditory stimulus to be congruent along the horizontal plane; it must be congruent in depth as well. However, congruency along the depth plane may not be crucial in virtual reality (VR): visual distance perception in VR is known to suffer from a compression of space, whereby objects appear closer to the observer than intended. In the present experiment we displayed virtual scenes of people, and the participant's task was to locate a target individual in the visual scene. Congruent or incongruent virtual voice information, containing distance and direction location cues, was paired with the target. We found that response times were facilitated by a congruent sound, and that participants were significantly worse when the sound was incongruent with the visual target along either the horizontal or the depth plane. Ongoing experiments are investigating the effects of moving audio-visual stimuli on target detection in virtual scenes. Our findings suggest that a sound can significantly influence the localisation of visual targets presented in depth in virtual displays, with implications for understanding crossmodal influences on spatial attention and for the design of realistic virtual environments.