Adria E. N. Hoover, Jennifer K. E. Steeves; Visual, auditory and bimodal recognition of people and cars. Journal of Vision 2009;9(8):728. doi: 10.1167/9.8.728.
We have an impressive ability to quickly recognize people from their faces, and we can also recognize identity, although less accurately, from a person's voice. When a voice and a face are paired together, we show an intermediate level of performance. Here we asked whether this dominance of visual over auditory information is specific to face-voice pairs or whether it also holds for other auditory-visual associations. Specifically, we asked whether visual and auditory information interact differently for face-voice pairs than for car-car horn pairs. In two separate experiments, participants learned a set of 10 visual/auditory identities: face-voice pairs and car-car horn pairs. Participants were then tested for recognition of the learned identities in three stimulus conditions: (1) unimodal visual, (2) unimodal auditory, and (3) bimodal. We then repeated the bimodal condition but instructed participants to attend to either the auditory or the visual modality. Identity recognition was best in the unimodal visual condition, followed by the bimodal condition and then the unimodal auditory condition, for both face-voice and car-car horn pairs. Surprisingly, voice identity recognition was far worse than car horn identity recognition. In the bimodal condition with attention directed to the visual modality, the presence of auditory information had no effect. When attention was directed to the auditory modality, however, the presence of visual images slowed participant responses, even more so than in the unimodal auditory condition. This effect was greater for face-voice than for car-car horn pairs. Despite our vast experience with voices, that experience yields no benefit for voice recognition over car horn recognition.
These results suggest that, although visual and auditory modalities interact similarly across different classes of stimuli, the bimodal association is stronger for face-voice pairs where the bias is toward visual information.