Abstract
We recognize individuals effortlessly and rapidly by looking at a face and also by hearing a person's voice. It has been suggested that visual and auditory identity recognition processes operate in a similar manner (see Belin et al., 2004). Here we tested the interaction of face and voice information in identity recognition. Does bimodal information facilitate or inhibit identity recognition? Further, is recognition ability enhanced when both visual and auditory information are available in a patient who is unable to recognize faces (prosopagnosia)? SB, a 38-year-old male with acquired prosopagnosia, and controls (n = 10) learned the identities of three individuals, each consisting of a face image paired with a voice sample. Subsequently, participants were tested on two unimodal stimulus conditions, 1) faces alone and 2) voices alone, and a bimodal stimulus condition, in which new/learned faces and voices were paired in five different combinations. SB's poor identity recognition in the faces alone condition contrasted with his excellent performance in the voices alone condition. SB's performance in the bimodal conditions improved relative to faces alone but, interestingly, declined relative to voices alone. Controls showed the exact opposite pattern. These findings indicate that the controls' dominant stimulus modality was vision, whereas SB's was audition. Identity recognition was facilitated when the ‘new’ stimulus in the pairing came from the participant's dominant modality but inhibited when the ‘new’ stimulus came from the non-preferred modality. Most surprisingly, these results suggest that SB was unable to ignore visual face information even though he is prosopagnosic. These findings demonstrate perceptual interference from the non-dominant modality when vision and audition are combined for identity recognition and suggest interconnectivity of the visual and auditory identity pathways.