Abstract
The ability to identify people is essential for everyday social interactions. It can be achieved quickly using identity cues such as a person's face and the sound of their voice. We asked how people with one eye, who have reduced visual input and altered auditory (Hoover, Harris & Steeves, 2012, EBR) and audiovisual processing (Moro & Steeves, 2011, EBR), use face and voice information for person identity recognition. We investigated person (face and voice) and object (car and horn) identity recognition using an old/new paradigm. Participants were presented with pairs of faces and voices (Experiment 1), as well as cars and horns (Experiment 2), and were asked to remember the identity pairings. Recognition of visual, auditory and audiovisual (congruent and incongruent pairings) identities in people with one eye was similar to that of binocular and monocular viewing controls. However, the addition of auditory information facilitated bimodal identity recognition for people with one eye but not for controls. The addition of visual information facilitated bimodal object identity recognition but not bimodal person recognition for people with one eye, whereas controls showed the opposite pattern. Binocular viewing controls had better sensitivity for congruent compared to incongruent audiovisual pairings, indicating that they based their person and object recognition on their dominant modality (vision), whereas people with one eye did not. These results indicate that people with one eye may have adaptive strategies, such as not relying on vision as the dominant modality, that allow them to perform similarly to controls. Changes in underlying neural structure and connectivity may provide the compensatory mechanism for the loss of binocular visual input.
Meeting abstract presented at VSS 2016