Abstract
Selectivity in ventral-temporal cortex for domains of visual expertise appears either to overlap with or be isomorphic to “face areas” (FFA and OFA). We extend such findings to audition, exploring whether similar category selectivity is obtained for domains of auditory expertise and, if so, how auditory and visual expertise interact when they specify the same object domain. We studied local (Rhode Island) birders who can accurately identify individual bird species from either appearance or song, and investigated how an individual birder's level of expertise correlated with neural responses (as measured by fMRI). Bird experts and novices performed two behavioral tasks: visual sequential matching on images of cars and local birds (Gauthier et al., 2000), and audio-visual simultaneous matching on images of birds paired with bird songs, in which each auditory-visual pair was drawn from either RI birds or Asian birds. Performance across these two tasks was used as an index of an individual's level of expertise. Interestingly, an individual's d′ scores across the two tasks correlated only weakly, suggesting that birders may be better at one modality or the other when recognizing birds.
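For clarity, d′ here presumably denotes the standard signal-detection sensitivity index computed from each task's hit and false-alarm rates; the abstract does not specify the exact computation, but the conventional definition is
d' = z(\text{hit rate}) - z(\text{false-alarm rate}), \qquad z = \Phi^{-1},
where \Phi^{-1} is the inverse of the standard normal cumulative distribution function.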
These same experts and novices participated in an fMRI study in which we identified face-selective regions of interest (ROIs) using a visual “localizer” and auditory ROIs using a comparison between songs of RI birds and songs of non-avian animals. Similar to the findings of Gauthier et al. (2000), for experts we observed comparable responses to faces and birds in the face-selective visual ROI. The auditory “localizer” revealed a cluster of voxels in inferior and superior temporal cortex selective for familiar bird songs. We also explored the cross-modal BOLD response in both bird-selective regions, that is, responses in the visual ROI when birds were identified by song alone and responses in the auditory ROI when birds were identified by sight alone.
Funded by the Perceptual Expertise Network (#15573-S6), a collaborative award from the James S. McDonnell Foundation, and NSF award #BCS-0094491.