Abstract
Theories of face perception and recognition are often tested in studies that compare performance with faces to performance with other objects. Tasks that compare faces to a single object category cannot attribute any difference to something unique about faces, nor can they reveal differences among non-face categories. We introduce a test of object recognition, the Vanderbilt Expertise Test (VET), with the goals of measuring general object recognition ability and providing a valid measure of domain-specific performance that may reflect expertise. The VET is modeled after the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006) and measures the ability to recognize visually similar exemplars from eight categories of real-world objects, tested in separate blocks. In Experiment 1, 223 participants completed the VET. Principal components analysis (PCA) revealed that the categories form coherent subsets that are relatively independent of one another. Performance with leaves, owls, butterflies, wading birds, and mushrooms loaded on Factor 1 (47.8% of variance), whereas cars, planes, and motorcycles loaded on Factor 2 (13.9% of variance). An analysis of individual factor scores revealed an effect of participant sex: females had higher scores on Factor 1 than on Factor 2, whereas males showed the opposite pattern. In Experiment 2 (N = 26), we found evidence for convergent validity of the VET as a measure of domain-specific expertise by comparing it to a perceptual matching task used in prior work to measure such expertise. Specifically, perceptual expertise for cars and planes selectively predicted performance on the VET for cars and planes, respectively. In Experiment 3 (N = 66), the VET was used to show that object recognition abilities contribute to face recognition performance, independently of age and holistic processing. Together, our results highlight the importance of considering multiple object categories when studying individual differences and demonstrate an independent contribution of general object recognition abilities to face recognition.
Meeting abstract presented at VSS 2012