Abstract
Recent work suggests that the recognition of faces and of non-face objects depends on independent abilities, based on the small amount of shared variance between performance on measures of face recognition (e.g., the Cambridge Face Memory Test, CFMT) and of non-face object recognition (Wilhelm et al., 2010; Wilmer et al., 2010; Dennett et al., 2011). Gauthier et al. (VSS 2013; submitted) challenged this idea, arguing that a domain-general ability (v) underlies both face and object recognition, but that this ability is expressed in a category only when people have sufficient experience with that category. They collected self-ratings of experience for 8 object categories and measured perceptual performance on those categories with the Vanderbilt Expertise Test (VET; McGugin et al., 2012) and on faces with the CFMT. As experience grew, the shared variance between CFMT and VET performance increased monotonically: when subjects had considerable experience with objects, those who performed poorly (well) with objects also performed poorly (well) with faces (Figure 1). Here we show that our neurocomputational model of face and object recognition ("The Model", TM; Dailey & Cottrell, 1999) can account for these results. Input stimuli are preprocessed through Gabor filter banks and PCA, and the resulting representations are used to train an error-driven artificial neural network. We map domain-general ability (v) onto the number of hidden units in TM, and experience (E) onto the number of training epochs on new objects. We train on faces first, and then on another non-face category at the subordinate level. We test our model on 4 categories: faces, butterflies, cars, and leaves. The shared variance between the performance of face experts and all non-face experts increases as experience grows, matching Gauthier et al.'s result qualitatively. Our results suggest that a potential source of between-subject variance in v is the amount of representational resources.
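The pipeline described above (Gabor filter banks, then PCA, then an error-driven network whose hidden-layer size stands in for v and whose epoch budget stands in for E) can be illustrated with a minimal sketch. This is a toy reconstruction on synthetic two-category data with hypothetical parameter choices (filter sizes, learning rate, category structure), not the actual implementation or stimuli of Dailey & Cottrell (1999):

```python
import numpy as np

rng = np.random.default_rng(0)

def gabor_kernel(size=9, theta=0.0, freq=0.2, sigma=3.0):
    # Cosine carrier under a Gaussian envelope, oriented at angle theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(images):
    # FFT-based convolution with a 4-orientation Gabor bank; responses
    # are flattened and concatenated into one feature vector per image.
    feats = []
    for img in images:
        resp = []
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            k = gabor_kernel(theta=theta)
            K = np.zeros_like(img)
            K[:k.shape[0], :k.shape[1]] = k
            resp.append(np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(K))).ravel())
        feats.append(np.concatenate(resp))
    return np.array(feats)

def pca(X, n_components):
    # Project centered features onto their top principal components.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def train_net(X, y, n_hidden, n_epochs, lr=0.5):
    # One-hidden-layer sigmoid network trained by full-batch gradient
    # descent on cross-entropy error. n_hidden plays the role of v,
    # n_epochs the role of E.
    W1 = rng.normal(0, 0.1, (X.shape[1], n_hidden))
    W2 = rng.normal(0, 0.1, (n_hidden, 1))
    for _ in range(n_epochs):
        h = 1 / (1 + np.exp(-X @ W1))
        out = 1 / (1 + np.exp(-h @ W2))
        err = out - y[:, None]                      # dL/dz at the output
        g2 = h.T @ err / len(X)
        g1 = X.T @ (err @ W2.T * h * (1 - h)) / len(X)
        W2 -= lr * g2
        W1 -= lr * g1
    return W1, W2

def accuracy(X, y, W1, W2):
    h = 1 / (1 + np.exp(-X @ W1))
    out = 1 / (1 + np.exp(-h @ W2))
    return np.mean((out[:, 0] > 0.5) == y)

# Demo: two synthetic "categories" (distinct base patterns plus noise),
# standing in for, e.g., a faces-versus-cars discrimination.
base = rng.normal(0, 1, (2, 16, 16))
images, labels = [], []
for c in (0, 1):
    for _ in range(20):
        images.append(base[c] + 0.3 * rng.normal(0, 1, (16, 16)))
        labels.append(c)
labels = np.array(labels, dtype=float)

X = pca(gabor_features(images), n_components=10)
X /= X.std(axis=0)

W1, W2 = train_net(X, labels, n_hidden=8, n_epochs=200)
acc = accuracy(X, labels, W1, W2)
print(f"training-set accuracy: {acc:.2f}")
```

Varying `n_hidden` across simulated subjects and `n_epochs` per category in this kind of setup is the mechanism by which shared variance between categories can emerge only at high experience: a subject's hidden-layer budget constrains performance on every category, but that constraint is visible only once training on a category has progressed far enough.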
Meeting abstract presented at VSS 2014