Research Article  |   April 2010
Perceptual expertise with objects predicts another hallmark of face perception
Journal of Vision April 2010, Vol.10, 15. doi:10.1167/10.4.15
Rankin Williams McGugin, Isabel Gauthier; Perceptual expertise with objects predicts another hallmark of face perception. Journal of Vision 2010;10(4):15. doi: 10.1167/10.4.15.

Abstract

There is no shortage of evidence to suggest that faces constitute a special category in human perception. Surprisingly little consensus exists, however, regarding the interpretation of these results. The question persists: what makes faces special? We address this issue via one hallmark of face perception—its striking sensitivity to low-level image format—and present evidence in favor of an expertise account of the specialization of face perception. In accordance with earlier work (I. Biederman & P. Kalocsai, 1997), we find that manipulating one image into two versions that are complementary in spatial frequency (SF) and orientation information disproportionately impairs face matching relative to object matching. Here, we demonstrate that this characteristic of face processing is also found for cars, with its magnitude predicted by the observers' level of expertise with cars. We argue that the bar needs to be raised for what constitutes proper evidence that face perception is special in a manner that is not related to our expertise in this domain.

Introduction
Face perception is argued to be “special” in part on the basis of behavioral effects that distinguish it from the perception of objects. For instance, face perception suffers more than object perception when images are turned upside down (the inversion effect; Yin, 1969) and selective attention to half of a face is easier when face halves are aligned than misaligned, a composite effect (Carey & Diamond, 1994; Young, Hellawell, & Hay, 1987) that is not observed for non-face objects. Such phenomena are generally not disputed and are often taken to indicate that faces are processed in a more holistic manner than non-face objects, relying less on part decomposition. The interpretation of these findings, however, is a source of contention. One account invokes a process of specialization due to experience individuating faces (Carey, Diamond, & Woods, 1981; Curby & Gauthier, 2007; Diamond & Carey, 1986; Gauthier, Curran, Curby, & Collins, 2003; Gauthier & Tarr, 1997, 2002; Rossion, Kung, & Tarr, 2004). According to this theory, expertise with individuating objects from non-face categories would result in similar behavioral hallmarks. A competing account suggests that these effects reflect processes that are unique to face perception, either due to innate constraints or to preferential exposure early in life (Kanwisher, 2000; Kanwisher, McDermott, & Chun, 1997; McKone, Kanwisher, & Duchaine, 2007). Resolving this debate is important for the study of perception and memory. If face perception is truly unique, it is reasonable to seek qualitatively different models to account for face and object recognition. In contrast, if hallmarks of face perception arise as a function of our expertise with objects, then more efforts should be devoted to the design of computational models that can account for the continuum of novice to expert perception. 
Why is there yet no resolution to this question? Although there are scores of studies contrasting face perception to novice object perception and highlighting the special character of face processing (e.g., Biederman, 1987; Tanaka & Farah, 1993; Tanaka & Sengco, 1997; Yin, 1969; Young et al., 1987), there are fewer studies directly addressing the role of perceptual expertise. Most studies in this latter set conclude that face-like behaviors can be obtained with both real-world and laboratory-trained objects of expertise (e.g., Diamond & Carey, 1986; Gauthier et al., 2003; Gauthier, Skudlarski, Gore, & Anderson, 2000; Gauthier & Tarr, 2002; Rossion et al., 2004; Tanaka & Curran, 2001; Xu, 2005), while a few studies report no effect of expertise (e.g., Nederhouser, Yue, Mangini, & Biederman, 2007; Robbins & McKone, 2007; Yue, Tjan, & Biederman, 2006). Nonetheless, a recent review argued that many of the published expertise effects are small or inconclusive, and that the holistic processing characteristic of face perception is not the result of expertise (McKone et al., 2007). Various conclusions drawn in this review have since been empirically challenged. For example, a study contrasting performance for faces and cars in a short-term memory paradigm revealed a robust inversion effect for cars, comparable to that observed for faces, only in car experts (Curby, Glazek, & Gauthier, 2009). Another study (Wong, Palmeri, & Gauthier, 2009) revealed that recently acquired expertise with novel objects results in a composite effect. Both inversion and composite effects have been used as measures of holistic processing and/or the related construct of configural processing (Carey & Diamond, 1994; Farah, Wilson, Drain, & Tanaka, 1998; Tanaka & Farah, 1993; Yin, 1969). Therefore, these results reinforce prior claims that holistic and configural processing are domain-general strategies adopted by perceptual experts. 
It may be reasonable to assume that other effects indexing holistic and configural processing will likewise be explained by expertise. This is important so that we do not unnecessarily reopen the debate every time the same processes are operationalized in a new task. However, there could still be measures that capture other aspects of face perception, even related to configural and/or holistic processing, that are truly independent of expertise. There is evidence for such a hallmark of face processing, which so far defies an expertise account: its marked sensitivity to manipulations of the spatial frequency (SF) content of images (Biederman & Kalocsai, 1997; Collin, Liu, Troje, McMullen, & Chaudhuri, 2004; Williams, Willenbockel, & Gauthier, 2009; Yue et al., 2006). 
Face perception is highly sensitive to SF filtering (Fiser, Subramaniam, & Biederman, 2001; Goffaux, Gauthier, & Rossion, 2003) and to other types of manipulations of image format such as contrast reversal (Gaspar, Bennett, & Sekuler, 2008; Hayes, 1988; Subramaniam & Biederman, 1997) or the use of line drawings (e.g., Bruce, Hanna, Dench, Healey, & Burton, 1992). In contrast, such manipulations hardly affect object recognition (Biederman, 1987; Biederman & Ju, 1988; Liu, Collin, Rainville, & Chaudhuri, 2000; Nederhouser et al., 2007). This led Biederman and Kalocsai (1997) to suggest that faces and objects are represented differently in the visual system. They proposed that non-face objects are encoded as structural descriptions of parts that can be recovered from images based on non-accidental properties found in an edge description of the object (Biederman, 1987). Face representations, on the other hand, would preserve SF and orientation information from V1-type cell outputs (although with translation and scale invariance), accounting for why face perception is highly sensitive to spatial manipulations. 
In a test of this hypothesis, complementary images were created by dividing the SF-by-orientation space of the raw image into an 8 × 8 radial matrix and filtering out every odd diagonal of cells to form one version of the image and every even diagonal of cells to form the second image (Biederman & Kalocsai, 1997). These two versions of the same images are complementary as they do not overlap in any specific combination of SF and orientation (Figure 1). Participants were poorer at matching complementary faces relative to identical faces, while matching of chairs was not affected by this manipulation (see Collin et al., 2004, for a similar result in a different task). This SF complementation effect for faces was replicated in a recent study, although a robust SF complementation effect was also observed for cars, chairs, and inverted faces, albeit significantly smaller than that observed with upright faces (Williams et al., 2009). 
Figure 1
 
Spatial frequency (SF) and orientation filtering. First, the Fast Fourier Transform (FFT) is applied to a raw image (either face or car). Two complementary filters (8 × 8 radial matrices) are then applied to the Fourier-transformed image to preserve alternating combinations of the SF–orientation content from the raw image. The information preserved with each filter is represented by the white checkers. Finally, when returned to the spatial domain via the inverse FFT, the resulting complementary pair of images shares no overlapping combinations of SF and orientation information.
One study addressed whether the large SF complementation effect for upright faces may be due to perceptual expertise (Yue et al., 2006) by manipulating experience with novel objects called blobs. Regardless of their training experience with blobs, participants showed robust effects of complementation for faces but not blobs. A number of limitations in that study motivated us to reexamine this question. First, training with blobs has never been shown to result in any face-like behavioral effects. In fact, the only other study using this training protocol and these stimuli failed to find face-like sensitivity to contrast reversal in blob experts (Nederhouser et al., 2007). These null results are difficult to interpret, given the many studies using laboratory-trained experts (Gauthier, Anderson, Tarr, Skudlarski, & Gore, 1997; Gauthier, Tarr, Anderson, & Gore, 1999; Gauthier, Williams, Tarr, & Tanaka, 1998; Nishimura & Maurer, 2008; Rossion, Gauthier, Goffaux, Tarr, & Crommelinck, 2002; Wong et al., 2009) and real-world experts (Busey & Vanderkolk, 2005; Gauthier & Curby, 2005; Gauthier et al., 2003; Gauthier, Skudlarski et al., 2000; Gauthier, Tarr et al., 2000; Xu, 2005) that have produced behavioral and neural face-like effects using a wide range of stimuli. Second, in blob studies (Nederhouser et al., 2007; Yue et al., 2006) participants were tested with transfer blobs that were structurally different from the trained blobs, possibly preventing generalization of learned expertise (see Bukach, Gauthier, & Tarr, 2006, for a discussion of this issue). Finally, blobs have limited texture and minimal high SF information relative to faces, factors that could have reduced the effects of SF filtering. 
We sought to explore the role of expertise in the SF complementation effect by testing participants with a range of expertise with cars. This has important advantages. First, expertise resulting from years of experience with a category is more likely to yield a large effect size than that following a few hours of laboratory training. Second, we use a proven method to quantify perceptual expertise with cars, validated by its prediction of other face-like effects, both neurally (Gauthier et al., 2003; Gauthier, Tarr et al., 2000; Rossion, Curran, & Gauthier, 2002; Xu, 2005) and behaviorally (Curby et al., 2009; Gauthier et al., 2003). This method indexes performance in a car and bird matching task, where performance with birds in a group of participants who are not bird experts serves as a control for individual differences related to motivation and unrelated to car expertise, the variable of interest. Accordingly, a “Car Expertise Index” is defined as the difference in discriminability of cars and birds: Car d′ − Bird d′. 
In two experiments, we compared the SF complementation effect for faces and cars by asking participants to judge if pairs of sequentially presented images showed the same item. We manipulated whether the images were identical or complementary. Experiment 1 adopted an approach identical to that used in prior work (Biederman & Kalocsai, 1997; Yue et al., 2006). Specifically, stimulus pairs that were identical or complementary were randomized and, because different trials cannot be assigned to a condition (i.e., different exemplars are neither identical nor complementary in SF content), analyses focused exclusively on accuracy for same trials. In Experiment 2, we blocked identical and complementary trials so that signal detection analysis could be used to exclude differences in response biases, which can affect faces and objects differentially in this task (Williams et al., 2009). In the second experiment, faces and cars were presented both upright and upside down. If expertise with cars results in holistic processing and if holistic processing is particularly susceptible to SF manipulations (Goffaux, Hault, Michel, Vuong, & Rossion, 2005; Goffaux & Rossion, 2006, but see Cheung, Richler, Palmeri, & Gauthier, 2008), we would expect increased SF sensitivity with increased expertise. 
Methods
Participants
Experiment 1. Thirty-nine individuals (15 males, mean age 22 years) volunteered. 
Experiment 2. Forty-three individuals (18 males, mean age 21 years) who had not participated in Experiment 1 volunteered. 
All participants had normal or corrected-to-normal visual acuity. All received a small honorarium or course credit and provided written informed consent. The study was approved by the Institutional Review Board at Vanderbilt University. 
Stimuli
Experiment 1. Stimuli were digitized, eight-bit grayscale images of 72 faces with hair cropped (obtained from the Max-Planck Institute for Biological Cybernetics in Tübingen, Germany) and 72 cars (obtained from www.tirerack.com). All images were filtered with a method used in prior work (Biederman & Kalocsai, 1997; Williams et al., 2009; Yue et al., 2006): the original images were subjected to a Fast Fourier Transform (FFT) and filtered by two complementary filters (Figure 1). Each filter eliminated the highest (above 181 cycles/image) and lowest (below 12 cycles/image, corresponding to approximately 7.5 cycles per face width (c/fw)) spatial frequencies. The remaining area of the Fourier domain was divided into an 8-by-8 matrix of 8 orientations (in successive steps of 22.5 degrees) by 8 SFs (covering four octaves in steps of 0.5 octaves). This manipulation created pairs of complementary images: half of the frequency–orientation combinations, forming a radial checkerboard pattern in the Fourier domain, were ascribed to one image, and the remaining combinations were assigned to the complementary member of the pair. As such, both members of a complementary pair contained all 8 SFs and all 8 orientations, but in unique combinations; the two complementary images shared no common information in the Fourier domain. Filtered images were then converted back to images in the spatial domain via the inverse FFT. The final stimuli were resized to two formats, with images subtending either 2° or 4° of visual angle. 
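The filtering scheme described above can be sketched in a few lines of Python. This is a simplified illustration, not the authors' implementation: the hard-edged binning, the exact band edges, and the absence of smooth filter transitions are our assumptions.

```python
import numpy as np

def complementary_pair(img, lo=12.0, hi=181.0, n_sf=8, n_ori=8):
    """Split a grayscale image into two complementary versions that share no
    SF-by-orientation cell, following the radial-checkerboard scheme described
    in the Methods (a sketch; hard-edged bins are an assumption)."""
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h      # cycles/image, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w      # cycles/image, horizontal
    FX, FY = np.meshgrid(fx, fy)
    radius = np.hypot(FX, FY)                        # spatial frequency
    angle = np.mod(np.arctan2(FY, FX), np.pi)        # orientation in [0, pi)

    # 8 SF bands in half-octave steps from lo to hi, 8 orientation bands of 22.5 deg
    sf_edges = lo * 2.0 ** (np.arange(n_sf + 1) * np.log2(hi / lo) / n_sf)
    sf_bin = np.digitize(radius, sf_edges) - 1       # -1 below lo, n_sf above hi
    ori_bin = np.minimum((angle / (np.pi / n_ori)).astype(int), n_ori - 1)

    in_band = (sf_bin >= 0) & (sf_bin < n_sf)        # band-pass between lo and hi
    checker = (sf_bin + ori_bin) % 2 == 0            # radial checkerboard pattern

    F = np.fft.fftshift(np.fft.fft2(img))
    out = []
    for keep in (checker, ~checker):                 # complementary masks
        mask = in_band & keep
        out.append(np.real(np.fft.ifft2(np.fft.ifftshift(F * mask))))
    return out                                       # two complementary images
```

Because the two masks partition the band-passed Fourier domain, the two output images contain all 8 SF bands and all 8 orientations but no shared SF-by-orientation cell.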
Experiment 2. The same images as in Experiment 1 were used, in their upright and inverted (flipped in the vertical, up–down direction) versions. 
Matching task
Experiment 1. We used a 2 × 2 within-participant factorial design, manipulating category (face, car) and SF–orientation content (identical, complementary). A total of 1152 trials was arranged into 6 blocks by category: 3 face blocks and 3 car blocks of 192 trials each. Block order was randomized across subjects, and breaks were offered every 64 trials. Participants began with eight practice trials selected randomly from all possible face and car trials. On each trial, participants judged whether a pair of sequentially presented images (either two faces or two cars) was of the same identity. Relative to the study image, the probe image could be (1) the same identity and the same SF (i.e., the exact image), (2) the same identity and a complementary SF, or (3) a different exemplar altogether, though also filtered to contain only alternating SF–orientation components. Participants were instructed to make their judgments based on identity alone, regardless of differences in image size or SF content (described to subjects as “blurriness”). Each trial began with a 500-ms fixation cross, followed by a target stimulus (face or car) in the center of the screen for 200 ms. After a 300-ms interstimulus interval a probe stimulus appeared for 200 ms. Participants had to make a same/different judgment on this image within 1800 ms. All images were presented at the center of the screen and image size was selected randomly for each stimulus (either 2° visual angle or 4° visual angle) to prevent image matching (Yue et al., 2006). 
Experiment 2. We used a 2 × 2 × 2 repeated-measures design, manipulating (1) category (face or car), (2) SF–orientation content (identical or complementary), and (3) orientation (upright or inverted). The procedure differed from Experiment 1 in three ways. First, the orientation of stimuli varied randomly across trials (both stimuli within a trial were always of the same orientation). Second, image size always differed from study to probe (2° to 4° or 4° to 2°), thereby eliminating cases where study and probe could randomly occur at the same size, so no part of the effect could be attributed to image matching. Third, trials were blocked according to SF content (identical or complementary) rather than stimulus category (face or car); hence, different trials could be assigned to either the identical or complementary condition, allowing for the computation of discriminability ( d′) and response criterion ( C) (provided as supporting online information). 
A total of 1152 trials was grouped into 6 blocks: 3 blocks of identical SF–orientation pairs and 3 blocks of complementary SF–orientation pairs, where each block contained 192 trials. Stimulus category and orientation varied randomly within a block, allowing 48 trials per block (or 288 trials total) for each condition (i.e., upright faces, upright cars, inverted faces, and inverted cars). Block order was randomized across subjects. Each subject began with 12 practice trials, and breaks were offered every 64 trials. 
Expertise test
Following the matching task with filtered images, participants in both experiments completed a test of car expertise to quantify their skill at matching cars (Curby et al., 2009; Gauthier, Curby, Skudlarski, & Epstein, 2005; Gauthier, Skudlarski et al., 2000; Grill-Spector, Knouf, & Kanwisher, 2004; Rossion et al., 2004; Xu, 2005). Participants made same/different judgments on car images (at the level of make and model, regardless of year) and on bird images (at the level of species). There were 112 trials for each object category. On each trial, the first stimulus appeared for 1000 ms, followed by a 500-ms mask. A second stimulus then appeared and remained visible until a same/different response was made or 5000 ms elapsed. 
A separate sensitivity score was calculated for cars (Car d′) and birds (Bird d′). The difference between these measures (Car d′ − Bird d′) yields a Car Expertise Index for each participant. Performance with birds provides a baseline for individual differences in motivation or attention that would not be due to experience with cars. Figure 2 shows the distribution of car d′ and bird d′ scores for each experiment. As we did not screen participants for experience with birds, we also report the results for a subset of our sample, excluding participants whose performance with birds may suggest a moderate level of experience with birds (those with Bird d′ > 1: n = 8 out of 39 in Experiment 1; n = 10 out of 43 in Experiment 2). 
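Concretely, the index can be computed as follows. This is a minimal sketch: the example trial counts and the log-linear correction for extreme rates are our assumptions, not details specified in the article.

```python
from statistics import NormalDist  # stdlib inverse normal CDF

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') for a same/different task: z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5 to counts, +1 to totals) guards against rates of
    exactly 0 or 1; the article does not specify which correction, if any, was used."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def car_expertise_index(car_counts, bird_counts):
    """Car Expertise Index = Car d' - Bird d', with bird performance serving as a
    baseline for individual differences unrelated to car expertise."""
    return dprime(*car_counts) - dprime(*bird_counts)
```

For instance, a participant with accurate car matching (e.g., 50 hits, 6 misses, 10 false alarms, 46 correct rejections) but near-chance bird matching would receive a high positive index.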
Figure 2
 
Distribution of car d′ and bird d′ values. (a) Experiment 1 (N = 39). Scatterplot showing the correlation between car d′ (SD = 0.83) and bird d′ (SD = 0.24) in Experiment 1: r = 0.18, n.s. (b) Experiment 2 (N = 43). Scatterplot showing the correlation between car d′ (SD = 0.56) and bird d′ (SD = 0.31) in Experiment 2: r = 0.08, n.s.
Results
Experiment 1. Using accuracy (hit rates), we replicated the finding that SF complementation impairs face matching more than car matching (Biederman & Kalocsai, 1997; Collin et al., 2004; Yue et al., 2006; Figure 3a). A 2 × 2 ANOVA on accuracy for same trials revealed better performance for cars than faces (F(1,38) = 47.32, p < 0.0001), better performance on identical than complementary trials (F(1,38) = 423.81, p < 0.0001), and an interaction between Category and SF content (F(1,38) = 179.76, p < 0.0001). Bonferroni post hoc tests (per-comparison alpha (αPC) = 0.0125) showed that the superior performance for cars was driven by performance on complementary trials (p < 0.0001), with a non-significant difference between cars and faces on identical trials (p = 0.26). Although the SF complementation effect (accuracy on identical pairs > accuracy on complementary pairs) was significant for both cars and faces, an ANOVA computed directly on SF complementation values (identical − complementary) confirmed a larger effect of complementation for face matching relative to car matching (F(1,38) = 179.76, p < 0.0001). 
Figure 3
 
Experiment 1 results ( N = 39). (a) Mean accuracy values for the same–different matching of identical and complementary faces and cars. Error bars represent the standard error of the mean. (b) Correlation plot showing the relationship between the Complementation Effect (accuracy on Identical trials − accuracy on Complementary trials) in the upright car condition and the Car Expertise Index (Car d′ − Bird d′). Gray squares represent the subset of the population with Bird d′ scores greater than 1 (n = 8 out of 39). The linear regression is calculated considering the remaining participants ( n = 31) and shows a significant positive correlation ( r = 0.42, p < 0.05).
We also compared the profile of results when the image size was the same within a matching trial (2° to 2° or 4° to 4°) versus when it was different (2° to 4° or 4° to 2°). First we recalculated the ANOVA on accuracy from same-identity matching trials to introduce a new factor, Size, with two levels: same and different. There was no difference in the effect of complementation for same- and different-size trials (F(1,38) = 0.935, n.s.). In addition, we computed separate ANOVAs for same- and different-size trials, observing no qualitative differences across conditions: same-size trials (face > car: F(1,38) = 100.06, p < 0.001; identical > complementary: F(1,38) = 380.52, p < 0.001; interaction: F(1,38) = 137.76, p < 0.001) and different-size trials (face > car: F(1,38) = 9.28, p < 0.01; identical > complementary: F(1,38) = 286.95, p < 0.001; interaction: F(1,38) = 128.95, p < 0.001). These results suggest that low-level image-based matching cannot explain the observed complementation effect. 
Moreover, by correlating the magnitude of each individual's complementation effect for cars and faces with his or her Car Expertise Index, we show that car expertise is associated with the magnitude of the SF complementation effect for cars, while it does not predict the same effect for faces ( Table 1, Figure 3b). This expertise effect is of comparable magnitude whether we use the bird baseline or not to quantify individual differences in expertise (i.e., Car Expertise Index versus Car d′, respectively). The correlation grows when we restrict the range of performance on the matching task with birds, removing subjects whose performance suggests a moderate level of bird expertise, despite the consequence of a smaller sample size. Interestingly, this does not depend on the use of the bird baseline in our Expertise Index: the improvement exists even when we use Car d′ to quantify expertise but exclude these participants with high bird scores. This is inconsistent with the idea that the car expertise of participants with elevated Bird d′ could be underestimated when we compute the Expertise Index (Williams et al., 2009). Instead, some participants with relatively high bird-matching scores may use a qualitatively different strategy than most when matching any visually similar objects, thereby obtaining car-matching scores that reflect an advantage that is not due to experience. 
Table 1
 
Correlation, r, between the complementation effect (performance on identical trials − performance on complementary trials) and an independent measure of car sensitivity (Car d′ or Delta d′). For each condition of both experiments, correlations are given for a subpopulation of participants, as well as the entire population.
                   Bird d′ < 1                          All participants
                   Car d′     Car d′ − Bird d′          Car d′     Car d′ − Bird d′
Experiment 1
  Cars upright     0.41*      0.42*                     0.35*      0.32#
  Faces upright    0.26       0.26                      0.21       0.20
Experiment 2
  Cars upright     0.36*      0.35*                     0.19       0.14
  Cars inverted    0.20       0.21                      0.03       −0.05
  Faces upright    −0.06      0.04                      −0.14      −0.09
  Faces inverted   −0.15      −0.12                     −0.11      −0.18

#p = 0.05, *p < 0.05.

These results are reported in the form of difference scores (i.e., Car d′ − Bird d′, and Identical d′ − Complementary d′), consistent with prior published work using the expertise index (Curby et al., 2009; Curby & Gauthier, 2007; Gauthier et al., 2003, 1999; Gauthier, Tarr et al., 2000; Rossion, Gauthier et al., 2002; Tanaka & Curran, 2001; Tanaka & Taylor, 2001; Tarr & Gauthier, 2000) and the complementation index (Williams et al., 2009). We considered whether normalizing these values would influence our results and include in Table 2 the correlations between these indices when normalized in the following manner: Expertise index = (Car d′ − Bird d′) / (Car d′ + Bird d′); Complementation effect = (Identical d′ − Complementary d′) / (Identical d′ + Complementary d′). The results were not qualitatively different using normalized indices. However, because prior studies found that the expertise index as a difference score yielded very high correlations (∼0.9) with fMRI activation in the FFA (e.g., Gauthier et al., 2005), and because normalization of these measures actually resulted in less normal distributions (not shown here), we provisionally argue that difference scores represent better measures. 
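The difference scores and their normalized counterparts, including the adjustment applied in Experiment 2 when a few d′ values were negative, can be written compactly as follows (an illustrative sketch; the helper names are ours):

```python
def normalized_index(a, b):
    """Normalized difference score (a - b) / (a + b), as used for both the
    normalized expertise index (a = Car d', b = Bird d') and the normalized
    complementation effect (a = Identical d', b = Complementary d')."""
    return (a - b) / (a + b)

def adjusted_normalized_complementation(identical, complementary):
    """Experiment 2 variant: a constant of 1 is added to each d' before
    normalizing, to accommodate occasional negative d' values. Note that the
    numerator is unchanged, (I + 1) - (C + 1) = I - C; only the denominator
    is affected by the constant."""
    return normalized_index(identical + 1.0, complementary + 1.0)
```

For example, with Identical d′ = 3 and Complementary d′ = 1, the normalized complementation effect is (3 − 1) / (3 + 1) = 0.5.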
Table 2
 
Correlation, r, between the normalized complementation effect [(Identical − Complementary) / (Identical + Complementary)] and an independent measure of car sensitivity: Car d′ or normalized car expertise [(Car d′ − Bird d′) / (Car d′ + Bird d′)]. For each condition of both experiments, correlations are given for a subpopulation of participants, as well as for the entire population.
                   Bird d′ < 1                          All participants
                   Car d′     (C − B) / (C + B)         Car d′     (C − B) / (C + B)
Experiment 1: Normalized complementation effect: (I − C) / (I + C)
  Cars upright     0.34 (p = 0.06)   0.37*              0.22       0.18
  Faces upright    0.14       0.18                      0.06       0.06
Experiment 2: Normalized, adjusted complementation effect: [(I + 1) − (C + 1)] / [(I + 1) + (C + 1)]
  Cars upright     0.35*      0.28                      0.18       0.10
  Cars inverted    0.09       0.04                      −0.06      −0.19
  Faces upright    −0.04      0.18                      −0.11      0.06
  Faces inverted   −0.17      −0.18                     −0.12      −0.17

*p < 0.05.

Experiment 2. We sought to replicate the results from Experiment 1 with two key changes. First, the SF complementation effect was measured using d′ for all trials (rather than accuracy on same trials; Cheung et al., 2008). This allows us to control for potential response biases that individuals may have toward certain trial conditions and/or object categories. Second, we manipulated stimulus orientation to investigate the boundary conditions of the expertise effect. As before, we also consider whether removing participants with high bird-matching scores increases the expertise effect. 
A 2 × 2 × 2 ANOVA on d′ (within-subject factors: Category (face or car), SF content (identical or complementary), and Orientation (upright or inverted)) showed that faces led to better matching performance than cars (F(1,42) = 14.08, p = 0.0005), identical pairs were easier to match than complementary pairs (F(1,42) = 182.88, p < 0.0001), and performance on upright trials was better than on inverted trials (F(1,42) = 205.30, p < 0.0001; Figure 4a). Following up on the Category × SF content interaction (F(1,42) = 69.35, p < 0.0001) and the Category × Orientation interaction (F(1,42) = 18.37, p = 0.008) using Bonferroni post hoc tests (αPC = 0.0125), we found that the superior scores for face matching could be attributed to better performance on identical trials and upright trials compared with complementary trials or inverted trials, respectively. We further observed a three-way interaction between Category, SF content, and Orientation (F(1,42) = 6.97, p = 0.012), which we explored with post hoc tests (αPC = 0.00625). The effect of SF complementation was significant in all four conditions (upright and inverted faces and cars), and performance with faces was better than with cars only for upright identical trials. 
Figure 4
 
Experiment 2 results ( N = 43). (a) Mean d′ values for the same–different matching of identical and complementary faces and cars in their upright and inverted orientations. Error bars represent the standard error of the mean. (b) Correlation plot showing the relationship between the Complementation Effect (accuracy on Complementary trials subtracted from accuracy on Identical trials) in the upright car condition and the Car Expertise Index (Car d′ − Bird d′). Gray squares represent the subset of the population with Bird d′ scores greater than 1 ( n = 10 out of 43). The linear regression calculated for the remaining participants ( n = 33) shows a significant positive correlation ( r = 0.35, p < 0.05).
A 2 × 2 ANOVA computed on SF complementation scores (identical − complementary) confirmed the greater sensitivity of faces relative to cars (F(1,42) = 74.68, p < 0.0001) and of upright relative to inverted images (F(1,42) = 7.19, p = 0.01). We explored the interaction (F(1,42) = 8.06, p = 0.007) with post hoc tests (α_PC = 0.00625), finding a larger effect of SF complementation for upright faces than for the other three conditions (inverted faces and upright and inverted cars). All pairwise comparisons of the complementation effect across conditions were significant except the car orientation comparison (upright cars vs. inverted cars).
We again assessed the effect of expertise on the magnitude of the complementation effect. As in Experiment 1, correlations with the complementation effect are virtually identical whether we define car expertise using Car d′ alone or the Car Expertise Index, in which Bird d′ is subtracted from Car d′ (Table 1, Figure 4b). We also replicate the finding of a larger influence of car expertise on the complementation effect for upright cars when we exclude participants with high bird scores (d′ greater than 1; n = 10 out of 43). With a sample of participants varying in car expertise (0.31–2.59) but limited in their performance with birds (0.18–1), car expertise correlates with the magnitude of the complementation effect for upright cars (Figure 4b), but not for inverted cars or for faces in either orientation. In both experiments, tests using externally studentized residuals on data sets that either included or excluded participants with high bird scores failed to reveal any significant outlier.
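The steps above (excluding high bird scorers, correlating the complementation effect with the Car Expertise Index, and screening for outliers with externally studentized residuals) can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the data are random placeholders with ranges loosely matching those reported.

```python
# Sketch: expertise correlation with exclusion of high bird scorers, plus
# externally studentized residuals for outlier screening. Data are fake.
import numpy as np

rng = np.random.default_rng(0)
car_d = rng.uniform(0.3, 2.6, 43)                    # car-matching d'
bird_d = rng.uniform(0.1, 1.6, 43)                   # bird-matching d'
comp_effect = 0.1 * car_d + rng.normal(0, 0.1, 43)   # identical - complementary

keep = bird_d < 1                                    # exclude Bird d' > 1
x = car_d[keep] - bird_d[keep]                       # Car Expertise Index
y = comp_effect[keep]

r = np.corrcoef(x, y)[0, 1]                          # Pearson correlation

# Externally studentized residuals for the simple regression of y on x:
# t_i = e_i * sqrt((n - 3) / (SSE * (1 - h_i) - e_i^2)).
n = len(x)
b1, b0 = np.polyfit(x, y, 1)
e = y - (b0 + b1 * x)                                # raw residuals
h = 1 / n + (x - x.mean())**2 / np.sum((x - x.mean())**2)  # leverages
sse = np.sum(e**2)
t = e * np.sqrt((n - 3) / (sse * (1 - h) - e**2))

print(f"r = {r:.2f}, max |studentized residual| = {np.abs(t).max():.2f}")
```

A residual would be flagged as a significant outlier if its absolute value exceeded the critical t value (df = n − 3) at the chosen alpha, typically Bonferroni-corrected for n tests.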
As in Experiment 1, the relationship between complementation and expertise was not qualitatively different using normalized measures (see Table 2). Because there were a few negative scores in Experiment 2 (<1.5% of trials), we added a constant of 1 to all values before calculating the normalized complementation index.
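The normalized index just described can be written out explicitly. A minimal sketch, with hypothetical scores; the adjusted form adds the constant of 1 used when d′-based scores can be negative:

```python
# Sketch of the normalized complementation index from the text:
# (I - C) / (I + C), with an optional +1 adjustment to both scores
# when they can be negative (as for d' in Experiment 2).
def normalized_complementation(identical, complementary, adjust=False):
    if adjust:
        identical, complementary = identical + 1, complementary + 1
    return (identical - complementary) / (identical + complementary)

# Hypothetical values: accuracy-based (Experiment 1 style) and
# d'-based with adjustment (Experiment 2 style).
print(normalized_complementation(0.9, 0.7))                # 0.125
print(normalized_complementation(1.8, -0.2, adjust=True))  # 2.0 / 3.6
```

Normalizing by the sum makes the index a proportional rather than absolute difference, which reduces its dependence on overall performance level.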
Discussion
We found that the level of expertise with cars can predict the magnitude of the SF complementation effect. This represents a surprising perceptual deficit in car experts, especially since they would have known the names for most of the cars and would therefore have had access to a verbal code in addition to visual short-term memory. Despite the advantages associated with expertise, however, the perception of objects of expertise was more sensitive to the specific SF content in the image. Our results suggest that the large effect of complementation for upright faces results from our expertise with this category. 
This result stands in contrast to prior conclusions (Yue et al., 2006), though several explanations exist for why this earlier study was less sensitive to an expertise effect. In particular, the previous study relied on laboratory-trained participants with relatively weaker expertise than real-world experts and did not quantify the expertise of individual participants. Indeed, even in our real-world experts, the correlations between expertise and the SF complementation effect were not large. This is not surprising, because prior work suggests that the magnitude of the complementation effect is also influenced by factors independent of expertise, such as the symmetry of the images (Yue et al., 2006). 
Why are experts more sensitive to SF content than novices? We introduced the complementation effect within its original framework (Biederman & Kalocsai, 1997), in which the initial null result in the complementation paradigm with non-face objects led to the claim that only face representations include SF and orientation information. However, since then it has been shown that even novices with objects like cars or chairs (even inverted cars and chairs) can display significant SF complementation effects (Williams et al., 2009), suggesting that differences between face and object representations' sensitivity to SF information may not be qualitative. While it is not surprising that identical images of the same object are more easily matched than complementary images that vary considerably, it is less intuitive that matching of complementary images is even more difficult for experts. However, other paradigms measuring selective attention demonstrate that experts find it more difficult than novices to ignore a part of the image that they are told is task-irrelevant (Gauthier & Curby, 2005; Gauthier & Tarr, 1997; Gauthier et al., 2003, 1998; Hole, 1994; Tanaka & Farah, 1993; Young et al., 1987). Observers matching our filtered stimuli are trying to ignore differences caused by the filter and trying to match on the basis of the true underlying shape. As in other paradigms, experts find it particularly difficult to ignore irrelevant information. 
Such a failure of selective attention could occur at a perceptual locus (similar to what was originally proposed for the SF complementation effect). For instance, expert representations may be more Gabor-like (Biederman & Kalocsai, 1997) or holistic (Tanaka & Farah, 1993) than novice representations and image transformations—such as our filters—may be particularly hard to ignore in the encoding of these representations. However, the same effect could also have a more decisional locus if, for instance, experts have developed through experience an ingrained assumption that no part of two objects differs noticeably without the two objects actually being different. This question concerning the locus of holistic processing and similar effects has only recently been addressed directly, with proponents of both accounts (perceptual: Farah et al., 1998; McKone et al., 2007; Robbins & McKone, 2007; decisional: Richler, Gauthier, Wenger, & Palmeri, 2008; Wenger & Ingvalson, 2002). 
While awaiting resolution on this particular issue, we can offer the following explanation of our results: to an expert visual system trained to make fine discriminations, two complementary images represent inputs that are highly likely to signify two similar but distinct individuals. While we instruct our participants to ignore the transformation imposed by the complementary filters, experts appear to instinctively attend to or process, and consequently be influenced by, differences between images that would normally suggest distinct object identities. 
Conclusion
This study offers evidence that the SF complementation effect increases as a function of expertise with a category and, thus, may be especially large for faces because of our expertise in this domain. 
How does the evidence stand on whether face perception differs qualitatively from object perception? Several hallmarks of face perception have at least sometimes been found to depend on perceptual expertise. This is the case for the inversion effect (Curby et al., 2009; Diamond & Carey, 1986), holistic processing (Gauthier, Skudlarski et al., 2000; Gauthier & Tarr, 1997, 2002; Gauthier et al., 1998), configural processing (Busey & Vanderkolk, 2005), increased performance in categorizing individuals (Gauthier, Skudlarski et al., 2000; Gauthier, Tarr et al., 2000; Tanaka & Taylor, 2001), and sensitivity to SF information, as demonstrated here. In contrast, evidence suggesting that face perception nonetheless relies on face-specific mechanisms comes from studies with either (1) larger effects in faces than in objects of expertise, or (2) null effects of expertise in certain hallmarks of face perception. This work on an effect once thought to be unique to faces, then shown to be larger for faces than objects and for which prior tests of expertise rejected the role of experience, offers an opportunity to consider, and reject, these two arguments. 
First, given the significant linear relationship between expertise and many behavioral and neural hallmarks of face processing, the modularity of face perception cannot be supported in any strong way solely by evidence that an effect is larger for faces than other objects. The reason is simple: without a way to match the strength of expertise in another domain to that for faces, comparisons of the magnitude of an effect for faces vs. objects are meaningless. Consider that in this study, the mean complementation effect for faces (a difference of approximately 40% in accuracy in Experiment 1 and of approximately 1 unit of d′ in Experiment 2) falls near the upper limit obtained by our best car experts (Figures 3b and 4b). Thus, to argue that the magnitude of the face effect can be explained by expertise would only require the assumption that the average level of face expertise in our participants is at least comparable to the car expertise of our best car experts. This appears plausible given the time most of us devote to face perception in a lifetime. Unfortunately, many claims for the special nature of face perception rest on the interpretation of such quantitative differences (e.g., Bruce, Doyle, Dench, & Burton, 1991; Farah et al., 1998; Haig, 1984; Hosie, Ellis, & Haig, 1988; Yin, 1969).
Second, when evaluating the expertise account, our findings caution against over-interpretation of null effects, because such effects are based on specific operational definitions of expertise. Beyond typical concerns raised in the framework of null hypothesis significance testing, an important issue is that the power of a theoretical construct (experience) is assessed with specific measures of expertise. Here, we used a measure of expertise that predicts other hallmarks of face perception in behavioral studies (Curby et al., 2009; Curby & Gauthier, 2007), functional MRI studies (Gauthier et al., 1999; Gauthier, Tarr et al., 2000; Tarr & Gauthier, 2000), and electrophysiological studies (Gauthier et al., 2003; Rossion, Gauthier et al., 2002; Tanaka & Curran, 2001; Tanaka & Taylor, 2001). Few alternatives to this method of quantifying expertise have been tested, and studies that do not use this approach often revert to the less statistically powerful contrast of two groups, experts and novices, defined by self-report or some other subjective criterion. However, expertise may be a matter of degree regardless of domain; indeed, growing evidence highlights a broad distribution of face recognition abilities in the general population (e.g., Russell, Duchaine, & Nakayama, 2009). Compared to other fields dealing with individual differences, work on expertise is still in its infancy, and measures of expertise are clearly imperfect. For instance, our finding that car expertise effects are more pronounced when participants with high bird-matching scores are removed (even when only car d′ is used as a predictor) suggests that quantifying expertise in a given domain would likely benefit from sampling performance across more than two domains.
On the one hand, better performance for cars and birds relative to many other domains could reflect expertise in both domains. On the other hand, an observer who performs very well with cars and birds, but no better than with several other domains, is unlikely to qualify as a genuine expert. He or she may instead score high on a general factor relevant to visual perception (similar to “g” for intelligence). Underestimating these challenges of measurement can reduce expertise effects and even lead to null effects. Critically, however, these problems are limitations of our measurements of expertise, not of the underlying expertise account of face specialization.
It is important to consider the cost of wrongly assuming that faces are special. Such a conclusion discourages the search for models that can account for both novice and expert performance in any domain. It creates subfields of researchers less likely to influence each other's work. The suggestion that face perception differs qualitatively from that of other objects for reasons that have nothing to do with experience is a strong claim that requires strong evidence. Any domain-specific model of face perception needs to account for why expertise can predict some putatively face-specific effects (e.g., recruitment of the fusiform gyrus, holistic processing, shift of the entry level, SF complementation effect). If it cannot, it should at least present evidence of a new hallmark of face processing that cannot be explained by expertise under conditions where expertise can still predict these other effects. Therefore, we leave open the possibility that face perception is special in some as yet undetermined way but propose that the criteria for accepting this possibility be raised substantially relative to current standards. 
Supplementary Materials
WilliamsGauthier_Supplement
Acknowledgments
This work was supported by the Temporal Dynamics of Learning Center (NSF Science of Learning Center SBE-0542013) and by a grant from the James S. McDonnell Foundation to the Perceptual Expertise Network.
Commercial relationships: none. 
Corresponding author: Rankin Williams McGugin. 
Email: Rankin.Williams@vanderbilt.edu. 
Address: Department of Psychology, Vanderbilt University, Nashville, TN 37203, USA. 
References
Biederman, I. (1987). Recognition by components: A theory of human image understanding. Psychological Review, 94, 115–147.
Biederman, I., & Ju, G. (1988). Surface vs. edge-based determinants of visual recognition. Cognitive Psychology, 20, 38–64.
Biederman, I., & Kalocsai, P. (1997). Neurocomputational bases of object and face recognition. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 352, 1203–1219.
Bruce, V., Doyle, T., Dench, N., & Burton, M. (1991). Remembering facial configurations. Cognition, 38, 109–144.
Bruce, V., Hanna, E., Dench, N., Healey, P., & Burton, M. (1992). The importance of “mass” in line drawings of faces. Applied Cognitive Psychology, 6, 619–628.
Bukach, C. M., Gauthier, I., & Tarr, M. J. (2006). Beyond faces and modularity: The power of an expertise framework. Trends in Cognitive Sciences, 10, 159–166.
Busey, T. A., & Vanderkolk, J. R. (2005). Behavioral and electrophysiological evidence for configural processing in fingerprint experts. Vision Research, 45, 431–448.
Carey, S., & Diamond, R. (1994). Are faces perceived as configurations more by adults than by children? Visual Cognition, 1, 253–274.
Carey, S., Diamond, R., & Woods, B. (1981). Development of face perception: A maturational component? Developmental Psychology, 16, 257–269.
Cheung, O., Richler, J. J., Palmeri, T. J., & Gauthier, I. (2008). Revisiting the role of spatial frequencies in the holistic processing of faces. Journal of Experimental Psychology: Human Perception and Performance, 34, 1327–1336.
Collin, C. A., Liu, C. H., Troje, N. F., McMullen, P. A., & Chaudhuri, A. (2004). Face recognition is affected by similarity in spatial frequency range to a greater degree than within-category object recognition. Journal of Experimental Psychology: Human Perception and Performance, 30, 975–987.
Curby, K. M., & Gauthier, I. (2007). A visual short-term memory advantage for faces. Psychonomic Bulletin & Review, 14, 620–628.
Curby, K., Glazek, K., & Gauthier, I. (2009). Perceptual expertise increases visual short-term memory capacity. Journal of Experimental Psychology: Human Perception and Performance, 35, 94–107.
Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117.
Farah, M. J., Wilson, K. D., Drain, H. M., & Tanaka, J. W. (1998). What is “special” about face perception? Psychological Review, 105, 482–498.
Fiser, J., Subramaniam, S., & Biederman, I. (2001). Size tuning in the absence of spatial frequency tuning in object recognition. Vision Research, 41, 1931–1950.
Gaspar, C. M., Bennett, P. J., & Sekuler, A. B. (2008). The effects of face inversion and contrast-reversal on efficiency and internal noise. Vision Research, 48, 1084–1095.
Gauthier, I., Anderson, A. W., Tarr, M. J., Skudlarski, P., & Gore, J. C. (1997). Levels of categorization in visual recognition studied using functional magnetic resonance imaging. Current Biology, 7, 645–651.
Gauthier, I., & Curby, K. M. (2005). A perceptual traffic jam on highway N170: Interference between face and car expertise. Current Directions in Psychological Science, 14, 30–33.
Gauthier, I., Curby, K. M., Skudlarski, P., & Epstein, R. (2005). Activity of spatial frequency channels in the fusiform face-selective area relates to expertise in car recognition. Cognitive, Affective, & Behavioral Neuroscience, 5, 222–234.
Gauthier, I., Curran, T., Curby, K. M., & Collins, D. (2003). Perceptual interference supports a non-modular account of face processing. Nature Neuroscience, 6, 428–432.
Gauthier, I., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3, 191–197.
Gauthier, I., & Tarr, M. J. (1997). Becoming a “Greeble” expert: Exploring mechanisms for face recognition. Vision Research, 37, 1673–1682.
Gauthier, I., & Tarr, M. J. (2002). Unraveling mechanisms for expert object recognition: Bridging brain activity and behavior. Journal of Experimental Psychology: Human Perception and Performance, 28, 431–446.
Gauthier, I., Tarr, M. J., Anderson, A. W., & Gore, J. C. (1999). Activation of the middle fusiform “face area” increases with expertise in recognizing novel objects. Nature Neuroscience, 2, 568–573.
Gauthier, I., Tarr, M. J., Moylan, J., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). The fusiform “face area” is part of a network that processes faces at the individual level. Journal of Cognitive Neuroscience, 12, 495–504.
Gauthier, I., Williams, P., Tarr, M. J., & Tanaka, J. (1998). Training “Greeble” experts: A framework for studying expert object recognition processes. Vision Research, 38, 2401–2428.
Goffaux, V., Gauthier, I., & Rossion, B. (2003). Spatial scale contribution to early visual differences between face and object processing. Cognitive Brain Research, 16, 416–424.
Goffaux, V., Hault, B., Michel, C., Vuong, Q. C., & Rossion, B. (2005). The respective role of low and high spatial frequencies in supporting configural and featural processing of faces. Perception, 34, 77–86.
Goffaux, V., & Rossion, B. (2006). Faces are “spatial”: Holistic face perception is supported by low spatial frequencies. Journal of Experimental Psychology: Human Perception and Performance, 32, 1023–1039.
Grill-Spector, K., Knouf, N., & Kanwisher, N. (2004). The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7, 555–562.
Haig, N. D. (1984). The effect of feature displacement on face recognition. Perception, 13, 505–512.
Hayes, A. (1988). Identification of two-tone images: Some implications for high- and low-spatial-frequency processes in human vision. Perception, 17, 429–436.
Hole, G. J. (1994). Configurational factors in the perception of unfamiliar faces. Perception, 23, 65–74.
Hosie, J. A., Ellis, H. D., & Haig, N. D. (1988). The effect of feature displacement on the perception of well-known faces. Perception, 17, 461–474.
Kanwisher, N. (2000). Domain specificity in face perception. Nature Neuroscience, 3, 759–763.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Liu, C. H., Collin, C. A., Rainville, S. J. M., & Chaudhuri, A. (2000). The effects of spatial frequency overlap on face recognition. Journal of Experimental Psychology: Human Perception and Performance, 29, 729–743.
McKone, E., Kanwisher, N., & Duchaine, B. C. (2007). Can generic expertise explain special processing for faces? Trends in Cognitive Sciences, 11, 8–15.
Nederhouser, M., Yue, X., Mangini, M. C., & Biederman, I. (2007). The effect of contrast reversal on recognition is unique to faces, not objects. Vision Research, 47, 2134–2142.
Nishimura, M., & Maurer, D. (2008). The effect of categorization on sensitivity to second-order relations in novel objects. Perception, 37, 584–601.
Richler, J. J., Gauthier, I., Wenger, M. J., & Palmeri, T. J. (2008). Holistic processing of faces: Perceptual and decisional components. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 328–342.
Robbins, R., & McKone, E. (2007). No face-like processing for objects-of-expertise in three behavioral tasks. Cognition, 103, 34–79.
Rossion, B., Curran, T., & Gauthier, I. (2002). A defense of the subordinate-level account for the N170 component. Cognition, 85, 189–196.
Rossion, B., Gauthier, I., Goffaux, V., Tarr, M. J., & Crommelinck, M. (2002). Expertise training with novel objects leads to left-lateralized face-like electrophysiological responses. Psychological Science, 13, 250–257.
Rossion, B., Kung, C. C., & Tarr, M. J. (2004). Visual expertise with nonface objects leads to competition with the early perceptual processing of faces in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 101, 14521–14526.
Russell, R., Duchaine, B., & Nakayama, K. (2009). Super-recognizers: People with extraordinary face recognition ability. Psychonomic Bulletin & Review, 16, 252–257.
Subramaniam, S., & Biederman, I. (1997). Does contrast reversal affect object identification? Investigative Ophthalmology & Visual Science, 38, 998.
Tanaka, J. W., & Curran, T. (2001). A neural basis for expert object recognition. Psychological Science, 12, 43–47.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, 46, 225–245.
Tanaka, J. W., & Sengco, J. A. (1997). Features and their configuration in face recognition. Memory & Cognition, 25, 583–592.
Tanaka, J. W., & Taylor, M. (2001). Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23, 457–482.
Tarr, M. J., & Gauthier, I. (2000). FFA: A flexible fusiform area for subordinate-level visual processing automatized by expertise. Nature Neuroscience, 3, 764–769.
Wenger, M. J., & Ingvalson, E. M. (2002). A decisional component of holistic encoding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 872–892.
Williams, N. R., Willenbockel, V., & Gauthier, I. (2009). Sensitivity to spatial frequency content is not specific to face perception. Vision Research, 49, 2353–2362.
Wong, A. C., Palmeri, T. J., & Gauthier, I. (2009). Conditions for face-like expertise with objects: Becoming a Ziggerin expert, but which type? Psychological Science, 20, 1108–1117.
Xu, Y. (2005). Revisiting the role of the fusiform face area in visual expertise. Cerebral Cortex, 15, 1234–1242.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Young, A. W., Hellawell, D., & Hay, D. (1987). Configural information in face perception. Perception, 16, 747–759.
Yue, X., Tjan, B. S., & Biederman, I. (2006). What makes faces special? Vision Research, 46, 3802–3811.
Figure 1
 
Spatial frequency (SF) and orientation filtering. First, the Fast Fourier Transform (FFT) is applied to a raw image (either face or car). Two complementary filters (8 × 8 radial matrices) are then applied to the Fourier-transformed image to preserve alternating combinations of the SF–orientation content from the raw image. The information preserved with each filter is represented by the white checkers. Finally, when returned to the spatial domain via the inverse FFT, the resulting complementary pair of images shares no overlapping combinations of SF and orientation information.
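The filtering procedure described in this caption can be sketched in a few lines. This is an illustrative reconstruction, not the published stimulus code: the 8 × 8 radial matrices are approximated here by a simple linear banding of spatial frequency and orientation (the original spacing may differ), and the test image is random noise rather than a face or car photograph.

```python
# Sketch: complementary SF-orientation filters. Fourier space is partitioned
# into an 8 x 8 checkerboard of SF bands x orientation sectors, so the two
# filtered images share no SF-orientation combination.
import numpy as np

def complementary_pair(img):
    n = img.shape[0]                               # assume a square grayscale image
    f = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    radius = np.hypot(xx, yy)                      # spatial frequency
    angle = np.mod(np.arctan2(yy, xx), np.pi)      # orientation, folded to [0, pi)
    band = np.clip((radius / (n / 2) * 8).astype(int), 0, 7)   # 8 SF bands
    sector = np.clip((angle / np.pi * 8).astype(int), 0, 7)    # 8 orientation sectors
    mask_a = (band + sector) % 2 == 0              # checkerboard of cells
    # Folding the angle to [0, pi) keeps the masks conjugate-symmetric,
    # so the inverse transforms are real up to numerical error.
    a = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask_a)))
    b = np.real(np.fft.ifft2(np.fft.ifftshift(f * ~mask_a)))
    return a, b

img = np.random.rand(128, 128)
a, b = complementary_pair(img)
# Because the masks partition Fourier space, the two complements sum back
# (approximately) to the original image.
print(np.allclose(a + b, img, atol=1e-6))
```

The defining property of the manipulation is visible in the check at the end: each complement carries half of the SF-orientation cells, with no overlap between the two.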
Figure 2
 
Distribution of car d′ and bird d′ values. (a) Experiment 1 (N = 39). Scatterplot showing the correlation between car d′ (SD = 0.83) and bird d′ (SD = 0.24) in Experiment 1: r = 0.18, n.s. (b) Experiment 2 (N = 43). Scatterplot showing the correlation between car d′ (SD = 0.56) and bird d′ (SD = 0.31) in Experiment 2: r = 0.08, n.s.
Figure 3
 
Experiment 1 results ( N = 39). (a) Mean accuracy values for the same–different matching of identical and complementary faces and cars. Error bars represent the standard error of the mean. (b) Correlation plot showing the relationship between the Complementation Effect (accuracy on Identical trials − accuracy on Complementary trials) in the upright car condition and the Car Expertise Index (Car d′ − Bird d′). Gray squares represent the subset of the population with Bird d′ scores greater than 1 (n = 8 out of 39). The linear regression is calculated considering the remaining participants ( n = 31) and shows a significant positive correlation ( r = 0.42, p < 0.05).
Table 1
Correlation, r, between the complementation effect (performance on identical trials − performance on complementary trials) and an independent measure of car sensitivity (Car d′ or Delta d′). For each condition of both experiments, correlations are given for a subpopulation of participants, as well as the entire population.

                    Bird d′ < 1                All participants
                    Car d′   Δd′ (Car − Bird)  Car d′   Δd′ (Car − Bird)
Experiment 1
  Cars upright      0.41*    0.42*             0.35*    0.32#
  Faces upright     0.26     0.26              0.21     0.20
Experiment 2
  Cars upright      0.36*    0.35*             0.19     0.14
  Cars inverted     0.20     0.21              0.03     −0.05
  Faces upright     −0.06    0.04              −0.14    −0.09
  Faces inverted    −0.15    −0.12             −0.11    −0.18

Note: #p = 0.05; *p < 0.05.
Table 2
Correlation, r, between the normalized complementation effect [(Identical − Complementary) / (Identical + Complementary)] and an independent measure of car sensitivity: Car d′ or normalized car expertise [(Car d′ − Bird d′) / (Car d′ + Bird d′)]. For each condition of both experiments, correlations are given for a subpopulation of participants, as well as for the entire population.

                    Bird d′ < 1                  All participants
                    Car d′          (C − B)/(C + B)  Car d′   (C − B)/(C + B)
Experiment 1: Normalized complementation effect, (I − C) / (I + C)
  Cars upright      0.34 (p = 0.06)  0.37*           0.22     0.18
  Faces upright     0.14             0.18            0.06     0.06
Experiment 2: Normalized, adjusted complementation effect, [(I + 1) − (C + 1)] / [(I + 1) + (C + 1)]
  Cars upright      0.35*            0.28            0.18     0.10
  Cars inverted     0.09             0.04            −0.06    −0.19
  Faces upright     −0.04            0.18            −0.11    0.06
  Faces inverted    −0.17            −0.18           −0.12    −0.17

Note: *p < 0.05.