The role of color in expert object recognition

Simen Hagen, Quoc C. Vuong, Lisa S. Scott, Tim Curran, James W. Tanaka

Journal of Vision, August 2014, Vol. 14(9):9. doi:https://doi.org/10.1167/14.9.9
Abstract

In the current study, we examined how color knowledge in a domain of expertise influences the accuracy and speed of object recognition. In Experiment 1, expert bird-watchers and novice participants categorized common birds (e.g., robin, sparrow, cardinal) at the family level of abstraction. The bird images were shown in their natural congruent color, nonnatural incongruent color, and gray scale. The main finding was that color affected the performance of bird experts and bird novices, albeit in different ways. Although both experts and novices relied on color to recognize birds at the family level, analysis of the response time distribution revealed that color facilitated expert performance in the fastest and slowest trials whereas color only helped the novices in the slower trials. In Experiment 2, expert bird-watchers were asked to categorize congruent color, incongruent color, and gray scale images of birds at the more subordinate, species level (e.g., Nashville warbler, Wilson's warbler). The performance of experts was better with congruent color images than with incongruent color and gray scale images. As in Experiment 1, analysis of the response time distribution showed that the color effect was present in the fastest trials and was sustained through the slowest trials. Collectively, the findings show that experts have ready access to color knowledge that facilitates their fast and accurate identification at the family and species level of recognition.

Introduction
Human object recognition is the end product of a set of visual processes that first organize the visual input into an intact percept before interpreting its meaning. Early specialized neural circuitry is devoted to extracting and separating visual primitives, such as motion, depth, luminance, and color (Hubel & Wiesel, 1959, 1977; M. S. Livingstone & Hubel, 1987; M. Livingstone & Hubel, 1988; Schiller, Finlay, & Volman, 1976). However, the extent to which these processes contribute to the later stages of object recognition, in which the input percept is matched with an object memory, is still debated. Although traditional theories of object recognition emphasize the importance of shape and de-emphasize the role of color as a useful cue in this matching process (e.g., Biederman & Ju, 1988), more recent evidence suggests that color can be a useful cue under certain conditions (see Bramão, Reis, Petersson, & Faísca, 2011, for a review). However, the extent to which the effect of color on object recognition is a product of experience with a specific object domain has not yet been studied.
Extensive experience with an object domain is associated with a shift in recognition strategy by which color information potentially becomes accentuated (Gauthier & Tarr, 1997; Johnson & Mervis, 1997; J. W. Tanaka & Taylor, 1991). The point at which an object percept initially indexes an object memory (i.e., the entry point of recognition) is typically at the basic category level (e.g., dog, bird, or car) (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). This is the level at which the structural properties (i.e., global shape) of an object category minimize the differences among its members (e.g., all dogs) while maximizing the differences across object categories (e.g., dogs vs. birds vs. cars). Thus, the diagnostic shape properties of categories at the basic level drive the entry point of recognition. However, individuals with expertise in visually discriminating objects of a certain domain (i.e., object experts) show a downward shift of recognition from the basic level to the more specific, subordinate category level (e.g., Ford Focus, Labrador retriever, or sparrow) (J. W. Tanaka & Taylor, 1991). At this level, the shapes of different object categories (e.g., sparrows, warblers, finches) overlap to a larger degree and are therefore less effective at indexing a certain category. Given that shape information is less diagnostic for exemplars of a category, it has been speculated that subordinate recognition may rely to a larger degree on other cues, such as color information (Jolicoeur, Gluck, & Kosslyn, 1984). For example, bird-watchers, whose objective is to make quick and accurate identifications of visually homogeneous objects (i.e., objects with subtle differences in global and internal shapes) at a species-specific level (e.g., Nashville warbler, American tree sparrow), are reported to be more likely than bird novices to list surface information (e.g., color) as a diagnostic cue for recognition (J. W. Tanaka & Taylor, 1991). Thus, the process of obtaining object expertise (i.e., forcing a downward shift of recognition from the basic to the subordinate level) may act as a catalyst for coding color-rich expert object representations.
In this paper, we examine the role that color information plays in object recognition and whether it can be modulated by experience. We chose bird-watching as a domain of investigation for several reasons. First, bird-watching requires quick and accurate recognition of visually homogeneous objects (in terms of their global shape) at the subordinate species level (e.g., Nashville warbler), at which surface details (e.g., color) might play a critical role in making within-category identifications. Second, birds carry diagnostic color information that can be used to aid recognition. Third, experienced bird-watchers readily report color information in feature-listing tasks (J. W. Tanaka & Taylor, 1991). For these reasons, experienced bird-watchers are a good population for examining the role of experience in modulating color effects on object recognition.
The role of color in object recognition
A distinction is often made between early and late stages of visual processing. For our purposes, the early processes are those associated with the production of an intact percept through edge detection, texture segmentation, and figure–ground segregation (i.e., grouping elements of a component object together while separating them from elements belonging to other component objects or to the background) (Marr, 1982). These early processes can be facilitated by color information (e.g., Cavanagh, 1987; Gegenfurtner & Rieger, 2000). For instance, Gegenfurtner and Rieger (2000) showed that participants were better at encoding rapidly presented colored images of natural scenes than gray scale images. The authors suggested that color information provides an additional perceptual cue upon which the form and structure of the scene can be defined. Similarly, color information could potentially help observers decompose objects into parts. Unlike the later stages of recognition, processes in the early stages of visual recognition should not be affected by the extent to which color appropriately matches the real-world object, because the percept has not yet been matched with representations in memory. Thus, early visual processes should benefit from both congruent and incongruent color, given that these processes occur before stored object color knowledge has been accessed.
In contrast to early processes, later processes involve the recognition of the object by matching the percept with representations stored in long-term memory. Whether color information contributes to this matching process has been controversial. On the one hand, edge-based theories of object recognition propose that object representations are stored in memory as simple shape and edge information and therefore cannot be indexed by surface information. One example is Biederman's (1987) recognition-by-components model, which postulates that objects are represented by simple geometrical shapes (e.g., cylinders, bricks, wedges, cones, circles, rectangles) called geons. Initial findings indicated that color effects on object recognition were only observed in tasks in which name retrieval was necessary (Biederman & Ju, 1988; Davidoff & Ostergaard, 1988; Ostergaard & Davidoff, 1985). Based on this evidence, Biederman and Ju (1988) theorized that color information did not facilitate the initial point of recognition but had an effect at a later, postrecognition stage related to verbal knowledge and name retrieval.
In contrast to edge-based theories, shape-plus-surface theories propose that color information can facilitate the initial recognition of objects (Bramão, Faísca, Forkstam, Reis, & Petersson, 2010; Joseph, 1997; Joseph & Proffitt, 1996; Lewis, Pearson, & Khuu, 2013; Nagai & Yokosawa, 2003; Naor-Raz, Tarr, & Kersten, 2003; Price & Humphreys, 1989; Rossion & Pourtois, 2004; J. W. Tanaka & Presnell, 1999; J. Tanaka, Weiskopf, & Williams, 2001). J. W. Tanaka and Presnell (1999) reported that color could indeed facilitate the recognition of some objects. Similar to Biederman and Ju's (1988) work, they classified objects as either not associated with a specific color (low color diagnosticity) or associated with a specific color (high color diagnosticity). However, unlike Biederman and Ju, J. W. Tanaka and Presnell used a more controlled approach to determine objects' color diagnosticity (i.e., normative data as opposed to a panel of three judges), which led them to categorize some of Biederman and Ju's high-color diagnostic objects (e.g., fork) as low in color diagnosticity. J. W. Tanaka and Presnell demonstrated that participants were faster to identify congruently colored versions of high-color diagnostic objects than achromatic and incongruently colored versions. In contrast, participants were no faster to identify color versions of low-color diagnostic objects than achromatic and incongruently colored versions (see Nagai & Yokosawa, 2003, for a replication). Systematically degrading shape information by image blurring impaired the recognition of high-color diagnostic objects less than low-color diagnostic objects, showing that both shape and color cues can aid the recognition of color-diagnostic objects. Thus, although color plays a role in low-level and high-level vision, only the latter is sensitive to color congruency (i.e., the correct color of the object).
In the real world, color diagnosticity is correlated with category membership. Whereas color is frequently diagnostic for objects from natural categories (e.g., fruits, vegetables), it is less so for human-made objects (e.g., cars, furniture) (Price & Humphreys, 1989; Wurm, Legge, Isenberg, & Luebker, 1993). However, Nagai and Yokosawa (2003) found that, regardless of object category (natural vs. human-made), participants showed a color effect for high-color diagnostic objects but not for low-color diagnostic objects. The importance of color diagnosticity is supported by a meta-analysis examining the influence of various moderator variables (e.g., color diagnosticity, experimental task, object category) on color effects in object recognition (Bramão et al., 2011). Thus, color diagnosticity appears to be an important moderator of the role of color in object recognition.
In the present experiments, we tested the interaction between color diagnosticity and expertise. Specifically, we were interested in whether color knowledge acquired through extensive perceptual experience influences the recognition of objects in the domain of expertise. To test this question, bird experts and novices were asked to recognize familiar birds shown in their congruent color, an incongruent color, or gray scale at either the subordinate family level (e.g., hummingbird, woodpecker, sparrow; Experiment 1) or at the species level (e.g., Tennessee warbler, Wilson's warbler; Experiment 2). We hypothesized that, as a result of extensive experience discriminating species of birds, the experts would be more affected by color congruency than the novices. Moreover, if access to color information is automatic, the experts should demonstrate a color advantage even at their fastest response times. Alternatively, if color plays only a low-level role in segmenting the internal details of the object, both experts and novices should show an advantage for congruently and incongruently colored birds relative to gray scale versions.
Experiment 1
In Experiment 1, the effects of color on subordinate family-level categorization of birds (e.g., robin, sparrow, cardinal) were assessed with bird experts and bird novices. The two groups were tested in a category verification task in which they made YES/NO judgments about the correspondence between a category label and a subsequently presented object image. For example, if the label “Cardinal” preceded the image of a cardinal, the correct answer was YES (i.e., the label and the image corresponded). In contrast, if the label “Robin” preceded the image of a cardinal, the correct answer was NO (i.e., the label and the image did not correspond).
We expected that bird experts would be faster and more accurate when categorizing the birds relative to the novices. Moreover, as a result of extensive experience and color knowledge of birds, we predicted that the bird experts would recognize congruently colored birds faster than gray scale and incongruently colored birds. 
Methods
Participants
Fifteen expert participants, 23 to 62 years of age (five female, M = 38.13, SD = 14.78), were selected based on nominations from their bird-watching peers. Fifteen novice control participants, matched to the experts for age (25–66 years; six female, M = 37.27, SD = 14.76) and education, were also selected. The novice participants had no prior experience in bird-watching. The data from one additional expert participant were lost due to technical issues, and three additional novice participants were dropped from the study due to insufficient knowledge of common bird species. Participants received monetary compensation for their participation.
To assess the level of bird expertise in our participants, we used the Blackstone Expertise Test, a brief sequential matching task with images of birds. A local bird-watcher helped select birds common to the region that ranged from easy to more difficult to recognize. The test consisted of 48 trials. The experts obtained a higher d' score (M = 1.96) on this test than the novices (M = 0.71), t = 6.84, p < 0.001.
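For reference, d' in such a matching task is typically computed as the difference between the z-transformed hit and false-alarm rates. The sketch below is our illustration only; the trial counts and the log-linear correction are assumptions, not values reported for the Blackstone test.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from raw counts in a same/different matching task.

    Adds 0.5 to each cell (log-linear correction) so that hit or
    false-alarm rates of exactly 0 or 1 stay finite; this correction
    is an assumption, not a detail reported in the paper.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical split of the 48 test trials:
print(d_prime(hits=17, misses=3, false_alarms=6, correct_rejections=22))
```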
Stimuli
Three exemplars from each of eight common bird species (cardinal, oriole, hummingbird, robin, sparrow, swallow, woodpecker, wren), for a total of 24 images, were collected in part from the Internet and in part from an existing bird data set (Wahlheim, Teune, & Jacoby, 2011). The birds selected were among the 20 most frequently mentioned birds in a category norms study by Battig and Montague (1969).
Using customized Matlab code, the images were transformed in the L*a*b* color space to create an incongruent-color condition and a gray scale condition. This color space has been used in previous studies investigating color effects on scene recognition (Oliva & Schyns, 2000). The L*a*b* color space separates luminance onto its own axis (L*) and chroma onto the two remaining axes (a* and b*). The a* axis extends from red to green, and the b* axis extends from blue to yellow. Thus, color can be transformed while leaving luminance values relatively intact. Moreover, this color space reflects the structure of the color and luminance pathways at the retinogeniculate stage. The color-incongruent condition was created by flipping a color axis (e.g., red to green or blue to yellow, or vice versa), by swapping the two color axes (e.g., blue to red), or by both swapping and flipping the color axes. The decision of which transformation to use depended on which transformation created the subjectively best incongruent condition. Figure 1 illustrates the stimuli and the transformations used in this experiment.
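As an illustration of these manipulations, here is a Python sketch using scikit-image. The original study used customized Matlab code; the function name, mode labels, and file name below are our own assumptions.

```python
import numpy as np
from skimage import color, io

def transform_color(rgb_img, mode):
    """Return an incongruent-color or gray scale version of an RGB image.

    Works in CIE L*a*b* space so that luminance (L*) is left intact
    while chroma (a*: red-green, b*: blue-yellow) is flipped, swapped,
    or removed.
    """
    lab = color.rgb2lab(rgb_img)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    if mode == "gray":
        a, b = np.zeros_like(a), np.zeros_like(b)  # remove chroma entirely
    elif mode == "flip_a":
        a = -a                                     # red <-> green
    elif mode == "flip_b":
        b = -b                                     # blue <-> yellow
    elif mode == "swap":
        a, b = b, a                                # e.g., blue -> red
    elif mode == "swap_flip":
        a, b = -b, -a                              # swap and flip both axes
    out = np.stack([L, a, b], axis=-1)
    return np.clip(color.lab2rgb(out), 0.0, 1.0)

# Hypothetical usage on one stimulus image:
img = io.imread("cardinal_01.png")[..., :3] / 255.0
incongruent = transform_color(img, mode="flip_a")
```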
Figure 1

Examples of the stimuli used in Experiment 1. Top row shows the congruently colored birds. Middle row shows the gray scale versions. Bottom row shows the incongruent versions.
The color transformation chosen for a specific bird (e.g., cardinal) was the same for each exemplar of that bird (e.g., cardinal 01, cardinal 02, cardinal 03). This kept the color statistics of the incongruent condition the same as those of the congruent condition (e.g., the cardinal was presented an equal number of times in red, its congruent color, and in green, its incongruent color). However, the type of transformation (e.g., swap vs. flip) varied across the different bird families (e.g., robin vs. cardinal). The benefit of varying the kind of color transformation is that it prevents participants from learning the mapping between the original color and its transformation. Images were cropped and scaled to fit within a frame of 250 × 250 pixels and pasted on a gray background using Adobe Photoshop CS4. Images subtended a visual angle of approximately 6.81° vertically and 6.57° horizontally.
Procedure
Participants were tested in a category verification task. At the beginning of each trial, a ready prompt (i.e., “Get Ready”) was displayed for 1.0 s before it was replaced by a category label (e.g., “Robin”). After 2.5 s, the category label was replaced by an image of a bird that remained on the screen until the participant made a YES/NO judgment. If the label and the image corresponded (e.g., the label “Robin” was followed by an image of a robin), the participant pressed the button labeled YES (“m” on the keyboard). If the label and the image did not correspond (e.g., the label “Robin” was followed by an image of a cardinal), the participant pressed the button labeled NO (“c” on the keyboard). Before the task started, the participants were told which birds they would see in the experiment and were instructed to respond as quickly and as accurately as possible. Crucially, they were told that the birds would be presented in congruent color, incongruent color, or gray scale, and they were instructed to disregard color and solve the task by using other kinds of information (e.g., external and internal shape information).
The foils (e.g., the label “Robin” followed by the image of an oriole) were based on the names of the bird species in the experiment. Thus, the only labels that could appear in the experiment were the following: “Cardinal,” “Oriole,” “Wren,” “Robin,” “Hummingbird,” “Woodpecker,” “Swallow,” and “Sparrow.” In a given block, every bird was used as a foil exactly three times, and each foil was used approximately twice for each bird (e.g., “Robin” was paired with the image of a sparrow twice throughout the experiment). Each bird was used as a foil and as a correct label an equal number of times.
Each bird exemplar (e.g., Cardinal 01) was displayed once in a matching trial and once in a nonmatching trial in each of the three color conditions (congruent, incongruent, gray scale). Thus, each bird exemplar was presented three times in YES trials and three times in NO trials. Three blocks were created to prevent the same bird exemplar from being presented in different color conditions close in time. Each block consisted of 48 trials (eight bird families, three exemplars, two types of trial) for a total of 144 trials. The order of the blocks was counterbalanced across participants. 
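To make the counterbalancing concrete, here is a hypothetical reconstruction of the trial-list structure in Python. The color rotation over blocks and the random foil choice are our simplifying assumptions; the actual experiment balanced foil assignments exactly as described above.

```python
import random

FAMILIES = ["cardinal", "oriole", "hummingbird", "robin",
            "sparrow", "swallow", "woodpecker", "wren"]
COLORS = ["congruent", "gray scale", "incongruent"]

def build_blocks(seed=0):
    """Assemble three 48-trial blocks (8 families x 3 exemplars x YES/NO).

    A Latin-square rotation gives each exemplar a different color
    condition in each block, so across the three blocks every exemplar
    appears in every color condition once as a YES trial and once as a
    NO trial (144 trials in total).
    """
    rng = random.Random(seed)
    blocks = []
    for block in range(3):
        trials = []
        for family in FAMILIES:
            for exemplar in range(3):
                color = COLORS[(block + exemplar) % 3]  # rotate over blocks
                for match in ("YES", "NO"):
                    # Foil labels are drawn at random here; the real design
                    # balanced how often each family served as a foil.
                    label = family if match == "YES" else rng.choice(
                        [f for f in FAMILIES if f != family])
                    trials.append((label, f"{family}_{exemplar + 1:02d}",
                                   color, match))
        rng.shuffle(trials)
        blocks.append(trials)  # 48 trials per block
    return blocks
```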
Results
Accuracy
Trials with response times three standard deviations above the overall mean were excluded from all of the following analyses. In addition, we excluded participant data for any bird family that was miscategorized on 50% (or more) of trials in the congruent color condition. In total, six bird families were excluded across five novice participants (two wren, two oriole, one sparrow, one swallow), amounting to 5% of the total trials for the novices.
The accuracy data for experts and novices were analyzed in a mixed-design analysis of variance (ANOVA) using color (congruent, gray scale, incongruent) and trial type (YES, NO) as within-subjects factors and group (novices, experts) as a between-subjects factor. The significant main effect of trial type, F(1, 28) = 13.57, p = 0.001, partial eta2 = 0.33, demonstrated that NO trials (M = 96%, SE = 0.6%) were more accurate than YES trials (M = 92%, SE = 0.9%). The significant main effect of color, F(2, 56) = 8.18, p = 0.001, partial eta2 = 0.23, demonstrated that the color manipulations had a differential influence on the accuracy rates. The significant main effect of group, F(1, 28) = 97.09, p < 0.001, partial eta2 = 0.78, demonstrated that the experts were more accurate than the novices. 
Color interacted with group, F(2, 28) = 4.39, p = 0.017, partial eta2 = 0.14, indicating that the color manipulations had a differential impact on expert and novice performance. Trial type interacted with group, F(1, 28) = 13.12, p = 0.001, partial eta2 = 0.32, showing that while the experts were equally accurate in the YES trials (M = 99%, SE = 1%) and the NO trials (M = 99%, SE = 0.8%, p = 0.964), the novices were more accurate in the NO trials (M = 93%, SE = 0.8%) than in the YES trials (M = 85%, SE = 1%, p < 0.001). However, the two-way interaction between trial type and color was not significant, F(2, 56) = 0.28, p = 0.754. Similarly, the three-way interaction between trial type, color, and group was not significant, F(2, 56) = 1.40, p = 0.255. Thus, the color manipulations did not differentially influence YES and NO trials. 
To analyze the group by color interaction, we carried out separate ANOVAs for the novice and expert groups with color (congruent, gray scale, incongruent) as a within-subjects factor. For the novices, the main effect of color, F(2, 28) = 6.82, p = 0.004, partial eta2 = 0.33, demonstrated that color influenced the recognition of the birds. The novices were more accurate at categorizing the birds shown in congruent color (M = 92%, SE = 1%) relative to birds shown in gray scale (M = 86%, SE = 1%, p = 0.003) and incongruent color (M = 88%, SE = 2%, p = 0.031) (Table 1). For the bird experts, the main effect of color, F(2, 28) = 1.54, p = 0.231, was not significant (congruent: M = 99%, SE = 0.2%; gray scale: M = 99%, SE = 0.5%; incongruent: M = 99%, SE = 0.4%) (Table 1).
Table 1

Response time and accuracy in Experiment 1 for each group (expert, novice) and color condition (congruent, gray scale, incongruent). Notes: Values within brackets represent standard error.
Condition      Experts                       Novices
               % correct     RT (ms)         % correct     RT (ms)
Congruent      99.6 (0.2)    819 (76)        91.7 (1.3)    1060 (61)
Gray scale     98.7 (0.5)    878 (83)        86.2 (1.3)    1051 (52)
Incongruent    99.0 (0.4)    858 (79)        88.0 (1.5)    1092 (65)
Response time
The response time data for the correct trials for experts and novices were analyzed in a mixed-design ANOVA using color (congruent, gray scale, incongruent) and trial type (YES, NO) as within-subjects factors and group (novices, experts) as a between-subjects factor. The significant main effect of color, F(2, 56) = 3.85, p = 0.027, partial eta2 = 0.12, demonstrated that the color manipulations had a differential influence on the response time. The significant main effect of group, F(1, 28) = 4.81, p = 0.037, partial eta2 = 0.15, indicated that the experts were faster than the novices. The main effect of trial type was not significant, F(1, 28) = 0.30, p = 0.591. 
Color interacted with group, F(2, 28) = 3.81, p = 0.028, partial eta2 = 0.12, indicating that the color manipulations had a differential impact on expert and novice performance. Trial type did not interact with color, F(2, 56) = 1.07, p = 0.351, or with group, F(1, 28) = 3.10, p = 0.089. Similarly, the three-way interaction between trial type, color, and group was not significant, F(2, 56) = 0.35, p = 0.704. Thus, the color manipulations did not differentially influence YES and NO trials. 
To analyze the group by color interaction, we carried out separate ANOVAs for the novice and expert groups with color (congruent, gray scale, incongruent) as a within-subjects factor. For the novices, the main effect of color, F(2, 28) = 1.58, p = 0.224, was not significant (congruent: M = 1060 ms, SE = 61 ms; gray scale: M = 1051 ms, SE = 52 ms; incongruent: M = 1092 ms, SE = 65 ms) (Table 1). In contrast, for the bird experts, the main effect of color was significant, F(2, 28) = 17.59, p < 0.001, partial eta2 = 0.56, demonstrating that color influenced the recognition of the birds. The experts were faster at categorizing the birds shown in congruent color (M = 819 ms, SE = 76 ms) relative to birds shown in gray scale (M = 878 ms, SE = 83 ms, p < 0.001) and incongruent color (M = 858 ms, SE = 79 ms, p = 0.001) (Table 1). 
Inverse efficiency score
The inverse efficiency score (IES) is computed by dividing mean correct response time by proportion correct within each condition for each participant; a lower score means better performance. This measure is commonly used in situations of speed–accuracy trade-off or when some participants show an effect in accuracy and others show the effect in response time (Akhtar & Enns, 1989; Christie & Klein, 1995; Goffaux, Hault, Michel, Vuong, & Rossion, 2005; Jacques & Rossion, 2007; Kennett, Eimer, Spence, & Driver, 2001; Kuefner, Cassia, Vescovo, & Picozzi, 2010; Townsend & Ashby, 1983).
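As a minimal sketch of this computation (the trial values below are illustrative, not data from the study):

```python
import numpy as np

def inverse_efficiency(rts_ms, correct):
    """IES for one participant in one condition.

    IES = mean correct response time / proportion correct,
    so errors inflate the score; lower IES means better performance.
    Assumes at least one correct trial.
    """
    rts_ms = np.asarray(rts_ms, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return rts_ms[correct].mean() / correct.mean()

# Illustrative trials: mostly correct responses around 820 ms
print(inverse_efficiency([790, 810, 850, 900, 780], [True, True, True, False, True]))
# mean correct RT = 807.5 ms, proportion correct = 0.8 -> IES ≈ 1009 ms
```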
Collapsing over trial type, the IESs for both groups were analyzed in a mixed-design ANOVA using color (congruent, gray scale, incongruent) as a within-subjects factor and group (novices, experts) as a between-subjects factor. The main effect of group was significant, F(1, 28) = 10.85, p = 0.003, partial eta2 = 0.28. The main effect for color was also significant, F(2, 56) = 12.92, p < 0.001, partial eta2 = 0.32. However, color did not interact with group, F(2, 28) = 1.48, p = 0.236, showing that color manipulations had an equal influence on expert and novice performance (Figure 2). 
Figure 2

Experiment 1: IESs for each group (expert, novice) as a function of color condition (congruent, gray scale, incongruent). Error bars represent standard error. *p < 0.05; **p < 0.01; ***p < 0.001.
Response time distribution analysis
To examine the distribution of IESs as a function of response time, the trials of each participant (collapsed across trial type) were ranked from the fastest to the slowest, irrespective of accuracy (i.e., both correct and incorrect trials), within each color condition before being grouped into four bins containing the fastest 25% of the responses (i.e., quartile bin 1), the next 25% of responses (i.e., quartile bin 2), and so on. Within each bin, the average correct response time and the proportion correct for each condition were calculated for each participant. The IES for each participant was then computed by dividing the average correct response time by the proportion correct. For example, the IES for the congruent condition in the 25% fastest trials was based on the correct response time and proportion correct associated with the congruent condition in the 25% fastest trials. Thus, this approach allowed us to independently analyze the impact of color on performance in trials in which response time was fast (e.g., 25% fastest trials) and slow (e.g., 25% slowest trials). This procedure was done separately for the experts and the novices.
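The binning procedure can be summarized in a short sketch. This is our reconstruction of the description above, and it assumes each bin contains at least one correct trial.

```python
import numpy as np

def binned_ies(rts_ms, correct, n_bins=4):
    """Quartile-binned IES for one participant in one color condition.

    Trials are ranked fastest to slowest regardless of accuracy, split
    into n_bins equal-sized groups, and IES (mean correct RT / proportion
    correct) is computed within each bin.
    """
    order = np.argsort(rts_ms)                      # fastest trials first
    rts = np.asarray(rts_ms, dtype=float)[order]
    acc = np.asarray(correct, dtype=bool)[order]
    scores = []
    for rt_bin, acc_bin in zip(np.array_split(rts, n_bins),
                               np.array_split(acc, n_bins)):
        proportion_correct = acc_bin.mean()
        mean_correct_rt = rt_bin[acc_bin].mean()    # correct trials only
        scores.append(mean_correct_rt / proportion_correct)
    return scores                   # [bin 1, ..., bin n], fastest to slowest
```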
The data were first analyzed in a mixed-design ANOVA using color (congruent, gray scale, incongruent) and bin (1, 2, 3, 4) as within-subjects factors and group (novices, experts) as a between-subjects factor. The main effects of bin, F(3, 84) = 71.08, p < 0.001, partial eta2 = 0.72; color, F(2, 56) = 15.32, p < 0.001, partial eta2 = 0.34; and group, F(1, 28) = 13.74, p = 0.001, partial eta2 = 0.33, were significant. The two-way interactions between bin and color, F(6, 168) = 3.34, p = 0.004, partial eta2 = 0.11, and between bin and group, F(3, 28) = 20.57, p < 0.001, partial eta2 = 0.42, were significant. Crucially, the three-way interaction between bin, color, and group was significant, F(6, 28) = 2.92, p = 0.01, partial eta2 = 0.1. 
Next, the groups were independently analyzed in a repeated-measures ANOVA using color (congruent, gray scale, incongruent) and bin (1, 2, 3, 4) as within-subjects factors. For the novices, the main effects of color, F(2, 28) = 8.80, p = 0.001, partial eta2 = 0.39, and bin, F(3, 42) = 44.55, p < 0.001, partial eta2 = 0.76, were significant. The two-way interaction between color and bin, F(6, 84) = 3.20, p = 0.007, partial eta2 = 0.19, was significant. In bin 3, congruently colored images (M = 1207 ms, SE = 67 ms) were recognized better than gray scale images (M = 1320 ms, SE = 82 ms, p = 0.03) and incongruently colored images (M = 1416 ms, SE = 120 ms, p = 0.007). In bin 4, although the comparison between congruently colored images (M = 2110 ms, SE = 194 ms) and gray scale images (M = 2606 ms, SE = 323 ms) was significant (p = 0.028), the comparison between congruently colored images and incongruently colored images (M = 2372 ms, SE = 284 ms) was not (p = 0.082).
For the bird experts, the main effects of color, F(2, 28) = 19.10, p < 0.001, partial eta2 = 0.58, and bin, F(3, 42) = 66.17, p < 0.001, partial eta2 = 0.83, were significant. The experts were better at categorizing the birds shown in congruent color (M = 822 ms, SE = 77 ms) relative to birds shown in gray scale (M = 889 ms, SE = 84 ms, p < 0.001) and incongruent color (M = 866 ms, SE = 79 ms, p = 0.001) (Figure 2). The interaction between color and bin was not significant, F(6, 84) = 0.54, p = 0.777, suggesting that color affected categorization performance in all bins (i.e., fast and slow trials). This finding contrasts with the novices, for whom color affected performance predominantly for slow trials (Figure 3). 
Figure 3

Experiment 1: Distribution of IESs for the experts and novices. Bin 1 contains the 25% fastest responses of each participant. Bin 2 contains the next 25% fastest responses, and so on. Error bars represent standard error. *p < 0.05; **p < 0.01; ***p < 0.001.
To summarize, the main finding of Experiment 1 was that both bird experts and bird novices benefitted from congruently colored birds but not from incongruently colored birds. These results implicate the use of color for purposes of high-level object recognition but not for low-level feature segmentation. Although both novices and experts benefitted from congruent color, it affected their performance in different ways. Based on the IES distribution analysis, the novices applied their knowledge of color primarily in slower trials, as evidenced by the advantage for congruent color relative to the gray scale (i.e., bins 3 and 4) and incongruent (i.e., bin 3) conditions. In contrast, experts demonstrated an advantage for congruent color in the fastest quartile of trials, and the color advantage was maintained in the second, third, and fourth quartiles. Thus, whereas the experts applied their color knowledge quickly and automatically, as evidenced in the first quartile of responses, the novices applied color knowledge more slowly and deliberately, as shown in the later quartiles.
Experiment 2
In Experiment 1, bird experts and novices were asked to categorize common birds at the subordinate, taxonomic family level (e.g., cardinal). However, the true measure of bird expertise is recognition of birds at the more specific species level of categorization. Moreover, birds at the species level share object shape to a larger degree than birds at the family level, potentially increasing the role that internal details (e.g., color) play in recognition. In Experiment 2, the experts were tested for color effects at the specific species level (e.g., American tree sparrow, Nashville warbler, house finch). Similar to Experiment 1, the participants were tested in a category verification task in which they were required to make YES/NO judgments about the correspondence between a category label and a subsequently presented object image.
As in Experiment 1, if the elicited object representation contains color information, a bird with congruent color should be a better match to the representation than a bird with incongruent colors or in gray scale. In contrast, if the representation does not contain color information, performance should not differ across congruent color, incongruent color, and gray scale images. Moreover, if color removal disrupts the segregation of internal part features, gray scale objects should suffer more than congruently and incongruently colored objects. We therefore expected the experts' recognition to be impaired for birds presented in gray scale and incongruent colors relative to congruent colors. We once again applied a response time distribution analysis to investigate whether color knowledge is automatically applied in the experts' recognition of birds at the species level.
Methods
Participants
Fifteen expert bird-watchers, 23–62 years of age (M = 38.33, SD = 14.94), took part in Experiment 2 and received monetary compensation for their participation. With the exception of one bird expert, the experts who participated in Experiment 1 also participated in Experiment 2. Fourteen trials from one expert participant were lost due to technical issues (0.29% of the total number of trials).
Stimuli
The stimuli were selected from the sparrow (e.g., chipping sparrow, field sparrow, song sparrow), warbler (e.g., Wilson's warbler, Canada warbler, Nashville warbler), and finch (e.g., house finch, pine siskin, Cassin's finch) bird families. Six species from each family were selected with three exemplars of each species. Thus, a total of 54 bird images were used in Experiment 2 (three families × six species × three exemplars). The stimuli were collected from the Wahlheim et al. (2011) bird data set and supplemented by images from the Internet that were independently verified by a bird expert. 
Following the procedures used in Experiment 1, the bird images were transformed to create color-incongruent and gray scale conditions in addition to the color-congruent condition (Figure 4). Images were cropped and scaled to fit within a frame of 250 × 250 pixels and pasted on a gray background using Adobe Photoshop CS4. Images subtended a visual angle of approximately 6.81° vertically and 6.57° horizontally. 
Figure 4

Examples of the stimuli used in Experiment 2. Top row shows the congruently colored birds. Middle row shows the gray scale versions. Bottom row shows the incongruent versions.
Procedure
The experimental procedure was identical to Experiment 1. The six species of birds from the sparrow, warbler, and finch families were tested in congruent color, incongruent color, and gray scale. In Experiment 2, each experimental trial was repeated three times for a total of 324 experimental trials (three families × six species × three exemplars × two types of trial × three repetitions). The trials were divided into three blocks of 108 trials, and participants were provided with a rest break between blocks. For YES trials, the species label (e.g., “Nashville Warbler,” “Wilson's Warbler”) matched the subsequently presented picture. For the NO trials in which the species label did not match the picture, the foil picture was selected from the same family as the species label (e.g., the label “Wilson's Warbler” was followed by a picture of a Nashville warbler). 
Results
Accuracy
Trials with response times three standard deviations (SD) above the overall mean were excluded from all of the following analyses. No trials were deleted due to low accuracy for a given bird family (i.e., less than 50% accuracy). The accuracy data were analyzed in a repeated-measures ANOVA using color (congruent, gray scale, incongruent) and trial type (YES, NO) as within-subjects factors. The significant main effect of trial type, F(1, 14) = 5.47, p = 0.035, partial eta2 = 0.28, indicated that NO trials (M = 95%, SE = 1%) were more accurate than YES trials (M = 92%, SE = 2%). The main effect of color (congruent: M = 95%, SE = 1%; gray scale: M = 93%, SE = 1%; incongruent: M = 93%, SE = 2%) was not significant, F(2, 28) = 2.78, p = 0.079 (Table 2). Similarly, color did not interact with trial type, F(2, 28) = 2.56, p = 0.096.
Table 2

Response time and accuracy in Experiment 2 for each color condition (congruent, gray scale, incongruent). Notes: Values within brackets represent standard error.
Condition      % correct     RT (ms)
Congruent      95.2 (1.2)    1351 (204)
Gray scale     93.2 (1.4)    1481 (212)
Incongruent    93.4 (1.6)    1466 (204)
Response time
The response time data for the correct trials were analyzed in a repeated-measures ANOVA using color (congruent, gray scale, incongruent) and trial type (YES, NO) as within-subjects factors. The main effect of color, F(2, 28) = 15.48, p < 0.001, partial eta2 = 0.53, was significant. Responses to congruently colored images (M = 1351 ms, SE = 204 ms) were faster than responses to gray scale images (M = 1481 ms, SE = 212 ms, p < 0.001) and incongruently colored images (M = 1466 ms, SE = 204 ms, p = 0.003) (Table 2). The main effect of trial type, F(1, 14) = 4.13, p = 0.062, was not significant.
The two-way interaction between trial type and color was significant, F(2, 28) = 3.54, p = 0.043, partial eta2 = 0.20. In the YES trials, the congruently colored images (M = 1411 ms, SE = 235 ms) were identified faster than gray scale images (M = 1611 ms, SE = 257 ms, p < 0.001) but not incongruently colored images (M = 1512 ms, SE = 218 ms, p = 0.083). In the NO trials, the congruently colored images (M = 1291 ms, SE = 177 ms) were identified faster than the gray scale images (M = 1351 ms, SE = 171 ms, p = 0.041) and incongruently colored images (M = 1420 ms, SE = 196 ms, p = 0.002). 
Inverse efficiency scores
Collapsing over trial type, the IESs were analyzed in a repeated-measures ANOVA using color (congruent, gray scale, incongruent) as a within-subjects factor. The main effect of color, F(2, 28) = 10.17, p < 0.001, partial eta2 = 0.42, was significant (Figure 5). The experts were better at categorizing birds shown in congruent color (M = 1430 ms, SE = 221 ms) relative to birds shown in gray scale (M = 1611 ms, SE = 243 ms, p < 0.001) and birds shown in incongruent color (M = 1594 ms, SE = 233 ms, p = 0.007). 
Figure 5

Experiment 2: IESs for the experts as a function of color condition (congruent, gray scale, incongruent). Error bars represent standard error. *p < 0.05; **p < 0.01; ***p < 0.001.
Response time distribution analysis
Similar to Experiment 1, to examine the distribution of IESs as a function of response time, the IES data were collapsed over trial type and analyzed in a repeated-measures ANOVA using color (congruent, gray scale, incongruent) and bin (1, 2, 3, 4) as within-subjects factors. The main effects of color, F(2, 28) = 4.97, p = 0.014, partial eta2 = 0.26, and bin, F(3, 42) = 22.96, p < 0.001, partial eta2 = 0.62, were significant, as was their interaction, F(6, 84) = 2.37, p = 0.036, partial eta2 = 0.15. In bin 1, the congruent condition (M = 744 ms, SE = 80 ms) differed from the gray scale condition (M = 803 ms, SE = 93 ms, p = 0.007) and the incongruent condition (M = 791 ms, SE = 87 ms, p = 0.001). In bin 2, the congruent condition (M = 1008 ms, SE = 139 ms) differed from the gray scale condition (M = 1095 ms, SE = 148 ms, p < 0.001) and the incongruent condition (M = 1108 ms, SE = 148 ms, p < 0.001). In bin 3, the congruent condition (M = 1436 ms, SE = 257 ms) differed from the gray scale condition (M = 1757 ms, SE = 319 ms, p = 0.002), whereas its difference from the incongruent condition (M = 1834 ms, SE = 376 ms) approached significance (p = 0.055). In bin 4, the congruent condition (M = 3065 ms, SE = 616 ms) differed from the gray scale condition (M = 3976 ms, SE = 928 ms, p = 0.027) and the incongruent condition (M = 4023 ms, SE = 886 ms, p = 0.027) (Figure 6). No other comparisons were significant.
Figure 6

Experiment 2: Distribution of IESs as a function of response time for the experts. Bin 1 contains the 25% fastest responses of each participant. Bin 2 contains the next 25% fastest responses, and so on. Error bars represent standard error. *p < 0.05; **p < 0.01; ***p < 0.001.
The main finding of Experiment 2 was that color influenced the performance of bird experts when they recognized birds at the species-specific level. A color effect was found in the fastest trials, in which recognition of congruently colored birds was better than recognition of their gray scale or incongruently colored versions, and the effect persisted through the slower trials. Thus, as with the family-level categorizations of Experiment 1, the experts utilized the color information of birds at the species-specific level irrespective of whether they were quick or slow at responding, suggesting that they automatically incorporate the color information of the birds in their perceptual analysis.
General discussion
The aim of the current study was to test the interactions of perceptual experience and color knowledge in object recognition. In Experiment 1, expert bird-watchers and bird novices performed subordinate family-level categorizations of congruent color, incongruent color, and gray scale images of common birds (e.g., cardinal). Consistent with previous work (J. W. Tanaka & Taylor, 1991), the bird experts were better at categorizing birds at the family level than the bird novices. However, the experts performed at ceiling in all color conditions (i.e., congruent color, gray scale, incongruent color), making it difficult to compare expert and novice performance based on accuracy and response time. 
To compare novice and expert performance, we computed IESs, which combine response time and accuracy (for other studies using IES, see Akhtar & Enns, 1989; Christie & Klein, 1995; Goffaux et al., 2005; Jacques & Rossion, 2007; Kennett et al., 2001; Kuefner et al., 2010; Townsend & Ashby, 1983). In Experiment 1, group analysis of the IESs of experts and novices showed that recognition in both groups was affected by color. Analysis of the distribution of the IESs, in which trials were ranked from fastest to slowest, showed that the experts recognized congruently colored birds better than gray scale and incongruently colored birds in the fastest trials (i.e., bin 1), whereas novices recognized congruently colored birds better than gray scale (i.e., bins 3 and 4) and incongruently colored (i.e., bin 3) birds in the slower trials. Thus, color had an immediate effect on expert recognition but a slower effect on novice recognition. The color advantage cannot be attributed to low-level segmentation of internal features because incongruent color images with good segmentation properties were recognized as fast as gray scale images that offered no color segmentation. Collectively, the findings from Experiment 1 suggest that color information contributes to both novice and expert recognition, albeit in different ways. Whereas the color knowledge of the experts has an immediate impact on their fastest recognitions, color knowledge for the novices plays a larger role in their later responses.
In Experiment 2, the experts performed subordinate species-level categorizations (e.g., Nashville warbler) of congruent color, incongruent color, and gray scale images of warblers, sparrows, and finches. Here, color was found to play a prominent role in expert recognition. Although recognition accuracy was equivalent in the congruent, incongruent, and gray scale conditions, the experts were faster at recognizing congruently colored birds relative to their incongruently colored and gray scale versions. Similarly, in terms of IES, the performance of the experts was better with congruently colored images relative to incongruently colored and gray scale images. The distribution of IESs showed that the color effects were present in both fast and slow trials. The main finding of Experiment 2, then, was that congruent color improved performance when birds were categorized at the specific species level.
The role of multicoded object representations in expert object recognition
Results from these experiments indicate that color facilitates recognition of objects in a specific category domain. Further, domain-specific experience can modulate the temporal dynamics of the influence that color has on recognition. To account for the difference in the time with which color influenced expert and novice recognition, we propose that domain-specific expertise with birds modulates the degree to which color representations are utilized in early recognition. 
In the fastest trials, the performance of the novices was unaffected when they matched a percept of a color-congruent, color-incongruent, or gray scale bird to its stored representation. In contrast, in slower trials, their performance declined in the incongruent and gray scale conditions in bin 3 and in the gray scale condition in bin 4. For experts, on the other hand, performance was enhanced in the fastest trials when matching a percept with congruent color to its stored representation, and congruent color also facilitated performance in the slower trials. Thus, whereas the novices needed additional time to utilize color, the experts had immediate access to the color information, suggesting that their color representations are tightly coupled with their shape representations.
Although much research has focused on the operational definition of perceptual object expertise as the fast and accurate recognition of domain-specific objects at the subordinate level of abstraction (e.g., Gauthier & Tarr, 1997; Johnson & Mervis, 1997; J. W. Tanaka & Curran, 2001; J. W. Tanaka & Taylor, 1991), little attention has been devoted to examining the underlying representations that mediate expert behavior. This study takes a step toward mapping out the diagnostic features stored in the object memories that support expert behavior. Our results demonstrate that expert behavior is supported in part by perceptual analyses, or routines, that readily extract color from the object. This suggests that extensive experience encoding and retrieving object memories results in object representations that, to a larger degree, incorporate color information.
A defining quality of expert behavior is that it is guided by fast and effortless implicit procedures rather than slow and effortful explicit procedures (Johansen & Palmeri, 2002). Our findings suggest that the analysis of color information has become more of an implicit procedure for the expert. Even though structural information was sufficient for accurate recognition and the experts were instructed to disregard color and focus on shape information, color nevertheless contributed to the recognition advantage. Thus, experts found it harder to inhibit color information because their recognition strategy makes color encoding an implicit and automatized process. This interpretation is supported by the analysis of the distribution of IESs in Experiments 1 and 2, in which color had an immediate effect (i.e., quartile bin 1). However, one might have expected the incongruent color to produce an interference effect relative to the gray scale condition, not merely a disadvantage relative to the congruent color condition (e.g., Stroop, 1935). The absence of this effect could have two explanations. First, it seems likely that the gray scale condition is not a truly neutral condition but instead represents a form of incongruent color transformation, in which case an interference effect cannot be measured. Second, it is possible that an interference effect was attenuated by color accentuating internal object features. In any case, we suggest that expert behavior is partly supported by a perceptual strategy by which color information is automatically accessed, which, in turn, facilitates the recognition of color-congruent birds.
Our study suggests that the content of robust object memories depends on experience. However, the content of an object memory is also a function of the physical properties that provide diagnostic cues for recognizing the members of the object domain. For instance, it is well documented that most people are face experts and that face expertise is supported by a holistic processing strategy (e.g., J. W. Tanaka & Farah, 1993). Given that faces differ in their facial features (e.g., eyes, nose, mouth) and in the distances among them, an efficient way to encode and retrieve faces is to compute these differences holistically. Similarly, color is an important diagnostic cue for subordinate-level bird identification, and it is therefore efficient for the expert to incorporate color in their mental representations. Thus, it seems logical that the mechanisms responsible for forming robust object memories code for information that is beneficial for discriminating objects in the domain of expertise.
In summary, our results support the idea that extensive experience in an object domain can influence the way in which we encode and retrieve objects. Experiments 1 and 2 showed that, as a result of extensive experience with birds, color information became a salient feature that was actively and quickly employed during the recognition process. The extent to which object representations incorporate color is constrained by the physical properties of the object category. However, the content of object representations also depends on the keen abilities of the expert, who identifies the relevant, diagnostic cues that distinguish within-category objects. Thus, the processes of perceptual expertise in high-level vision naturally depend on the interaction between the environment and the individual acting upon it.
Acknowledgments
This work was supported by ARI grant W5J9CQ-11-C-0047. The views, opinions, and/or findings contained in this paper are those of the authors and should not be construed as an official Department of the Army position, policy, or decision.
Commercial relationships: none. 
Corresponding Author: Simen Hagen. 
Email: shagen@uvic.ca. 
Address: Department of Psychology, University of Victoria, Victoria, Canada. 
References
Akhtar, N., & Enns, J. T. (1989). Relations between covert orienting and filtering in the development of visual attention. Journal of Experimental Child Psychology, 48(2), 315–334.
Battig, W. F., & Montague, W. E. (1969). Category norms of verbal items in 56 categories: A replication and extension of the Connecticut category norms. Journal of Experimental Psychology, 80(3, Pt. 2), 1.
Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2), 115.
Biederman, I., & Ju, G. (1988). Surface versus edge-based determinants of visual recognition. Cognitive Psychology, 20(1), 38–64.
Bramão, I., Faísca, L., Forkstam, C., Reis, A., & Petersson, K. M. (2010). Cortical brain regions associated with color processing: An fMRI study. The Open Neuroimaging Journal, 4, 164.
Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2011). The role of color information on object recognition: A review and meta-analysis. Acta Psychologica, 138(1), 244–253.
Cavanagh, P. (1987). Reconstructing the third dimension: Interactions between color, texture, motion, binocular disparity, and shape. Computer Vision, Graphics, and Image Processing, 37(2), 171–195.
Christie, J., & Klein, R. (1995). Familiarity and attention: Does what we know affect what we notice? Memory & Cognition, 23(5), 547–550.
Davidoff, J. B., & Ostergaard, A. L. (1988). The role of colour in categorial judgements. The Quarterly Journal of Experimental Psychology, 40(3), 533–544.
Gauthier, I., & Tarr, M. J. (1997). Becoming a “Greeble” expert: Exploring mechanisms for face recognition. Vision Research, 37(12), 1673–1682.
Gegenfurtner, K. R., & Rieger, J. (2000). Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10(13), 805–808.
Goffaux, V., Hault, B., Michel, C., Vuong, Q. C., & Rossion, B. (2005). The respective role of low and high spatial frequencies in supporting configural and featural processing of faces. Perception, 34(1), 77–86.
Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology, 148(3), 574–591.
Hubel, D. H., & Wiesel, T. N. (1977). Ferrier lecture: Functional architecture of macaque monkey visual cortex. Proceedings of the Royal Society of London. Series B, Biological Sciences, 198, 1–59.
Jacques, C., & Rossion, B. (2007). Early electrophysiological responses to multiple face orientations correlate with individual discrimination performance in humans. NeuroImage, 36(3), 863–876.
Johansen, M. K., & Palmeri, T. J. (2002). Are there representational shifts during category learning? Cognitive Psychology, 45(4), 482–553.
Johnson, K., & Mervis, C. (1997). Effects of varying levels of expertise on the basic level of categorization. Journal of Experimental Psychology: General, 126, 248–277.
Jolicoeur, P., Gluck, M. A., & Kosslyn, S. M. (1984). Pictures and names: Making the connection. Cognitive Psychology, 16, 243–275.
Joseph, J. E. (1997). Color processing in object verification. Acta Psychologica, 97(1), 95–127.
Joseph, J. E., & Proffitt, D. R. (1996). Semantic versus perceptual influences of color in object recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(2), 407.
Kennett, S., Eimer, M., Spence, C., & Driver, J. (2001). Tactile-visual links in exogenous spatial attention under different postures: Convergent evidence from psychophysics and ERPs. Journal of Cognitive Neuroscience, 13(4), 462–478.
Kuefner, D., Cassia, V. M., Vescovo, E., & Picozzi, M. (2010). Natural experience acquired in adulthood enhances holistic processing of other-age faces. Visual Cognition, 18(1), 11–25.
Lewis, D. E., Pearson, J., & Khuu, S. K. (2013). The color “fruit”: Object memories defined by color. PLoS ONE, 8(5), e64960.
Livingstone, M., & Hubel, D. (1988). Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240(4853), 740–749.
Livingstone, M. S., & Hubel, D. H. (1987). Psychophysical evidence for separate channels for the perception of form, color, movement, and depth. Journal of Neuroscience, 7, 3416–3468.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. New York: Henry Holt and Co.
Nagai, J. I., & Yokosawa, K. (2003). What regulates the surface color effect in object recognition: Color diagnosticity or category. Technical Report on Attention and Cognition, 28, 1–4.
Naor-Raz, G., Tarr, M. J., & Kersten, D. (2003). Is color an intrinsic property of object representation? Perception, 32(6), 667–680.
Oliva, A., & Schyns, P. G. (2000). Diagnostic colors mediate scene recognition. Cognitive Psychology, 41, 176–210.
Ostergaard, A. L., & Davidoff, J. B. (1985). Some effects of color on naming and recognition of objects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11(3), 579.
Price, C. J., & Humphreys, G. W. (1989). The effects of surface detail on object categorization and naming. The Quarterly Journal of Experimental Psychology, 41(4), 797–828.
Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439.
Rossion, B., & Pourtois, G. (2004). Revisiting Snodgrass and Vanderwart's object pictorial set: The role of surface detail in basic-level object recognition. Perception, 33(2), 217–236.
Schiller, P. H., Finlay, B. L., & Volman, S. F. (1976). Quantitative studies of single-cell properties in monkey striate cortex. I. Spatiotemporal organization of receptive fields. Journal of Neurophysiology, 39(6), 1288–1319.
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643.
Tanaka, J., Weiskopf, D., & Williams, P. (2001). The role of color in high-level vision. Trends in Cognitive Sciences, 5(5), 211–215.
Tanaka, J. W., & Curran, T. (2001). A neural basis for expert object recognition. Psychological Science, 12(1), 43–47.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46(2), 225–245.
Tanaka, J. W., & Presnell, L. M. (1999). Color diagnosticity in object recognition. Perception & Psychophysics, 61(6), 1140–1153.
Tanaka, J. W., & Taylor, M. (1991). Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23, 457–482.
Townsend, J. T., & Ashby, F. G. (1983). Stochastic modeling of elementary psychological processes. Cambridge, UK: Cambridge University Press.
Wahlheim, C. N., Teune, R. K., & Jacoby, L. L. (2011). Birds as natural concepts: A set of pictures from the Passeriformes order. Retrieved from http://psych.wustl.edu/amcclab/AMCC%20Materials.htm
Figure 1
Examples of the stimuli used in Experiment 1. Top row shows the congruently colored birds. Middle row shows the gray scale versions. Bottom row shows the incongruent versions.
Figure 2
Experiment 1: Inverse efficiency scores (IESs) for each group (expert, novice) as a function of color condition (congruent, gray scale, incongruent). Error bars represent standard error. *p < 0.05; **p < 0.01; ***p < 0.001.
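For concreteness, the IES combines speed and accuracy into a single latency-like measure. The exact computation is specified in the Methods (not reproduced here); the sketch below assumes the conventional definition of Townsend and Ashby (1983), in which mean correct response time is divided by the proportion of correct responses, so that lower scores indicate more efficient performance.

```python
# Minimal sketch of an inverse efficiency score (IES), assuming the
# conventional definition (Townsend & Ashby, 1983): mean correct RT
# divided by proportion correct. Lower IES = more efficient performance.

def inverse_efficiency(rts_ms, correct):
    """rts_ms: response times in ms; correct: parallel list of booleans."""
    accuracy = sum(correct) / len(correct)
    mean_correct_rt = sum(rt for rt, ok in zip(rts_ms, correct) if ok) / sum(correct)
    return mean_correct_rt / accuracy

# Example: a mean correct RT of 850 ms at 75% accuracy gives 850 / 0.75 ≈ 1133 ms.
print(inverse_efficiency([800, 850, 900, 950], [True, True, True, False]))
```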
Figure 3
Experiment 1: Distribution of IESs for the experts and novices. Bin 1 contains each participant's 25% fastest responses, Bin 2 the next 25%, and so on. Error bars represent standard error. *p < 0.05; **p < 0.01; ***p < 0.001.
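The binning procedure in this caption can be made explicit. The sketch below is one plausible implementation, assuming each participant's trials in a condition are rank-ordered by response time and split into four equal bins before per-bin scores are computed; how ties and uneven trial counts are handled is an assumption here, not something the caption specifies.

```python
# Minimal sketch of the quartile binning described in the caption:
# sort one participant's trials by RT, then split into four equal bins
# (Bin 1 = fastest 25%, ..., Bin 4 = slowest 25%).

def quartile_bins(trials):
    """trials: list of (rt_ms, correct) tuples for one participant/condition."""
    ordered = sorted(trials, key=lambda t: t[0])   # fastest responses first
    n = len(ordered)
    cuts = [round(i * n / 4) for i in range(5)]    # boundaries: 0, n/4, n/2, 3n/4, n
    return [ordered[cuts[i]:cuts[i + 1]] for i in range(4)]
```

An IES can then be computed within each bin with the function sketched under Figure 2.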
Figure 4
Examples of the stimuli used in Experiment 2. Top row shows the congruently colored birds. Middle row shows the gray scale versions. Bottom row shows the incongruent versions.
Figure 5
Experiment 2: IESs for the experts as a function of color condition (congruent, gray scale, incongruent). Error bars represent standard error. *p < 0.05; **p < 0.01; ***p < 0.001.
Figure 6
Experiment 2: Distribution of IESs as a function of response time for the experts. Bin 1 contains each participant's 25% fastest responses, Bin 2 the next 25%, and so on. Error bars represent standard error. *p < 0.05; **p < 0.01; ***p < 0.001.
Table 1
Response time and accuracy in Experiment 1 for each group (expert, novice) and color condition (congruent, gray scale, incongruent). Notes: Values within parentheses represent standard error.

               Experts                                  Novices
Condition      Percentage correct  Response time (ms)   Percentage correct  Response time (ms)
Congruent      99.6 (0.2)          819 (76)             91.7 (1.3)          1060 (61)
Gray scale     98.7 (0.5)          878 (83)             86.2 (1.3)          1051 (52)
Incongruent    99.0 (0.4)          858 (79)             88.0 (1.5)          1092 (65)
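As a rough cross-check against Figure 2, the group means in Table 1 can be folded into IESs (response time divided by proportion correct). The values below are illustrative only: the published analysis presumably computes IESs per participant before averaging, which these cell-mean approximations need not match exactly.

```python
# Illustrative IESs from the Table 1 group means (RT / proportion correct).
# These approximate, but need not equal, averaged per-participant IESs.
table1 = {
    "experts": {"congruent": (819, 0.996), "gray scale": (878, 0.987),
                "incongruent": (858, 0.990)},
    "novices": {"congruent": (1060, 0.917), "gray scale": (1051, 0.862),
                "incongruent": (1092, 0.880)},
}
for group, conds in table1.items():
    for cond, (rt, acc) in conds.items():
        print(f"{group:9s}{cond:13s}IES = {rt / acc:5.0f} ms")
# experts: ~822 (congruent), ~890 (gray scale), ~867 (incongruent)
# novices: ~1156 (congruent), ~1219 (gray scale), ~1241 (incongruent)
```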
Table 2
Response time and accuracy in Experiment 2 (experts only) for each color condition (congruent, gray scale, incongruent). Notes: Values within parentheses represent standard error.

Condition      Percentage correct  Response time (ms)
Congruent      95.2 (1.2)          1351 (204)
Gray scale     93.2 (1.4)          1481 (212)
Incongruent    93.4 (1.6)          1466 (204)