Journal of Vision, December 2008, Volume 8, Issue 16
Research Article
Color aids late but not early stages of rapid natural scene recognition
Journal of Vision December 2008, Vol.8, 12. doi:https://doi.org/10.1167/8.16.12
Citation: Angela Y. J. Yao, Wolfgang Einhäuser; Color aids late but not early stages of rapid natural scene recognition. Journal of Vision 2008;8(16):12. https://doi.org/10.1167/8.16.12.
Abstract

Color has an unresolved role in natural scene recognition. Whereas rapid serial visual presentation paradigms typically find no advantage for colored over grayscale scenes, color seems to play a decisive role for recognition memory. The distinction between detection and memorization has not been addressed directly in one paradigm. Here we asked ten observers to detect animals in 2-s 20 Hz sequences. Each sequence consisted of two 1-s segments, one of grayscale images and one of colored; each segment contained one or no target, totaling zero, one, or two targets per sequence. In one-target sequences, hit rates were virtually the same for targets appearing in the first or second segment, as well as for grayscale and colored targets, though observers were more confident about detecting colored targets. In two-target sequences, observers preferentially reported the second of two identical targets, in comparison to categorically related (same-species animals) or unrelated (different-species animals) targets. Observers also showed a strong preference for reporting colored targets, though only when targets were of different species. Our findings suggest that color has little effect on detection, but is used in later stages of processing. We may speculate that color ensures preferential access to or retrieval from memory when distinct items must be rapidly remembered.

Introduction
Color vision provides humans with unquestionably rich sensory access to the surrounding environment, though why it has evolved to its present state and what exact purpose it serves remain a matter of debate. Evolutionarily, mammalian vision has simplified to dichromacy or monochromacy; primates are an exception, having re-evolved trichromacy (Bowmaker & Hunt, 2006). Trichromatic vision is believed to assist primates with foraging tasks such as detecting ripe fruit and distinguishing young leaves against a background of mature leaves (Lucas et al., 2003). Recently, this hypothesis has been challenged by the idea that primate trichromacy was evolutionarily selected to discriminate emotional states and socio-sexual signals from skin hues, as the majority of trichromatic primates are bare-faced (Changizi, Zhang, & Shimojo, 2006). 
In parallel with the debate on the functional purpose of color vision is the question of color's role in natural scene processing. Two distinct types of tasks are typically used to study natural scene processing: on the one hand, rapid-presentation paradigms with target detection and scene categorization tasks, and on the other hand, recognition memory paradigms with delayed-match-to-sample tasks. Rapid presentation paradigms testing color effects have yielded conflicting results that are highly task-dependent. Two studies to date have directly addressed the role of color for target detection in natural scenes (Delorme, Richard, & Fabre-Thorpe, 2000; Fei-Fei, VanRullen, Koch, & Perona, 2005) and both found null effects. Fei-Fei et al. used a go/no-go detection task with a speeded response to detect animal targets and found that subjects performed equally well with grayscale and colored stimuli in briefly presented scenes (27 ms) placed peripherally on a screen. Delorme et al. had both humans and rhesus monkeys perform a go/no-go detection task requiring a speeded response to detect either food or animal targets in briefly flashed images (32 ms). They found that both accuracy and response times were virtually the same for grayscale and colored animal targets and only slightly reduced for grayscale stimuli of food targets. Delorme et al. postulated that, at least in “ultra-rapid” visual categorization tasks (characterized by response times below 360 ms in humans), color cues have a very minor role. Processing surface features such as color takes longer, which may be why both humans and monkeys were slower at detecting food targets, for which color is a more relevant and diagnostic feature. Indeed, color seems to have a more decisive role in scene categorization tasks (e.g. beach, mountain, forest) (Goffaux et al., 2005; Oliva & Schyns, 2000; Vogel, Schwaninger, Wallraven, & Bülthoff, 2007). 
Reaction times in these tasks were slower (>400 ms) than the “ultra-rapid” category and observers were typically faster and more accurate when color diagnostic scenes were presented in original coloring rather than in grayscale or in abnormal coloring, though exceptions have also been noted (Vogel et al., 2007). 
Recognition memory paradigms have also been used to study the role of color in natural scene processing. Typical recognition memory paradigms consist of some form of a delayed-matching-to-sample task. Images are presented either separately as sample image and test image sets, or mixed into a continuous image stream. Observers have been found to be 5–10% more accurate on color-sample/color-test combinations than on all other sample/test combinations (Gegenfurtner & Rieger, 2000; Spence, Wong, Rusan, & Rastegar, 2006; Wichmann, Sharpe, & Gegenfurtner, 2002), except in one study which found better accuracy for the grayscale-sample/grayscale-test combination, owing to a lower false-positive rate (Nijboer, Kanai, de Haan, & van der Smagt, 2008). Color has been proposed to facilitate recognition memory on a sensory level, at encoding, by improving edge detection and segmentation, as well as on a cognitive level, by being bound as part of the memory representation (Gegenfurtner & Rieger, 2000; Wichmann et al., 2002). Spence et al. (2006) proposed that of the two types of facilitation, cognitive effects, specifically encoding specificity effects, are dominant. Encoding specificity refers to a match in stimulus features at encoding (sample) and at retrieval (test). In Spence's study, observers performed best on color-sample/color-test combinations, but still significantly better on grayscale-sample/grayscale-test combinations than on mixed combinations. Though Nijboer et al.'s results contradict the other studies, one interesting finding was that any advantages (or disadvantages, according to their study) conferred through color disappeared once the scenes were changed to scenes without a readily nameable gist. 
Two separate accounts have been proposed to resolve color's role in object recognition. There are the “edge-based” theories, in which recognition is believed to be mediated—at least initially—solely on the basis of shape or contour information (Biederman, 1987; Biederman & Ju, 1988; Ullman, 1984). Surface features, such as color and texture, are secondary and have no effect. Hence, edge-based theories predict that objects presented in color and grayscale should be recognized with equal ease and speed. In direct competition with edge-based theories are the “surface-based” or “surface-plus-edge-based” theories, which maintain that surface features are just as critical in object representation as shape and contours and that colored objects should be recognized more easily and more quickly (Bruner, 1957; Price & Humphreys, 1989; Tanaka, Weiskopf, & Williams, 2001). 
The difficulty in assessing these two competing groups of theories arises from a number of uncontrolled factors, which, once taken into account, may make the two not entirely exclusive of each other. First, different tasks, such as verification, classification, and naming, may probe various stages of object recognition to different extents, explaining why color has had a significant effect in some studies but not in others. In particular, object naming is believed to rely more heavily on surface information than verification (Brodie, Wallace, & Sharrat, 1991) or classification (Joseph & Proffitt, 1996; Tanaka & Presnell, 1999), and in object naming tasks, observers performed better with color stimuli (Davidoff & Ostergaard, 1988). Second, control over stimulus content, often left to the discretion of the experimenters who select the image set, appears to strongly influence experimental results. Even proponents of the edge-based theories acknowledge that color cues improve recognition in situations where object representation in the stimuli is degraded in some manner. For example, color information adds a significant benefit in cases where observers have low vision (Wurm, Legge, Isenberg, & Luebker, 1993), suffer from pathological conditions such as visual agnosia (Mapelli & Behrmann, 1997), or where object shape is degraded by occlusion (Tanaka & Presnell, 1999). Color has also been found to be useful in classification tasks where objects bear similar shapes, for example, when distinguishing specific bird species (Price & Humphreys, 1989) or types of fruit (Tanaka & Presnell, 1999). Finally, color diagnosticity seems to determine the extent to which color influences object recognition. Color diagnosticity is defined as the degree to which color is integrated with an object's identity. 
For example, carrots are high in color diagnosticity and are invariably orange, while cars are low in color diagnosticity and can be found in a variety of different colors. Objects bearing high color diagnosticity are identified faster and more accurately when presented in color, while objects bearing low color diagnosticity show either weaker (Rossion & Pourtois, 2004) or no (Tanaka & Presnell, 1999) color effects. Closely related to color diagnosticity is object origin as either naturally occurring or man-made. Natural objects typically have higher color consistency and therefore stronger associations with color, while man-made objects have more randomized coloring and are less influenced by color in object recognition (Humphrey, Goodale, Jakobson, & Servos, 1994; Price & Humphreys, 1989). It should be noted that in these object recognition studies, a variety of stimuli are used, such as photographs, illustrations, and line drawings, but in all cases, objects are cut away from the background and displayed in isolation. Natural scenes, on the other hand, are more complex and cluttered; in addition to foreground objects, there may also be a variety of background structures, surfaces, and textures. 
Given the myriad of contradictory results, the more pertinent question to ask may be under which circumstances and to what extent color influences object recognition. Part of the confusion stems from the vagueness of the term “recognition.” Recognition has been used to describe a variety of experimental tasks such as detection, categorization, and identification, although each of these tasks most likely probes only a small part of the sequence from feature detection and binding, through consolidation into memory, to conscious retrieval from memory for response. Our study addresses the role of color during detection as compared to later stages of recognition by using a two-target rapid serial visual presentation (RSVP) paradigm. “Detection” here refers to feature detection and binding, while later stages include consolidation to and retrieval from memory for response. Observers were asked to detect and then report the presence of animal images in grayscale and color stimuli. Detection is assessed through target report percentages, while later processing is probed by having subjects rate the confidence of their responses and by varying the relationship between target pairs within each RSVP trial. Target pairs were identical (same target image), related (same animal species or category, different target image), or unrelated (different animal species). To our knowledge, this is the first study that addresses the role of color explicitly for the various phases of recognition within a single experimental framework. 
Methods
Subjects
Ten student volunteers from ETH (six male, ages 22–29, mean age 25), all naïve to the purposes of the study, were recruited. All volunteers had normal or corrected-to-normal vision and provided informed consent for participation. 
Stimuli and setup
The images used as stimuli in this study came from a data set of natural-scene colored photographs used in prior RSVP studies (Evans & Treisman, 2005; Li, VanRullen, Koch, & Perona, 2002; downloadable from http://visionlab.ece.uiuc.edu/datasets.html). The animal category contained 1323 images of one or several mammals, birds, fish, insects, or reptiles. In the images, the animals are found at different scales, viewing angles, and positions. The non-animal distractor category contained 1123 images of natural landscapes and urban cityscapes, houses and buildings, fruits and plants, and other artificial objects. Grayscale images were generated from the colored images using the rgb2gray function in the Image Processing toolbox of Matlab (Mathworks, Natick, MA). 
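The grayscale conversion can be reproduced outside of Matlab: rgb2gray computes a weighted sum of the R, G, and B channels using the ITU-R BT.601 luma coefficients. A minimal pure-Python sketch of this weighting (the function name is ours):

```python
# ITU-R BT.601 luma weights, the coefficients used by Matlab's rgb2gray
BT601 = (0.2989, 0.5870, 0.1140)

def rgb_to_gray(pixel):
    """Map one RGB triple (values in [0, 1]) to its grayscale luma
    value; rgb2gray applies this same weighting to every pixel."""
    r, g, b = pixel
    wr, wg, wb = BT601
    return wr * r + wg * g + wb * b

# A pure red pixel keeps only 29.89% of its intensity in grayscale.
gray = rgb_to_gray((1.0, 0.0, 0.0))
```

Because green is weighted most heavily, isoluminant color boundaries that differ mainly in the red-blue direction lose most of their contrast in the grayscale versions.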
Stimuli were presented on a 19-inch CRT monitor (CTX, Hsin-tien, Taiwan) with 1024 × 768 resolution and 120 Hz refresh rate. The experiment was programmed using Matlab and the Psychophysics Toolbox Version 3 (Brainard, 1997; Pelli, 1997; downloadable from http://psychtoolbox.org/). All images (384 × 256 pixels, visual angle of 12 × 8 degrees at a viewing distance of 50 cm) were presented in the middle of a black screen. 
Observers were seated in a semi-darkened room with some residual lighting to illuminate the keyboard for keying responses. To avoid confusion from distractor images containing humans, subjects were instructed explicitly that humans did not count as animals. 
Experimental paradigm
For each observer, 620 image sequences of 40 frames were displayed at 20 Hz. Each sequence was preceded by a gray lead-in image, the same size as the stimuli, displayed for 500 ms and then succeeded by the same gray image for 100 ms after trial end. The 20 Hz presentation rate was chosen to have a good balance of hit and miss trials for analysis given the same stimuli set (Einhäuser, Mundhenk, Baldi, Koch, & Itti, 2007). Each sequence was divided into two halves, one consisting of 20 grayscale images and the other of 20 colored images. Each half could contain one target image, totaling zero, one, or two target images per sequence. There were 160 zero-target sequences, 160 one-target sequences, and 300 two-target sequences; for each type of sequence, half were chosen randomly to have grayscale images in the first 20 frames and color images in the second 20 frames and vice versa for the other half. Of the two-target sequences, there were 100 sequences each with identical, related, or unrelated target pairs. Identical target pairings were made by repeating the same target image. Related target pairs were formed by randomly drawing two targets from a single animal species (entries in Table 1). Unrelated target pairs were formed by randomly drawing two targets from different species, excluding same or similar species, i.e. sharing the same “prefix” in Table 1. The constraints for related and unrelated were selected manually prior to the experiment to ensure a clear distinction between the related and unrelated animal categories. 
Table 1
 
Relatedness levels of animal targets and occurrence frequencies enclosed in parentheses. “Related” target pairs were formed by randomly drawing two targets from a single entry. “Unrelated” target pairs were formed by randomly drawing two targets from two different entries, each of which could not share a common prefix (e.g. dog—domestic could not be paired with dog—wolf). Note that species were categorized prior to the experiment according to visual similarity, and do not necessarily reflect biological relation.
Animal categories
dog—domestic (24), dog—fox (25), dog—wolf (38)
deer—ram (18), deer—caribou (23), deer—gazelle (17), deer—giraffe (3), deer—deer (13)
bird—eagle (27), bird—owl (15), bird—flying (19), bird—nest (11), bird—water (14), bird—other (56), bird—turkey (5)
bear—brown (11), bear—polar (20)
marine—seal (5), marine—walrus (12), marine—penguin (17), marine—whale (9), marine—shark (7)
bug—caterpillar (3), bug—butterfly (17), bug—other (2)
cat—domestic (9), cat—leopard (24), cat—lion (10), cat—lynx (33), cat—panther (4), cat—tiger (30)
turtle (9), rabbit (6), hippo (8), horse (27), panda (4), raccoon (12), rhino (21), bison (12), kangaroo (4), koala (11), elephant (29), fish (10), frog (3), snake (18)
In one-target sequences, the target image was uniformly distributed between frames 6–15 and 26–35 such that half of the sequences had grayscale targets and half had color targets. In two-target sequences, the first target (T1) was placed at random between frames 6–15 and the second target (T2) between frames 26–35, resulting in one grayscale target and one colored target in each two-target sequence ( Figure 1). To prevent attentional blink effects, T2's position in the sequence was chosen to maintain a separation of at least 1 s following T1. No target images were used in more than one trial and no distractor images were used more than once within each trial. 
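The placement constraints can be sketched as follows; the function name and structure are illustrative and not taken from the authors' code, which was written in Matlab:

```python
import random

def draw_target_frames():
    """Draw target frame positions for one two-target, 40-frame,
    20 Hz sequence: T1 in frames 6-15, T2 in frames 26-35, with T2
    redrawn until the targets are at least 20 frames (1 s) apart."""
    t1 = random.randint(6, 15)
    t2 = random.randint(26, 35)
    while t2 - t1 < 20:  # enforce the minimum 1-s separation
        t2 = random.randint(26, 35)
    return t1, t2
```

At 20 Hz, 20 frames correspond to 1 s, so the rejection loop implements the anti-attentional-blink constraint stated above.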
Figure 1
 
Paradigm. Schematic depiction of the animal-recognition RSVP task. Forty images are presented for 50 ms each; the first half (20 images) are presented in grayscale and the second half in color or vice versa. Each half can contain 0 or 1 target (animal) frame, yielding 0, 1, or 2 targets per trial. Possible target frames range from 6 to 15 (for T1) and from 26 to 35 (for T2), ensuring at least 1 s between the targets in two-target trials, and at least 250 ms of frames in the same color before and after each target. In the example shown, target relatedness between T1 and T2 is “identical.”
Observers began each trial with a button press and were free to take breaks as needed. At the end of each trial, the observers were first asked about the number of target images detected (“zero”, “one”, or “two”) and then about the color (“color” or “grayscale”) of the target image if one target was detected. Finally, observers were asked to rank the confidence of their response for each target detected or not detected within a trial. Confidence was ranked on a scale from 1 to 3, 1 being “guess”, 2 being “somewhat sure”, and 3 being “absolutely sure”. To facilitate inter-observer comparison, the same 620 sequences were shown to all observers, though in a different random ordering. To prevent boredom and fatigue effects, the experiment was conducted in two sessions on different days. After both sessions were completed, observers gave a verbal debrief of their impressions and experiences during the experiment. 
Signal detection theory analysis
To quantify target detection performance from zero- and one-target trials, we used the d′ sensitivity measure from signal-detection theory. d′ is defined as the difference between the z-scores of the hit rate (a target is correctly detected) and the false-alarm rate (a target is reported, though absent); the higher d′ is, the more readily detectable a target is. Instances of zero false-alarm trials (where the z-score would diverge to −∞) were corrected for by adjusting the count to 0.5 (Stanislaw & Todorov, 1999). 
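This computation can be sketched in a few lines of Python; interpreting the correction as replacing any zero count with 0.5 before computing the rates is our reading of the Stanislaw & Todorov recommendation:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from raw trial counts: z(hit rate) - z(false-alarm rate).
    Zero counts are replaced with 0.5 so that neither z-score
    diverges to +/- infinity (our reading of the correction)."""
    hits, misses, false_alarms, correct_rejections = (
        max(n, 0.5) for n in (hits, misses, false_alarms, correct_rejections))
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, 60 hits in 100 target trials against 10 false alarms in 160 no-target trials gives a d′ of about 1.8, comparable to the values reported in Table 2.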
The report on target presence or absence alone provides only a single point of each observer's receiver operating characteristic (ROC) curve. Hence, we expanded the analysis of target detection by using the confidence ratings to construct ROC curves. In this analysis, each point on the ROC corresponds to the true positive and false alarm rates at a specific confidence rating, yielding six points per ROC estimate. To compare ROC curves, we estimated the area under the ROC curve (AUC). The AUC falls between 0.5 and 1, where 0.5 indicates chance-level detection and 1 indicates perfect detection with no false alarms; the higher the AUC, the better the detection. We estimated the AUC by the linear trapezoidal method, connecting each successive point on the ROC curve linearly to the previous one. 
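A sketch of the trapezoidal AUC estimate (generic code, not the authors'; we assume the confidence-based ROC points are anchored at (0, 0) and (1, 1), as is conventional for confidence-rating ROCs):

```python
def auc_trapezoid(fa_rates, hit_rates):
    """Area under an empirical ROC curve by the linear trapezoidal
    rule: sort the points by false-alarm rate, anchor the curve at
    (0, 0) and (1, 1), and sum the trapezoid areas between
    neighboring points."""
    pts = [(0.0, 0.0)] + sorted(zip(fa_rates, hit_rates)) + [(1.0, 1.0)]
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# A single ROC point on the diagonal yields chance-level AUC = 0.5.
chance = auc_trapezoid([0.5], [0.5])
```

Because the ROC is concave in expectation, the linear trapezoidal rule gives a slightly conservative (lower-bound) estimate of the true AUC.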
Results
Ten human observers were asked to decide in 620 sequences of natural scenes whether zero, one, or two targets had been presented, declare in case of a single detection the target's color (“color” or “grayscale”), and rate their confidence. 
Zero- and one-target sequences
Although the main analysis focuses on the interaction of and preference for targets in two-target sequences, the zero- and one-target sequences serve as baseline. If observers responded “two targets” in zero or one-target trials, it is not evident which part of the sequence triggered this false alarm. Similarly, if observers correctly reported “one target” but named the wrong color, it is unclear whether the target was correctly detected and color naming failed, or whether there was the combination of a false alarm and a miss. As these two types of responses constitute only 6.6% of all the 10 × 320 zero- and one-target sequences when averaged across observers, we discarded these rare events from further analysis. 
In the zero- and one-target sequences, all observers performed well above chance but responded conservatively, with far more misses (false negatives) than false positives. Across all these sequences, hit (true positive) rates ranged from 49.3% to 74.5% ( M = 62%, SD = 7.9%), while false positive rates ranged from 2.5% to 24.3% ( M = 10.8%, SD = 6.5%). We split the analysis according to serial position (T1 versus T2) and color (color versus grayscale) of targets and reports. To quantify target detection, we computed d′ from the zero- and one-target sequences ( Table 2) in these conditions. 
Table 2
 
Sensitivity measure d′ estimated from the zero- and one-target trials, split according to target serial position (T1 and T2) and target color (color and grayscale).
Observer T1 color T2 color T1 grayscale T2 grayscale
1 1.26 1.48 0.82 1.50
2 1.03 1.09 1.49 1.50
3 1.26 1.05 1.32 2.15
4 2.01 2.08 1.33 1.95
5 2.09 2.01 1.49 2.09
6 2.05 2.21 1.77 1.28
7 1.70 1.70 1.61 1.90
8 2.33 1.36 1.93 1.48
9 1.63 2.05 1.27 2.29
10 1.61 1.42 1.81 2.48
A two-way ANOVA testing for the effects of target serial position (T1 versus T2) and target color (color versus grayscale) on d′ shows no significant main effects (target serial position: F(1,36) = 2.95, p = 0.095; target color: F(1,36) = 1.70, p = 0.20) nor any interaction effects ( F(1,36) = 0.0005, p = 0.98). The lack of interaction allows us to analyze the effects of color pooled across T1 and T2 serial position. Plotting hit rates (1 minus miss rate) versus false alarm rates (1 minus correct rejections) for each observer split by color shows that observers' judgements were similar in their conservative nature for color targets ( Figure 2A; hit rates: M = 61.3%, SD = 7.0%; false alarm rates: M = 8.5%, SD = 5.4%) and grayscale targets ( Figure 2B; hit rates: M = 65.7%, SD = 9.8%; false alarm rates: M = 13.1%, SD = 7.9%). False alarms were attributed according to the reported target color. Using the confidence ratings to obtain a six-point ROC curve for each observer ( Figure 2C), we found no significant differences in the AUCs for target serial position ( Figure 2D; t(9) = 0.42, p = 0.68, paired t-test) nor for target color ( Figure 2D; t(9) = 0.82, p = 0.43). In sum, this analysis shows no evidence for an effect of color or serial position in one-target trials; in particular, color does not aid the detection of a single target. 
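The paired comparisons of AUC values can be sketched with a hand-rolled paired t statistic (a generic sketch of the test, not the authors' analysis code):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic for two matched samples, e.g. each
    observer's AUC for color versus grayscale targets: the mean of
    the pairwise differences divided by its standard error."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
```

With n = 10 observers, the resulting statistic would be compared against a t distribution with 9 degrees of freedom, matching the t(9) values reported above.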
Figure 2
 
Zero- and one-target trial performance. Receiver-operator characteristic (ROC) for zero- and one-target sequences split according to (A) color targets and (B) grayscale targets. False alarms are attributed according to reported color. Each data point denotes one individual. Chance performance is indicated by the dashed line. Only binary responses, but not confidence ratings were used for these plots. (C) ROC for color versus grayscale targets and associated AUC for observer 8. Each point corresponds to one confidence level, yielding 6 points per ROC. Curve was plotted by joining each point on the ROC linearly. (D) AUC comparison across target serial position on the left and target color on the right; mean and standard error over observers.
Two-target sequences
In the 300 two-target sequences, each colored and grayscale target pairing could be identical, related or unrelated. First we assess hit rates for individual targets. Since the mix-up of colors was very rare in one-target trials, we can safely assume that the color report in a two-target trial reliably identifies which target was detected and which one was missed. The number of two-target trials with zero, one or two hits varies depending on the relatedness of the targets ( Figure 3). The number of sequences in which zero or two targets are detected is higher the more related the targets are (unrelated < related < identical); consequently the fraction of one-hit trials decreases with relatedness (identical < related < unrelated). The higher number of trials with zero and two targets detected for related and identical targets than for unrelated targets is most likely due to target-specific difficulties. If a certain target is missed the first time, then a related or identical target is likely to be missed again and vice versa. As no apparent trends were found when comparing the species of missed and detected animal targets, it is likely that this inherent ease or difficulty reflects an observer-specific preference. Overall, the different response classes (zero, one, or two hits) in two-target trials are not biased toward one animal category, and for each level of relatedness (identical, related, and unrelated), there are sufficient data for each response. 
Figure 3
 
Individual hits and categorical relatedness. Number of trials with 0, 1, or 2 hits for unrelated, related, and identical two-target sequences. Numbers provide mean and standard error across observers.
To statistically quantify the effects of categorical relatedness (unrelated, related, or identical), target serial position (T1 or T2), and target color (color or grayscale) as well as their interactions in two-target trials across subjects, we perform three series of analyses. First, we analyze all two-target trials; second, we consider only two-target trials in which the response was “1”, i.e. exactly one target was hit; third, we normalize the hit-rates within each subject and relatedness category. 
As a first analysis we perform a three-way ANOVA on the number of hits with the factors relatedness, serial position, and color on all two-target trials. We find no significant three-way interaction ( F(2,108) = 0.16, p = 0.85), but significant two-way interactions between relatedness and serial position ( F(2,108) = 3.58, p = 0.03) as well as between relatedness and color ( F(2,108) = 4.96, p = 0.009). We find a main effect for serial position ( F(1,108) = 4.07, p = 0.046) and a trend toward a main effect of color ( F(1,108) = 3.18, p = 0.077), but none for relatedness ( F(2,108) = 0.37, p = 0.69). The two-way interactions suggest separate analyses for each relatedness category, using two-way ANOVAs. As expected from the lack of a three-way interaction, we do not find a significant interaction between color and serial position in any of the three relatedness categories (identical: F(1,36) = 0.67, p = 0.42; related: F(1,36) = 0.17, p = 0.68; unrelated: F(1,36) = 2.56, p = 0.12). A main effect for serial position ( Figure 4A) is found only for identical targets ( F(1,36) = 6.00, p = 0.02), but not for related ( F(1,36) = 2.43, p = 0.13) or unrelated ( F(1,36) = 1.41, p = 0.24) targets. In the case of identical targets, the second target in a sequence obtains higher hit rates ( M = 65.1%, SD = 7.8%) than the first one ( M = 58.2%, SD = 7.9%). Conversely, a main effect of color ( Figure 4B) is found only for unrelated targets ( F(1,36) = 20.18, p < 0.0001), but not for identical ( F(1,36) = 0.15, p = 0.70) or related ( F(1,36) = 0.64, p = 0.43) targets. In the case of unrelated targets, colored targets obtain higher hit rates ( M = 65.8%, SD = 5.4%) than grayscale targets ( M = 57.1%, SD = 5.6%). In summary, this analysis shows that when a sequence contains two targets, colored targets are reported more frequently if and only if the targets are unrelated, and T2 targets are reported more frequently if and only if the targets are identical. 
Figure 4
 
Preference in two-target trials. (A, B) All two-target trials: preference for target serial position (A) and target color (B). (C, D) Two-target trials with exactly one hit: preference for target serial position (C) and target color (D). (E, F) Percentage of one-hit trials with T2 (E) and color-target (F) hits; percentages are normalized within observers and relatedness categories; dashed line indicates equal preference. In all panels, bars and error bars indicate mean and standard error over observers. Significance markers refer to two-way ANOVA main effects in each relatedness category (see text for details).
The analysis so far has assumed that performance on T2 is independent of performance on T1 in any given trial, and considered all two-target trials. As an alternative, we consider only the two-target trials in which the response is “1”, i.e., one target was hit while the other was missed. We are particularly interested in this subset of trials because it allows us to determine target preferences with respect to target color and/or serial position explicitly under the different relatedness conditions. Qualitatively, the results are identical to those for all two-target sequences. In total, there are 106.1 ± 19.2 (M ± SD) two-target trials per subject in which exactly one target is reported. A three-way ANOVA (relatedness × serial position × color) does not show a significant three-way interaction (F(2,108) = 1.73, p = 0.18); of the two-way interactions, those between relatedness and serial position (F(2,108) = 7.16, p = 0.001) and between relatedness and color (F(2,108) = 9.91, p = 0.0001) are significant, but not the one between serial position and color (F(1,108) = 0.67, p = 0.41). In this analysis, all main effects are significant (relatedness: F(2,108) = 20.71, p < 0.0001; serial position: F(1,108) = 8.12, p = 0.005; color: F(1,108) = 6.36, p = 0.01). The main effect of relatedness, which was absent in the analysis of all two-target trials above, reflects the difference in the overall number of one-hit two-target trials between relatedness categories (cf. Figure 3). The other main effects are consistent with the previous analysis, with overall preferences for T2 (M = 57.4, SD = 10.8) over T1 (M = 48.7, SD = 10.8) and for colored targets (M = 56.9, SD = 11.0) over grayscale ones (M = 49.2, SD = 10.8).
In light of the two-way interactions between relatedness and serial position and between relatedness and color, we again split this analysis by relatedness category. As expected from the lack of a three-way interaction, there are no significant two-way interactions between serial position and color for any relatedness category (identical: F(1,36) = 0.15, p = 0.70; related: F(1,36) = 3.64, p = 0.06; unrelated: F(1,36) = 0.03, p = 0.86). For target serial position (Figure 4C), we find significant main effects for identical (F(1,36) = 14.60, p = 0.0005) and related targets (F(1,36) = 4.99, p = 0.03), but not for unrelated ones (F(1,36) = 1.97, p = 0.17); in both significant cases the second target is preferred over the first (identical: M = 17.3, SD = 4.6 for T2, M = 10.4, SD = 4.2 for T1; related: M = 19.4, SD = 5.8 for T2, M = 15.3, SD = 4.4 for T1). For target color (Figure 4D), we find a significant main effect only for unrelated targets (F(1,36) = 28.14, p < 0.0001), but not for identical (F(1,36) = 0.37, p = 0.55) or related (F(1,36) = 1.31, p = 0.26) targets. Again, colored targets (M = 26.2, SD = 5.1) are preferred over grayscale ones (M = 17.5, SD = 4.0). In summary, the analysis of two-target trials with exactly one hit confirms the preference for T2 in the case of identical targets and finds a slightly weaker preference for T2 in the case of related targets. More importantly, this analysis confirms the preference to recall the colored target if and only if the two targets are unrelated.
Finally, to account for inter-individual performance differences, we analyze the same data relative to the number of two-target trials in which one target was reported, normalized individually within each observer and relatedness level. If there were no preference for T2 in the one-hit two-target trials, the percentage of T2 hits would be 50%; if all one-hit two-target trials reported T2, it would be 100%. We find a preference for T2 well above the 50% chance level only for identical targets (M = 63.0%, SD = 10.0%; t(9) = 4.11, p = 0.003, t-test; Figure 4E), but only a trend for related targets (M = 55.5%, SD = 7.7%; t(9) = 2.23, p = 0.053) and a reverse trend for unrelated ones (M = 47.7%, SD = 3.0%; t(9) = −2.45, p = 0.04). In the one-target trials in which the target was correctly reported, there was no preference for either serial position (M = 48.6%, SD = 4.0%; t(9) = −1.11, p = 0.30). Performing the same normalized analysis for color, we find that for unrelated targets the percentage of color hits is also well above chance (M = 60.0%, SD = 5.6%; t(9) = 5.65, p = 0.0003; Figure 4F). In contrast, there is no significant deviation from chance for identical (M = 51.5%, SD = 12.4%; t(9) = 0.39, p = 0.71) or related (M = 47.3%, SD = 5.1%; t(9) = −1.7, p = 0.13) targets. The analogous control analysis of one-target trials does not show any preference for color (M = 49.6%, SD = 2.9%; t(9) = −0.44, p = 0.69). Overall, the normalized analysis again confirms the robust preference for color, which is observed only for unrelated targets.
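The normalized-preference test above amounts to a one-sample t-test of per-observer percentages against the 50% chance level. A minimal sketch, using made-up per-observer percentages rather than the study's data:

```python
# Sketch of the normalized-preference analysis: for each observer, the
# percentage of one-hit two-target trials in which T2 was the reported
# target, tested against the 50% chance level with a one-sample t-test.
import numpy as np
from scipy import stats

# hypothetical percentages for ten observers (illustrative values)
t2_percent = np.array([63., 71., 58., 55., 66., 69., 60., 52., 74., 62.])

t_stat, p_val = stats.ttest_1samp(t2_percent, popmean=50.0)
print(f"mean = {t2_percent.mean():.1f}%, "
      f"t({len(t2_percent) - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
```

The same call, applied to the percentage of colored-target hits, yields the color tests reported in the text.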
To test whether the strongest observed effects, the preference for the second identical target and the preference for color in unrelated targets, are robust across individuals, we analyze each observer's results individually, again for the one-hit two-target trials. For identical targets, nine out of ten observers prefer T2 (Figure 5A). For unrelated targets, all ten observers prefer the colored target (Figure 5B). The results obtained on average across all observers thus hold consistently across individuals. Depending on categorical relatedness, colored or later targets are preferentially recalled, although neither is easier to detect per se; both preferences therefore arise at a stage of the recognition cascade later than detection.
Figure 5
 
Preference of individual observers in one-hit two-target trials. (A) Number of trials with T1 target hits versus T2 target hits for two-target sequences with two identical targets and 1 hit. (B) Number of trials with color target hits versus grayscale target hits for two-target sequences with two unrelated targets and 1 hit.
Confidence
In a verbal debrief after the experiment, seven of the ten observers noted that colored targets were “easier” to detect than grayscale targets. These statements suggest that color could boost confidence despite having no effect on performance, as reflected in the lack of differences between AUCs. To remove possible confounds from confidence–accuracy relationships, we consider only the hit trials. Confidence was assessed as the number of hit trials receiving the highest confidence rating (3), expressed as a fraction of all hit trials. This normalization accounts for observer-to-observer variability and assumes that each observer applies the same internal judgment criteria to colored and grayscale targets. This fraction is plotted for each observer, for the one-target trials, in Figure 6A, where the identity line indicates equal confidence for grayscale and colored targets. Nine of the ten observers fall below the identity line, indicating that they were generally more confident when recognizing colored targets than grayscale targets (p = 0.02, sign test).
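The sign test used here reduces to an exact two-sided binomial test on the number of observers falling below the identity line. A minimal sketch (the 9-of-10 count comes from the text; the implementation is illustrative):

```python
# Sketch of the sign test for Figure 6A: count how many of the ten
# observers were more confident for colored than grayscale hits, and
# test that count against a fair coin (p = 0.5).
from scipy import stats

n_observers = 10
n_prefer_color = 9  # observers below the identity line in Figure 6A

# two-sided exact binomial (sign) test
result = stats.binomtest(n_prefer_color, n_observers, p=0.5)
print(f"sign test: k = {n_prefer_color}/{n_observers}, "
      f"p = {result.pvalue:.3f}")
```

For 9 of 10 observers the exact two-sided p-value is 22/1024 ≈ 0.021, consistent with the p = 0.02 reported above.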
Figure 6
 
Confidence. Fraction of one-hit trials with confidence rating “3”, split by colored (x-axis) and grayscale (y-axis) targets; data points denote individual observers. Points above the diagonal express higher confidence in grayscale targets, points below in colored targets. (A) One-target sequences. (B) Two-target sequences; marker denotes relatedness.
The same confidence measure was applied to two-target sequences with one hit (Figure 6B). As in the one-target sequences, the majority of observers were more confident about hits on colored targets than on grayscale targets. Furthermore, observers were most confident about unrelated targets, less confident about related targets, and least confident about identical targets, as indicated by the clustering pattern in Figure 6B. This shift in confidence may be related to the difference in the proportion of sequences with zero, one, or two hits across the relatedness categories. In the related and identical categories, observers tend to either miss both or detect both targets, and thus might be more uncertain when they detect only one of the two. In summary, observers are generally more confident in recalling colored than grayscale items, irrespective of whether their objective performance differs. Together with the color preference for unrelated targets, this argues in favor of a comparably late role for color in the processing pipeline.
Discussion
The present study investigates the role of color in rapid recognition. Using one-target sequences, we find that neither color nor serial position influences target detection rates, but color modulates subjective confidence in one's judgment. When two distinct items are presented, however, the colored target is preferentially reported, whereas for two identical items the one presented last is remembered best.
The one-target trials show a null effect of color. This is in line with earlier studies using target-detection tasks (Delorme et al., 2000; Fei-Fei et al., 2005), but seems to conflict with studies that used scene-categorization tasks (Goffaux et al., 2005; Oliva & Schyns, 2000). Effects of color were usually found, however, when scenes or objects were color diagnostic or when the task required detailed perceptual analysis. In light of our results, such paradigms probably probe a later processing stage more strongly than the rapid recognition tasks of Delorme et al. (2000) and Fei-Fei et al. (2005). Hence our finding that color plays a role only late in processing reconciles these seemingly contradictory views and highlights the distinction between detection and later stages of visual processing.
We speculate that the dependence on categorical relatedness reflects a memory limitation of recognition: when only one distinct item has to be remembered (one target, or two identical targets), detection alone suffices for a successful report, whereas in the two-item case (unrelated targets) there is a greater demand on memory. Categorically “related” targets are treated similarly to identical targets, suggesting that memory may operate on a superordinate level (category rather than exemplar). This interpretation is in line with findings on short-term memory in word-recall experiments (Poirier & Saint-Aubin, 1995). In this view, color ensures preferential access to memory when two distinct items, i.e., two unrelated targets, must be remembered. From our study alone, it is not clear whether the preferential access occurs at encoding or at retrieval, though previous recognition-memory studies with natural scenes have suggested that color benefits both (Gegenfurtner & Rieger, 2000). While further research is needed to uncover to what degree color aids each stage of the recognition cascade, our results clearly show that it has no effect on the initial stage, target detection.
The benefit of color for reporting an item under high memory load is reminiscent of the increased attentional demands that arise when more than one item must be reported. While the decision whether a single image contains an animal can be made in the (near) absence of attention (Li et al., 2002; Rousselet, Fabre-Thorpe, & Thorpe, 2002), there are clear limits on the number of items that can be processed in parallel (Rousselet, Thorpe, & Fabre-Thorpe, 2004). On this basis, the notion of ultra-rapid natural scene processing itself has been challenged (Walker, Stafford, & Davis, 2008). However, as Evans and Treisman (2005) have pointed out based on attentional-blink experiments with natural scenes, it is important to distinguish between detection (animal or no animal) and identification (name of the animal) when considering attentional demands for natural scene processing. Our results show a similar distinction for color, which, like attention, is beneficial only late in processing. Although based on our experiment alone we make no claim that color effects and attentional effects are directly related, both lines of research highlight the dissociation between detection and final report.
From our data, we conclude that color ensures preferential access to and/or retrieval from memory, but does not ease detection per se. The fact that color boosts confidence makes it likely that both items gain access to memory, but that the colored target is preferentially retrieved and thus reaches the (conservative) criterion for report more frequently. Our key finding is that in the recognition cascade from detection through recognition proper to memorization, retrieval, and report, color is needed only in the later stages, possibly reconciling results previously thought to be in conflict.
Acknowledgments
Commercial relationships: none. 
Corresponding author: Angela Yao. 
Email: yaoa@vision.ee.ethz.ch. 
Address: Computer Vision Laboratory, Sternwartstrasse 7, ETH-Zentrum, CH-8092 Zürich, Switzerland. 
References
Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115–147.
Biederman, I., & Ju, G. (1988). Surface versus edge-based determinants of visual recognition. Cognitive Psychology, 20, 38–64.
Bowmaker, J. K., & Hunt, D. M. (2006). Evolution of vertebrate visual pigments. Current Biology, 16, R484–R489.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Brodie, E. E., Wallace, A. M., & Sharrat, B. (1991). Effect of surface characteristics and style of production on naming and verification of pictorial stimuli. American Journal of Psychology, 104, 517–545.
Bruner, J. (1957). Going beyond the information given. In Contemporary approaches to cognition: A report of a symposium at the University of Colorado, May 12–14, 1955. Cambridge, MA: Harvard University Press.
Changizi, M. A., Zhang, Q., & Shimojo, S. (2006). Bare skin, blood and the evolution of primate colour vision. Biology Letters, 2, 217–221.
Davidoff, J. B., & Ostergaard, A. L. (1988). The role of colour in categorial judgements. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 40, 533–544.
Delorme, A., Richard, G., & Fabre-Thorpe, M. (2000). Ultra-rapid categorisation of natural scenes does not rely on colour cues: A study in monkeys and humans. Vision Research, 40, 2187–2200.
Einhäuser, W., Mundhenk, T. N., Baldi, P., Koch, C., & Itti, L. (2007). A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition. Journal of Vision, 7(10):6, 1–13, http://journalofvision.org/7/10/6/, doi:10.1167/7.10.6.
Evans, K. K., & Treisman, A. (2005). Perception of objects in natural scenes: Is it really attention-free? Journal of Experimental Psychology: Human Perception and Performance, 31, 1476–1492.
Fei-Fei, L., VanRullen, R., Koch, C., & Perona, P. (2005). Why does natural scene categorization require little attention? Exploring attentional requirements for natural and synthetic stimuli. Visual Cognition, 12, 893–924.
Gegenfurtner, K. R., & Rieger, J. (2000). Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10, 805–808.
Goffaux, V., Jacques, C., Mouraux, A., Oliva, A., Schyns, P. G., & Rossion, B. (2005). Diagnostic colours contribute to the early stages of scene categorization: Behavioural and neurophysiological evidence. Visual Cognition, 12, 878–892.
Humphrey, G. K., Goodale, M. A., Jakobson, L. S., & Servos, P. (1994). The role of surface information in object recognition: Studies of a visual form agnosic and normal subjects. Perception, 23, 1457–1481.
Joseph, J. E., & Proffitt, D. R. (1996). Semantic versus perceptual influences of color in object recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 407–429.
Li, F. F., VanRullen, R., Koch, C., & Perona, P. (2002). Rapid natural scene categorization in the near absence of attention. Proceedings of the National Academy of Sciences of the United States of America, 99, 9596–9601.
Lucas, P. W., Dominy, N. J., Riba-Hernandez, P., Stoner, K. E., Yamashita, N., & Loría-Calderón, E. (2003). Evolution and function of routine trichromatic vision in primates. Evolution, 57, 2636–2643.
Mapelli, D., & Behrmann, M. (1997). The role of color in object recognition: Evidence from visual agnosia. Neurocase, 3, 237–247.
Nijboer, T. C., Kanai, R., de Haan, E. H., & van der Smagt, M. J. (2008). Recognising the forest, but not the trees: An effect of colour on scene perception and recognition. Consciousness and Cognition, 17, 741–752.
Oliva, A., & Schyns, P. G. (2000). Diagnostic colors mediate scene recognition. Cognitive Psychology, 41, 176–210.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Poirier, M., & Saint-Aubin, J. (1995). Memory for related and unrelated words: Further evidence on the influence of semantic factors in immediate serial recall. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 48, 384–404.
Price, C. J., & Humphreys, G. W. (1989). The effects of surface detail on object categorization and naming. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 41, 797–827.
Rossion, B., & Pourtois, G. (2004). Revisiting Snodgrass and Vanderwart's object pictorial set: The role of surface detail in basic-level object recognition. Perception, 33, 217–236.
Rousselet, G. A., Fabre-Thorpe, M., & Thorpe, S. J. (2002). Parallel processing in high-level categorization of natural images. Nature Neuroscience, 5, 629–630.
Rousselet, G. A., Thorpe, S. J., & Fabre-Thorpe, M. (2004). Processing of one, two or four natural scenes in humans: The limits of parallelism. Vision Research, 44, 877–894.
Spence, I., Wong, P., Rusan, M., & Rastegar, N. (2006). How color enhances visual memory for natural scenes. Psychological Science, 17, 1–6.
Stanislaw, H., & Todorov, N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31, 137–149.
Tanaka, J. W., & Presnell, L. M. (1999). Color diagnosticity in object recognition. Perception & Psychophysics, 61, 1140–1153.
Tanaka, J., Weiskopf, D., & Williams, P. (2001). The role of color in high-level vision. Trends in Cognitive Sciences, 5, 211–215.
Ullman, S. (1984). Visual routines. Cognition, 18, 97–159.
Vogel, J., Schwaninger, A., Wallraven, C., & Bülthoff, H. H. (2007). Categorization of natural scenes: Local versus global information and the role of color. ACM Transactions on Applied Perception, 4, 19.
Walker, S., Stafford, P., & Davis, G. (2008). Ultra-rapid categorization requires visual attention: Scenes with multiple foreground objects. Journal of Vision, 8(4):21, 1–12, http://journalofvision.org/8/4/21/, doi:10.1167/8.4.21.
Wichmann, F. A., Sharpe, L. T., & Gegenfurtner, K. R. (2002). The contributions of color to recognition memory for natural scenes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 509–520.
Wurm, L. H., Legge, G. E., Isenberg, L. M., & Luebker, A. (1993). Color improves object recognition in normal and low vision. Journal of Experimental Psychology: Human Perception and Performance, 19, 899–911.
Figure 1
 
Paradigm. Schematic depiction of the animal-recognition RSVP task. Forty images are presented for 50 ms each; the first half (20 images) are presented in grayscale and the second half in color or vice versa. Each half can contain 0 or 1 target (animal) frame, yielding 0, 1, or 2 targets per trial. Possible target frames range from 6 to 15 (for T1) and from 26 to 35 (for T2), ensuring at least 1 s between the targets in two-target trials, and at least 250 ms of frames in the same color before and after each target. In the example shown, target relatedness between T1 and T2 is “identical.”
Figure 2
 
Zero- and one-target trial performance. Receiver operating characteristic (ROC) for zero- and one-target sequences, split into (A) colored targets and (B) grayscale targets. False alarms are attributed according to the reported color. Each data point denotes one individual observer; chance performance is indicated by the dashed line. Only binary responses, not confidence ratings, were used for these plots. (C) ROC for colored versus grayscale targets and associated AUC for observer 8. Each point corresponds to one confidence level, yielding six points per ROC; the curve connects adjacent ROC points linearly. (D) AUC comparison across target serial position (left) and target color (right); mean and standard error over observers.
Figure 3
 
Individual hits and categorical relatedness. Number of trials with 0, 1, or 2 hits for unrelated, related, and identical two-target sequences. Numbers provide mean and standard error across observers.
Table 1
 
Relatedness levels of animal targets, with occurrence frequencies in parentheses. “Related” target pairs were formed by randomly drawing two targets from a single entry. “Unrelated” target pairs were formed by randomly drawing two targets from two different entries that did not share a common prefix (e.g., dog—domestic could not be paired with dog—wolf). Note that species were categorized prior to the experiment according to visual similarity, and do not necessarily reflect biological relation.
Animal categories
dog—domestic (24), dog—fox (25), dog—wolf (38)
deer—ram (18), deer—caribou (23), deer—gazelle (17), deer—giraffe (3), deer—deer (13)
bird—eagle (27), bird—owl (15), bird—flying (19), bird—nest (11), bird—water (14), bird—other (56), bird—turkey (5)
bear—brown (11), bear—polar (20)
marine—seal (5), marine—walrus (12), marine—penguin (17), marine—whale (9), marine—shark (7)
bug—caterpillar (3), bug—butterfly (17), bug—other (2)
cat—domestic (9), cat—leopard (24), cat—lion (10), cat—lynx (33), cat—panther (4), cat—tiger (30)
turtle (9), rabbit (6), hippo (8), horse (27), panda (4), raccoon (12), rhino (21), bison (12), kangaroo (4), koala (11), elephant (29), fish (10), frog (3), snake (18)
Table 2
 
Sensitivity measure d′ estimated from the zero- and one-target trials, split according to target serial position (T1 and T2) and target color (color and grayscale).
Observer T1 color T2 color T1 grayscale T2 grayscale
1 1.26 1.48 0.82 1.50
2 1.03 1.09 1.49 1.50
3 1.26 1.05 1.32 2.15
4 2.01 2.08 1.33 1.95
5 2.09 2.01 1.49 2.09
6 2.05 2.21 1.77 1.28
7 1.70 1.70 1.61 1.90
8 2.33 1.36 1.93 1.48
9 1.63 2.05 1.27 2.29
10 1.61 1.42 1.81 2.48
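The d′ values in Table 2 follow the standard signal-detection definition, d′ = z(hit rate) − z(false-alarm rate), with z the inverse of the standard normal cumulative distribution (cf. Stanislaw & Todorov, 1999). A minimal sketch with illustrative rates (not the study's data):

```python
# Sketch of the sensitivity estimate in Table 2:
# d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the
# standard normal CDF (scipy's norm.ppf).
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity from hit and false-alarm rates."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. 69% hits against 16% false alarms (illustrative numbers)
print(round(d_prime(0.69, 0.16), 2))
```

Equal hit and false-alarm rates yield d′ = 0, i.e. no sensitivity; rates of 0 or 1 would need a correction (e.g. the log-linear rule) before applying the inverse CDF.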