Article | December 2014
Rapid serial processing of natural scenes: Color modulates detection but neither recognition nor the attentional blink
Svenja Marx, Onno Hansen-Goos, Michael Thrun, Wolfgang Einhäuser
Journal of Vision December 2014, Vol. 14(14), Article 4. doi:10.1167/14.14.4

Citation: Svenja Marx, Onno Hansen-Goos, Michael Thrun, Wolfgang Einhäuser; Rapid serial processing of natural scenes: Color modulates detection but neither recognition nor the attentional blink. Journal of Vision 2014;14(14):4. https://doi.org/10.1167/14.14.4.

Abstract

The exact function of color vision for natural-scene perception has remained puzzling. In rapid serial visual presentation (RSVP) tasks, categorically defined targets (e.g., animals) are typically detected slightly better in color than in grayscale stimuli. Here we test the effect of color on animal detection, recognition, and the attentional blink. We present color and grayscale RSVP sequences with up to two target images (animals) embedded. In some conditions, we modify either the hue or the intensity of each pixel. We confirm a benefit of color over grayscale images for animal detection over a range of stimulus onset asynchronies (SOAs), with improved hit rates from 50 to 120 ms and overall improved performance from 90 to 120 ms. For stimuli in which the hue is inverted, performance is similar to grayscale at small SOAs and indistinguishable from original color only at large SOAs. For subordinate category discrimination, color provides no additional benefit. Color and grayscale sequences both show an attentional blink, but differences between color and grayscale are fully explained by single-target differences, ruling out the possibility that the color benefit is purely attentional.

Introduction
The primate visual system is remarkably fast in grasping the “gist” of a complex natural scene (Biederman, 1972; Potter & Levy, 1969). Although the exact definition of what constitutes such a gist has remained elusive, Fei-Fei, Iyer, Koch, and Perona (2007) have provided a working—albeit somewhat circular—definition as the “contents of a glance.” Experimental tests on the limits of perception within a glance frequently employ detection and/or recognition tasks. For example, observers are asked whether a scene contained a given high-level category (e.g., animal, means of transportation). When scenes are presented in isolation and without a postmask, humans perform such tasks near ceiling for presentation durations as short as 20 ms (Thorpe, Fize, & Marlot, 1996). In that study, manual responses were given in under 300 ms, and the earliest category-dependent signal in the event-related potential (ERP) emerged as early as 150 ms after stimulus onset. In a later study that used a forced-choice saccade task, in which saccades had to be made to the hemifield where an animal had briefly been shown, some participants had reaction times as short as 120 ms (Kirchner & Thorpe, 2006). These findings are not restricted to animal targets, but also hold for inanimate items (vehicles/no vehicles; VanRullen & Thorpe, 2001). Nonhuman primates show qualitatively similar rapid categorization and are even somewhat faster than humans (Delorme, Richard, & Fabre-Thorpe, 2000; Fabre-Thorpe, Richard, & Thorpe, 1998).
Complementary to asking for possible neural implementations (Thorpe, Delorme, & VanRullen, 2001; Thorpe & Gautrais, 1997) of rapid scene processing, two questions arise on a behavioral level: First, which features are responsible for rapid recognition, and second, how does rapid recognition relate to attentional processes? Wichmann, Drewes, Rosas, and Gegenfurtner (2010) addressed the former question and particularly the role of the power spectrum in rapid animal detection. They found that a spectral cue can ease animal detection without being causal for it. In another detailed analysis of the former question, Elder and Velisavljević (2009) investigated the role of several potential cues in a rapid (30–120 ms) animal/no animal categorization task: two-dimensional boundary shape, texture, luminance, and color. They found that the fastest mechanisms relied on shape, while somewhat slower mechanisms integrated shape with texture cues to become more robust. Color and luminance played virtually no role in this categorization task. Meng and Potter (2008) found similar results in an RSVP detection task with varying presentation durations (53, 107, 213, and 426 ms): Removing color information did not affect performance. In contrast, Delorme et al. (2010) investigated visual features for rapid (32-ms presentation) animal categorization without a postmask in natural scenes and found a small but significant benefit of color in accuracy for responses later than 325 ms, while there was no benefit of color for the fastest responses. In addition to global image characteristics like luminance and color, they also tested the dependence of accuracy and reaction time on diagnostic animal features and target configuration. The most crucial features leading to high accuracy and speed turned out to be the presence of a typical animal posture and the area occupied by the animal (20%–30%). Wichmann, Braun, and Gegenfurtner (2006) reported an increase in performance of 2%–3% for colored as compared to grayscale pictures in a rapid animal/no animal categorization task. In monkeys and humans, color had a small but significant effect on reaction times when food had to be detected, but not when animals had to be detected, and performance dropped slightly in some humans when color was removed (Delorme et al., 2000). The authors concluded that rapid identification may rely mainly on fast feed-forward processing of achromatic information in the magnocellular pathway.
In a rapid serial visual presentation (RSVP) paradigm, Yao and Einhäuser (2008) again found little effect of color on the detection of a single animal target among natural-scene distractors, though color boosted observers' confidence. In contrast, when participants were presented with two animal targets that belonged to different subordinate categories (species) within the same stream, the colored target was preferentially reported. This suggests that color, though having little effect on initial detection, plays a role in retrieval from memory. Not only retrieval from but also encoding into memory is influenced by color (Gegenfurtner & Rieger, 2000; Spence, Wong, Rusan, & Rastegar, 2006; Wichmann, Sharpe, & Gegenfurtner, 2002). Gegenfurtner and Rieger (2000) showed that color helps recognition in two ways: by adding a cue during coding at an early level and by adding a cue during retrieval at a later stage. Thus they differentiated between the early, sensory influence and the later, cognitive influence of color. Although the benefit of color in early visual processing is small, it depends on the natural-scene content. If color is diagnostic for certain natural scenes (e.g., sea), it speeds up recognition without affecting accuracy (Oliva & Schyns, 2000; Rousselet, Joubert, & Fabre-Thorpe, 2005) and thus in these cases mediates rapid scene recognition. Nonetheless, the questions remain how the sensory influence of color develops over time, whether it affects the detection of a superordinate category (e.g., animal) and the recognition of subordinate categories (e.g., animal species) alike, and whether or not color yields attentional benefits.
The role of attention in rapid visual processing of natural scenes has been the subject of many studies in recent years. When briefly, peripherally flashed pictures had to be categorized into animals/no animals concurrently with an attentionally demanding central task, performance in neither task dropped as compared to single-task performance (Li, VanRullen, Koch, & Perona, 2002). Importantly, however, performance did drop for attention-demanding peripheral tasks like detecting a rotated L or T, implying that animals—unlike such artificial items—can be detected even in the (near) absence of attention. This logic was later extended to specific subordinate classification tasks, such as gender discrimination (Reddy, Wilken, & Koch, 2004). Using a similar paradigm, Fei-Fei, VanRullen, Koch, and Perona (2005) found that grayscale pictures could also be processed very efficiently when attention was engaged elsewhere; furthermore, animal detection performance in the peripheral task was not impaired when a distractor image was shown in the periphery simultaneously with the target image, at locations where either target or distractor could appear. This indicates that early visual processing of natural scenes is not only nearly attention-free but also highly parallelized. Such parallelization of early visual processing was also found by Rousselet, Fabre-Thorpe, and Thorpe (2002), who used an animal/no animal categorization task in which either one or two pictures were briefly flashed at the same time (left and/or right of central fixation). Reaction times were the same in both conditions, and this was confirmed by category-related ERPs that emerged simultaneously in both conditions (occipital: after 140 ms; frontal: after 160 ms) and only differed after 190 ms.
When two items appear in close temporal succession in an RSVP stream, an attentional blink (AB) is frequently observed: Detection of a second target (T2) is impaired when it is presented in a time window of 200–700 ms after a first target (T1). This decreased detection rate is usually absent if T2 immediately succeeds T1 (“lag-1 sparing”; Raymond, Shapiro, & Arnell, 1992). Initially, these AB paradigms used artificial items (Chun & Potter, 1995; Raymond et al., 1992), but more recently a number of AB studies using natural scenes have been conducted. Evans and Treisman (2005) used the AB in their experiments 4 through 7 as a tool to test attentional effects on natural-scene processing. They presented a series of 12 natural scenes for 110 ms each, two of which contained targets; target categories were animals and vehicles. When both targets had to be identified by naming a subordinate category, a clear AB was measured, and it was more severe when the targets were of different categories than when both were of the same category. There was also a subtle difference between categories, as animals were in general identified slightly better than vehicles. When both targets only had to be detected without being identified, the AB disappeared for targets of the same category and was only marginally present for sequences containing targets of different categories. Another study also found this dependency on stimulus category in an AB paradigm using natural scenes, with faces and watches as target categories (Einhäuser, Koch, & Makeig, 2007): Target identification was better and the AB shorter for faces than for watches. Since the function of color vision in humans and monkeys is frequently associated with attentional processes (Frey, Honey, & König, 2008; Maunsell & Treue, 2006; Motter, 1994; Zhang & Luck, 2009), the question arises whether color has an impact on the timing and depth of the AB.
To investigate the role of color in rapid visual processing and in particular its relation to attention, we conducted four RSVP experiments with animals as the target category. In the first experiment, observers had to report in each trial whether there were zero, one, or two animal targets in a 2-s stream, followed by a four-alternative forced-choice subordinate classification. Streams could be either colored or grayscale. This allowed us to replicate the small but frequently significant benefit of color for single-target processing, to characterize the dependence of subordinate classification on color, and to measure the modulation of the AB by color. In the second experiment, we asked whether the observed color benefits were a consequence of color being diagnostic for animals. To this end, we inverted the hue of each pixel (roughly: red to green, blue to yellow, etc.) while keeping saturation and luminance constant. In the third experiment, we tested whether the effects of color remained the same when the presentation duration was decreased to 50 ms, using the same stimuli as in Experiment 2. In the fourth experiment, we used six different presentation durations (which equaled the stimulus onset asynchronies, SOAs) to test how the dependence on color develops over time.
Methods
In total we conducted four experiments. Experiment 1 targeted the effect of natural color in images on detection, recognition, and the time course of the attentional blink. Experiment 2 aimed at dissociating the effects of color that result from color's diagnosticity for animal images from those that result from other color-related effects. Experiment 3 investigated whether the results of Experiment 2 held for shorter SOAs. Experiment 4 analyzed the detection and recognition of single targets for a larger variety of SOAs. 
Stimuli
A total of 480 animal target stimuli were taken from the COREL data set of animals, vehicles, and distractors (http://vision.stanford.edu/resources_links.html; Li et al., 2002). For subordinate classification, animal images were subdivided into canine (including wolves, foxes, and dogs), feline (including tigers, pumas, and leopards), avian (including all kinds of birds), and ungulate (including horses, deer, cows, and goats), with 120 of each (Figure 1A). Distractor images were taken from the same database. In Experiments 1–3, the same subset of 360 target stimuli (90 per category) was used; in Experiment 4, all 480 were used. Stimuli were 384 × 256 pixels in size. We used four conditions, which we refer to as “original color,” “color inverted,” “grayscale,” and “gray inverted.” To modify stimuli, they were first transformed into the physiologically defined DKL color space (Derrington, Krauskopf, & Lennie, 1984). DKL color space is a three-dimensional space in which the z-axis defines luminance (for convenience, we map the minimal displayable luminance to −0.5 and the maximal displayable luminance to +0.5) and the other two axes are spanned by the differential cone excitations: the difference between L- and M-cone excitation (L − M axis) and the difference between S-cone excitation and the summed L- and M-cone excitation (S − (L + M) axis).
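For illustration, a minimal sketch of these coordinates in Matlab, assuming the per-pixel cone excitations have already been obtained from a calibrated RGB-to-LMS conversion; the function name, variable names, and normalization inputs are our own and not from the original study:

```matlab
% Minimal sketch of the DKL coordinates described above. Assumes per-pixel
% cone excitations Lc, Mc, Sc from a calibrated RGB-to-LMS conversion;
% lumMin/lumMax are the cone-summed luminances of the display's black and
% white points (hypothetical names, for illustration only).
function [lum, rg, by] = lms2dkl(Lc, Mc, Sc, lumMin, lumMax)
    lumRaw = Lc + Mc;                                    % luminance: L + M
    lum = (lumRaw - lumMin) ./ (lumMax - lumMin) - 0.5;  % map to [-0.5, 0.5]
    rg  = Lc - Mc;                                       % L - M cardinal axis
    by  = Sc - (Lc + Mc);                                % S - (L + M) cardinal axis
end
```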
Figure 1. Stimuli and procedure. (a) Example stimuli of all four target categories (feline, canine, ungulate, avian) and two example distractor images. Image modifications: (b) grayscale, (c) color inverted, (d) gray inverted. (e) Procedure. Depicted times correspond to Experiments 1 and 2; in all experiments, targets occurred between serial positions 6 and 15.
The original image (Figure 1A) was kept unchanged. For the grayscale condition (Figure 1B), saturation in DKL space was set to 0 (i.e., each pixel was projected onto the luminance axis). For the color-inverted condition (Figure 1C), the DKL space was rotated by 180°, which maps each hue to its opponent hue without any change in saturation or luminance. Since it is not guaranteed that the modified image can be displayed within the screen's gamut, we applied the following procedure to keep luminance and inverted hue as unaffected as possible. After the hue inversion, we determined for each pixel the maximal chroma (distance from the luminance axis in DKL space) the screen could display for the given hue. If the chroma of the pixel was at or below this maximum, the pixel remained unchanged. If the chroma of the pixel was above this maximum, it was reduced to this maximally displayable value while keeping luminance and hue unchanged (i.e., both cardinal color axes were scaled by the same factor, while the luminance axis was not scaled). On average, 7.2% ± 9.9% of pixels were affected by such a reduction, and on average the scaling factor for the axes was 0.98. That is, the reduction in chroma was small and affected only a small number of pixels. For the gray-inverted condition (Figure 1D), the luminance values were inverted as follows: Luminance was mapped to the interval [0, 1] (i.e., 0.5 was added to the luminance axis of the DKL space), the square root of the resulting values was subtracted from 1, that result was squared, and the result was mapped back to [−0.5, 0.5] by subtracting 0.5.
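In DKL coordinates (lum, rg, by as sketched above), these modifications reduce to a few elementwise operations. The following sketch assumes a per-pixel gamut limit maxChroma has already been computed for each inverted hue (an assumption of ours; this is not the authors' code):

```matlab
% Grayscale: project every pixel onto the luminance axis (chroma = 0).
rgGray = zeros(size(rg));  byGray = zeros(size(by));

% Color inverted: rotating the isoluminant plane by 180 deg is equivalent
% to negating both chromatic axes; luminance stays untouched.
rgInv = -rg;  byInv = -by;
chroma = hypot(rgInv, byInv);                    % distance from luminance axis
scale = min(1, maxChroma ./ max(chroma, eps));   % shrink only out-of-gamut pixels
rgInv = rgInv .* scale;                          % same factor on both axes,
byInv = byInv .* scale;                          % so hue is preserved

% Gray inverted: invert luminance polarity via the square-root mapping
% described in the text, operating on luminance mapped to [0, 1].
lumInv = (1 - sqrt(lum + 0.5)).^2 - 0.5;
```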
Procedure
In Experiments 1–3, observers viewed streams of 20 natural scenes that contained either no target, one target, or two targets (Figure 1E). In Experiment 4, all streams contained either no target or one target. All images in the stream (target and distractors) were subjected to the same color conditions. Observers were asked to fixate the center of the screen and press and release a button to start each trial. After viewing each stream, observers were first asked how many animals they had seen in the preceding stream. Then they were asked to choose the animal class (if they had responded “one”) or classes in order (if they had responded “two”; Experiments 1–3) among the set of four options (feline, canine, avian, ungulate). The number of queries depended on the response, not on ground truth. That is, even if a detection was a false alarm, observers had to respond which category they had recognized, and they were not prompted for categorization if they had not detected any target. 
In Experiment 1, the SOA was 100 ms and only the grayscale and original-color conditions were used. For each color condition (grayscale, original), Experiment 1 included 240 streams with zero targets, 240 streams with one target, and 240 streams with two targets, 48 for each tested lag (one, two, three, four, and seven frames). This yielded a total of 1,440 (2 × 3 × 5 × 48) trials. The order of trials was randomized, and the experiment was split into two sessions of about equal length. In Experiment 2, the SOA was also 100 ms, but each stream was presented in all four color conditions, with 120 streams with zero targets, 120 streams with one target, and 120 streams with two targets (all at lag 2) for each condition, again yielding a total of 1,440 (4 × 3 × 120) trials, which were split into two sessions of about equal length. In Experiment 3, the SOA was 50 ms; the experiment was otherwise identical to Experiment 2. In Experiment 4, six different SOAs were used: the 50 and 100 ms of the previous experiments as well as 30, 60, 90, and 120 ms. Each stream was presented in all four color conditions, with 80 streams with zero targets and 80 streams with one target per color condition and per SOA, yielding 2 × 4 × 6 × 80 = 3,840 trials, which were split into three sessions of about equal length. In Experiment 1, each of the 360 target stimuli was used twice per condition (in different streams of distractors); in Experiments 2 and 3, each of the 360 target stimuli was used once per condition; and in Experiment 4, each of the 480 target stimuli was used once per color condition.
Setup
The study was conducted in a dark, sound-isolated room. Stimuli were presented on a 19.7-in. EIZO FlexScan F77S CRT monitor set to 1024 × 768 pixel resolution at 100 Hz, located 73 cm from the observer, whose head was stabilized with a chin rest and a forehead rest. The maximum luminance (white) was 66.0 cd/m2, the minimum luminance (black) was 0.11 cd/m2, and the CIE color coordinates (x, y) of the monitor's guns were (0.623, 0.344) for red, (0.287, 0.609) for green, and (0.151, 0.065) for blue. Stimuli spanned 11.6° × 7.8° on a gray background. Before each trial, a gray fixation screen with a black fixation cross was presented.
All stimulus preparation, presentation, and data analysis used Matlab (Mathworks, Natick, MA). For presentation, the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997) and Eyelink Toolbox (Cornelissen, Peters, & Palmer, 2002), as retrieved from http://www.psychtoolbox.org, were used with Matlab. 
Participants
Thirty-two volunteers participated in the study: eight (six female, 24.8 ± 3.3 years) in Experiment 1, eight (six female, 26.5 ± 2.8 years) in Experiment 2, eight (two female, 26.6 ± 5.8 years) in Experiment 3, and eight (six female, 25.1 ± 2.9 years) in Experiment 4. They were paid for participation; the experiments conformed to the Declaration of Helsinki and were approved by the local ethics committee (Ethikkommission FB04).
Analysis
Since the design for all experiments was “within-subject” for all variables of interest, all analyses treated observers as repeated measures. For analyses of more than one factor or more than two levels per factor, a repeated-measures ANOVA was used. For post hoc pairwise comparisons and for factors with two levels, paired t tests were used. 
Two types of analysis have to be distinguished, hereafter “detection” and “recognition.” Detection refers to the question of whether the number of targets the observer reported corresponded to the number of targets present in the stream. We tested results for zero-target, one-target, and two-target trials separately and refer to the relevant variables by standard signal-detection-theory terms.
For the first part of analysis in all experiments, we considered only single-target and no-target trials. For zero-target streams, we defined the report of any target (one or two) as a false alarm. For single-target streams there are two possible errors: the report of no target or the report of two targets. Although the latter was rare (see Appendix A for each individual's 3 × 3 matrix of all possible truth/response combinations), we performed the analysis for both definitions: at least one target reported and exactly one target reported. For the computation of d′ (computed as the difference between the z-scored hit and false-alarm rates; Macmillan, 1993), we used the former definition. In Experiment 4, only zero or one target was possible, so that the hit and false-alarm rates are unambiguously defined. 
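As an illustration, a minimal sketch of this d′ computation in Matlab; the trial counts are made-up examples, and the clipping of extreme rates is our assumption rather than a procedure stated in the text:

```matlab
% d' as the difference between z-scored hit and false-alarm rates
% (Macmillan, 1993); the inverse normal CDF is written via base-MATLAB erfcinv.
zsc = @(p) -sqrt(2) * erfcinv(2 * p);        % inverse standard normal CDF
nHit = 110; nOneTarget = 120;                % example counts (hypothetical)
nFa = 6;    nZeroTarget = 120;
% Clip rates away from 0 and 1 so that z-scores stay finite (our assumption;
% the text does not state which correction, if any, was applied).
hitRate = min(max(nHit / nOneTarget, 0.5/nOneTarget), 1 - 0.5/nOneTarget);
faRate  = min(max(nFa / nZeroTarget, 0.5/nZeroTarget), 1 - 0.5/nZeroTarget);
dprime = zsc(hitRate) - zsc(faRate);         % about 3.0 for these counts
```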
In recognition, we asked whether the target was correctly identified according to the four available categories. Most analyses are based on “recognition given detection”; that is, they refer only to trials in which the target or targets were detected. In one-target streams for which two targets were reported, the target was counted as recognized if at least one of the two responses matched the target category. When analyzing recognition for two-target streams in which exactly one target was detected and T1 was of the same category as T2, it is impossible to infer from the response whether T1 or T2 was recognized (as both require the same response). For this particular analysis, we therefore excluded trials in which T1 and T2 were from the same category.
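In code, recognition given detection for single-target trials reduces to a conditional rate; a sketch with hypothetical variable names standing in for the recorded responses:

```matlab
% Recognition given detection for single-target trials. reportedCount is the
% number of targets each observer reported (0, 1, or 2); correctCat is true
% where a reported category matched the target (both names hypothetical).
detected = reportedCount >= 1;               % ">0" definition of a hit
recGivenDet = mean(correctCat(detected));    % fraction recognized among hits
```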
Results
Detection of single targets
For a first analysis, we consider zero-target and single-target trials (Figure 2; Tables 1 and 2). In Experiments 1 and 2, with an SOA of 100 ms, color sequences had more hits (Figure 2A; Appendix B) and fewer false alarms (Figure 2B; Appendix B) than grayscale sequences. The difference in hit rates was in the typically observed range, no matter whether hits were defined as a response of 1 or 2 in one-target trials (Experiment 1: 4.1% ± 3.8%; Experiment 2: 3.1% ± 2.3%; all data: mean ± SD) or as a response of exactly 1 (Experiment 1: 4.8% ± 3.0%; Experiment 2: 4.0% ± 2.2%). Qualitatively, the same held for Experiment 3 (SOA: 50 ms), with a difference of 4.6% ± 4.4% (response > 0) or 5.1% ± 3.8% (response = 1) between color and grayscale. In Experiment 4, where hit rates are unambiguously defined, as only responses 0 and 1 are possible, color had higher hit rates than grayscale for all SOAs (Figure 2A, right). This qualitative benefit of color is also reflected in the separability (d′), which combines hits and false alarms into a single measure of performance (Table 2; Figure 2C): The value of d′ for color sequences is larger than for grayscale sequences across all conditions (Figure 3). In Experiments 2–4 we had two additional conditions: color inverted and gray inverted. The gray-inverted images have fewer hits, more false alarms, and consequently a smaller d′ for all conditions tested (Figures 2 and 3, gray). The color-inverted condition (Figures 2 and 3, red) shows a more mixed pattern: For small SOAs (Experiment 3 and the short SOAs of Experiment 4) it tends to be close to the grayscale condition, while for larger SOAs (Experiment 2 and the long SOAs of Experiment 4) it tends to be close to the color condition.
Figure 2. Detection in zero-target and single-target trials. (A) Hit rate for Experiments 1–3 (left, sorted by SOA) and the different SOAs of Experiment 4. Different colors code different conditions (blue: original color; red: color inverted; black: grayscale; gray: gray inverted). The left panel defines hits as any response to single-target trials (response 1 or 2), the middle panel as an exact response (response 1). For Experiment 4, there was no two-target option. (B) False alarms. Notation as in (A). Left panel: any false alarm (response 1 or 2); middle panel: single false alarm (response 1). (C) Value of d′ as computed from z-scored hit and false-alarm rates of (A) and (B); the “>0” definition of hits and false alarms is used for this computation. Error bars are mean and standard error of the mean across observers.
Figure 3. Difference in d′ to grayscale for the different color conditions. Mean and standard error of the mean across observers.
Table 1. ANOVAs for the effect of color condition (Experiments 2–4) and SOA (Experiment 4) on hits, false alarms, and d′. In Experiments 2 and 3, two different definitions of hits are tested: a response to a one-target trial of at least 1 (“>0”) or exactly 1 (“=1”). Bold type indicates a significant effect.
| Experiment | SOA | Effect of | Hits | False alarms | d′ |
| --- | --- | --- | --- | --- | --- |
| Experiment 2 | 100 ms | Condition | F(3, 21) = 15.7, p = 1.38 × 10^−5 (response > 0); F(3, 21) = 42.05, p = 4.7 × 10^−9 (response = 1) | F(3, 21) = 4.57, p = 0.013 | F(3, 21) = 35.77, p = 1.95 × 10^−8 |
| Experiment 3 | 50 ms | Condition | F(3, 21) = 11.6, p = 1.07 × 10^−4 (response > 0); F(3, 21) = 17.68, p = 5.8 × 10^−6 (response = 1) | F(3, 21) = 4.23, p = 0.017 | F(3, 21) = 7.26, p = 0.0016 |
| Experiment 4 | all | SOA | F(5, 35) = 128.37, p < 10^−20 | F(5, 35) = 0.78, p = 0.57 | F(5, 35) = 87.10, p < 10^−20 |
| Experiment 4 | all | Condition | F(3, 21) = 69.89, p = 4.28 × 10^−11 | F(3, 21) = 6.65, p = 0.0025 | F(3, 21) = 72.65, p = 2.96 × 10^−11 |
| Experiment 4 | all | SOA × Condition | F(15, 105) = 1.30, p = 0.22 | F(15, 105) = 0.81, p = 0.66 | F(15, 105) = 1.51, p = 0.11 |
Table 2. Values of d′ and post hoc comparisons of interest. Bold type indicates a significant effect; all d′ values are mean ± standard deviation.
| Experiment | SOA | Gray inverted | Grayscale | Color inverted | Original color | Grayscale vs. original color | Grayscale vs. color inverted | Color inverted vs. original color |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Experiment 1 | 100 ms | — | 3.09 ± 0.54 | — | 3.52 ± 0.66 | t(7) = 2.61, p = 0.035 | — | — |
| Experiment 2 | 100 ms | 2.52 ± 0.40 | 3.11 ± 0.46 | 3.36 ± 0.45 | 3.37 ± 0.49 | t(7) = 2.81, p = 0.026 | t(7) = 3.30, p = 0.013 | t(7) = 0.096, p = 0.93 |
| Experiment 3 | 50 ms | 1.29 ± 0.16 | 1.67 ± 0.32 | 1.70 ± 0.41 | 1.78 ± 0.19 | t(7) = 1.07, p = 0.32 | t(7) = 0.23, p = 0.83 | t(7) = 0.62, p = 0.56 |
| Experiment 4 | 30 ms | 0.90 ± 0.24 | 1.54 ± 0.31 | 1.48 ± 0.34 | 1.58 ± 0.31 | t(7) = 0.46, p = 0.66 | t(7) = 0.39, p = 0.71 | t(7) = 0.63, p = 0.55 |
| Experiment 4 | 50 ms | 1.56 ± 0.50 | 2.04 ± 0.33 | 1.94 ± 0.32 | 2.19 ± 0.29 | t(7) = 1.30, p = 0.23 | t(7) = 0.80, p = 0.45 | t(7) = 2.10, p = 0.074 |
| Experiment 4 | 60 ms | 1.88 ± 0.39 | 2.17 ± 0.56 | 2.21 ± 0.38 | 2.39 ± 0.50 | t(7) = 1.29, p = 0.24 | t(7) = 0.22, p = 0.83 | t(7) = 2.08, p = 0.076 |
| Experiment 4 | 90 ms | 2.31 ± 0.49 | 2.93 ± 0.38 | 2.93 ± 0.53 | 3.31 ± 0.33 | t(7) = 3.34, p = 0.012 | t(7) = 0.0019, p = 1.0 | t(7) = 3.81, p = 0.0066 |
| Experiment 4 | 100 ms | 2.42 ± 0.34 | 3.01 ± 0.40 | 2.92 ± 0.44 | 3.45 ± 0.44 | t(7) = 3.16, p = 0.016 | t(7) = 0.67, p = 0.52 | t(7) = 4.54, p = 0.0027 |
| Experiment 4 | 120 ms | 2.58 ± 0.53 | 3.24 ± 0.39 | 3.51 ± 0.53 | 3.53 ± 0.36 | t(7) = 2.41, p = 0.047 | t(7) = 1.91, p = 0.098 | t(7) = 0.16, p = 0.88 |
To quantify these effects statistically, for the experiments with more than two conditions (Experiments 2–4) we first tested whether the factor color condition had an effect at all by means of a repeated-measures ANOVA (in Experiment 4 with SOA as an additional factor). For hits (in either definition), false alarms, and d′, we find main effects of condition in all experiments (Table 1). In Experiment 4, we additionally find a main effect of SOA for hits and d′ (though not for false alarms), but no interaction between condition and SOA (Table 1). This allowed us to perform post hoc tests, for all experiments and each SOA level of Experiment 4, as to which color conditions differ from each other in terms of hits, false alarms, and/or d′. In the remainder of the main text we focus on d′; hit and false-alarm data are analyzed in Appendix B.
When considering d′ as a performance measure that combines hits and false alarms and is thus insensitive to subjective criteria, the difference between color and grayscale sequences increases monotonically up to 100 ms (Figure 3) and becomes significant at 90 ms and above (Table 2). This indicates a benefit induced by color that increases with increasing SOA, at least up to 100 ms.
To address whether the performance benefit derives from color being diagnostic for animal scenes, we included the color-inverted images in Experiments 2–4. For SOAs of 90 and 100 ms (Experiment 4), where color already excels over grayscale, the color-inverted sequences yield significantly worse performance than the original-color sequences (Figure 3; Table 2). Only for the longest SOAs (100 ms in Experiment 2 and 120 ms in Experiment 4) do color-inverted sequences yield (or tend to yield) better performance than grayscale and become indistinguishable from original color.
Performance in the gray-inverted condition is—with the exception of an SOA of 60 ms in Experiment 4, where it is indistinguishable from grayscale and color inverted—significantly worse than in any other condition (all t(7) > 3.02, all p < 0.02). As the target is clearly identifiable in these images given unlimited viewing time, the gray-inverted condition verifies that even at the largest SOAs tested, detection is not yet trivial (i.e., it is not equivalent to prolonged viewing).
In sum, the benefit of color for detection increases with increasing SOA (Figure 3, blue), but only at large SOAs can a similar benefit be observed for color-inverted images (Figure 3, red). This suggests that at short SOAs, the color benefit results from mechanisms that require the correct hue (e.g., the hue being diagnostic for target images), while for longer SOAs other mechanisms, which only require color contrasts to be intact, may come into play. 
Color and grayscale targets both induce an attentional blink
In Experiment 1, we tested two-target streams at a variety of lags (1, 2, 3, 4, 7) between targets. When analyzing color and grayscale sequences separately (Appendix C), we find reduced performance at short lags, that is, an attentional blink (AB). Performance is worst at lag 1; that is, we do not observe lag-1 sparing (Figure 4A). This absence of lag-1 sparing also holds when only trials with T1 and T2 from the same category are considered, ruling out the possibility that it results from dissimilarity between categories.
Figure 4. Attentional blink. (A–C) Detection rate for both targets in the two-target sequences; the dashed line indicates the baseline (squared detection rate of the single-target sequences) (A) at different lags in Experiment 1, (B) at lag 2 (200 ms) in Experiment 2, and (C) at lag 2 (100 ms) in Experiment 3. (D–F) Baseline-corrected detection rate in two-target sequences in (D) Experiment 1, (E) Experiment 2, and (F) Experiment 3.
If the detection of one target in a two-target stream were independent of the detection of the other, the probability of detecting both targets would equal the square of the single-target hit rate. Using this baseline, we find a significant AB at lag 2 in Experiment 2 (Figure 4B) and Experiment 3 (Figure 4C) for all color conditions (Appendix C). Hence, there is an attentional blink (without lag-1 sparing) for lags 1, 2, and 3 for every color condition and for both short (50 ms) and long (100 ms) SOAs.
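The baseline correction used in the next section is then a one-liner; a sketch with illustrative numbers (not the measured rates):

```matlab
% Independence baseline for the attentional blink: if the two detections
% were independent, P(both) would equal the squared single-target hit rate.
pSingle = 0.90;                 % single-target hit rate (illustrative)
baseline = pSingle^2;           % expected two-target rate under independence
pBoth = 0.70;                   % observed rate of reporting both targets
abDepth = pBoth - baseline;     % negative values indicate an AB (-0.11 here)
```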
Any effect of color on the attentional blink is explained by single-target performance alone
When testing the two-target detection rate at each lag, there is an apparent effect of color: Detection performance is better for the color conditions at all lags (all ts > 2.7, all ps < 0.03; Figure 4A). Similarly, there is a main effect of color condition on the two-target hit rate in Experiment 2, F(3, 21) = 59.27, p = 2.0 × 10^−10 (Figure 4B), and in Experiment 3, F(3, 21) = 3.96, p = 0.022 (Figure 4C). This raises the question of whether there is an attentional benefit of color or whether this effect can be explained solely by differences in single-target performance. To answer this question, we subtracted the baseline, defined by the individual's squared single-target hit rate in the respective color condition, from the plain two-target hit rate. For Experiment 1, we find that these baseline-corrected data do not show differences between color and grayscale at any lag (all ts < 0.69, all ps > 0.21; Figure 4D). Similarly, there is no main effect of color condition on two-target detection performance after baseline correction in Experiment 2, F(3, 21) = 1.18, p = 0.34 (Figure 4E), or Experiment 3, F(3, 21) = 1.14, p = 0.36 (Figure 4F). Consequently, while we find an attentional blink for two-target detection in every color condition, we do not observe any effect of color beyond what single-target detection performance predicts.
Color effects in single-target recognition are explained by detection performance
Besides the mere detection of animals in a sequence of distractors, we also tested observers' ability to identify the subordinate category. Across all single-target trials in Experiment 1, the subordinate animal category is correctly identified in 85.8% ± 6.1% of grayscale and 89.3% ± 4.7% of color sequences, a significant benefit of color, t(7) = 3.30, p = 0.01. Similarly, there is a significant effect of color condition on recognition in Experiment 2, F(3, 21) = 56.95, p = 2.9 × 10^−10, and in Experiment 3, F(3, 21) = 26.79, p = 2.26 × 10^−7. In Experiment 4, a 6 × 4 repeated-measures ANOVA reveals a main effect of SOA, F(5, 35) = 265.31, p < 10^−20, and of color condition, F(3, 21) = 140.37, p = 4.72 × 10^−14, but no interaction, F(15, 105) = 1.71, p = 0.059. In line with the absence of an interaction, color condition has an effect on recognition at each SOA (Fs > 18.67, ps < 3.90 × 10^−6), and in turn, SOA has an effect on recognition in all color conditions (Fs > 118.3, ps < 10^−20).
This analysis considered recognition unconditionally; that is, it compared raw recognition rates independent of whether the target was detected at all. However, when considering only single-target trials in which the target is correctly detected (recognition given detection), subordinate recognition is indistinguishable between grayscale and color (Figure 5). In Experiment 1, the target is also correctly identified in 94.8% ± 3.2% of grayscale and 94.4% ± 3.4% of color sequences in which it is correctly detected, t(7) = 0.48, p = 0.65. For Experiments 2 and 3, there are still main effects of color condition on this recognition-given-detection performance—Experiment 2: F(3, 21) = 27.4, p = 1.9 × 10^−7; Experiment 3: F(3, 21) = 16.37, p = 1.02 × 10^−5—but this is solely explained by the difference between the gray-inverted condition and all other conditions (Appendix D; Figure 5). For the recognition-given-detection analysis in Experiment 4, there is a main effect of SOA, F(5, 35) = 26.93, p = 4.46 × 10^−11, and of color, F(3, 21) = 62.54, p = 1.22 × 10^−10, but no interaction, F(15, 105) = 1.0, p = 0.46. This main effect of color also results almost entirely from the difference between gray inverted and all other conditions (Appendix D; Figure 5). In general, once animal detection has been successful, color has no additional effect on subordinate animal categorization. In contrast, the polarity of luminance does have an effect.
Figure 5. Recognition given detection. Number of single-target trials for which the correct category was reported, divided by the number of single-target trials in which at least one target was reported (hits). Colors as in Figure 2; bars denote standard error of the mean across observers.
Color effects in two-target recognition performance are explained by detection performance
For the data of Experiment 1, we tested whether lag and/or color has an effect on recognition performance in trials in which two targets are correctly detected by means of a two-way (2 color conditions × 5 lags) repeated-measures ANOVA. We find a main effect of lag on the probability that both targets were correctly identified, F(4, 28) = 4.62, p = 0.0055, but no effect of color condition, F(1, 28) = 0.0017, p = 0.97, and no interaction, F(4, 28) = 1.04, p = 0.40. This shows that there is an attentional blink for recognition on top of that for detection, but no additional effect of color. Once detection partially fails, subsequent recognition is insensitive to color or the attentional blink, which also holds for Experiments 2 and 3 (Appendix E). 
Discussion
The present study shows that color is beneficial for rapid scene perception (“color benefit”). Targets in rapidly presented sequences of images are slightly easier to detect when sequences are in color as compared to grayscale, which is qualitatively and quantitatively in line with several earlier reports (Delorme et al., 2010; Wichmann et al., 2006). We find that this sensory color benefit increases monotonically with presentation time up to about 100 ms. For short presentation times, the color benefit requires the hue to be intact, suggesting that color being diagnostic for images containing an animal may be the dominant effect driving the color benefit at short SOAs. For longer SOAs, hue-modified images tend to approach original-color performance, suggesting that a general benefit of color as such, possibly related to a segmentation process, comes into play. Color does not aid performance in naming subordinate animal categories, provided detection of the category “animal” was successful. Finally, color has no influence on the characteristics of the attentional blink beyond the effects explained by single-target trials alone. Together, these results suggest a preattentive, rather than attentional, source of the color benefit.
While some previous studies have also found an effect of color on performance in rapid detection (Delorme et al., 2010; Wichmann et al., 2006), others have not (Elder & Velisavljević, 2009; Meng & Potter, 2008). In an RSVP paradigm, Meng and Potter (2008) instructed participants about the target with short descriptions of the scene to expect and found no effect of color on the detection of the scene for a wide range of SOAs (53–426 ms) and for normal and impoverished viewing conditions alike. One possible explanation for the absence of an effect could be that color might be beneficial only for broad categories (like animals), not for more detailed descriptions, especially if they imply a spatial relation (like the “businessmen at table” example in Meng & Potter, 2008). In contrast to the present study and Meng and Potter's, Elder and Velisavljević (2009)—who did not find an effect of color—did not use sequences of images but masked presentations of isolated images. Depending on the exact design of the mask, a colored mask might be relatively more effective than a grayscale one, compared to the mutual masking exerted by temporally adjacent color or grayscale frames in RSVP. Whether the color benefit extends from animals to other categories, whether it depends on the richness of the instruction, whether it depends on whether the instructions imply spatial relations, and whether there is a fundamental difference between RSVP and isolated masked images are interesting issues for further research.
When detecting targets in complex backgrounds, segmentation processes play an important role in separating figure from ground. The interpretation that color facilitates figure–ground segmentation has been proposed by other studies, suggesting this mechanism as an early contribution of color to visual processing (Gegenfurtner & Rieger, 2000; Skiera, Petersen, Skalej, & Fahle, 2000). Wurm, Legge, Isenberg, and Luebker (1993) found that color improved accuracy in the recognition of food targets irrespective of whether it was diagnostic for the target object, which points to a rapid, low-level contribution of color to object recognition. This early contribution of color to visual processing, and particularly to figure–ground segmentation, has also been shown in an fMRI study, in which activity related to figure–ground segmentation of checkerboards defined by color, luminance, or motion was already found in primary visual cortex (Skiera et al., 2000). For long SOAs, performance in color-inverted sequences trends more towards original-color performance than towards grayscale performance (Figure 3). Since segmentation in natural scenes benefits from chromatic boundaries (Hansen & Gegenfurtner, 2009), which are unaffected by our hue inversion, it is conceivable that for long SOAs, color aids detection by fostering segmentation, independent of hue being diagnostic for the target category.
Although we focus on sensory aspects and find an effect of color on detection but not on recognition or attention, our results do not contradict the notion that color also plays a prominent role in later stages of visual processing. It has been proposed that color aids visual processing not primarily during detection but at later stages, when—for example—memory has to be accessed (Yao & Einhäuser, 2008). This prominent role of color in encoding and retrieval in recognition-memory paradigms has been shown in several studies (Gegenfurtner & Rieger, 2000; Spence et al., 2006; Wichmann et al., 2002) and typically exceeds the comparatively subtle effect in rapid visual categorization tasks (Delorme et al., 2000, 2010; Wichmann et al., 2006).
Unlike the original characterization of the AB (Raymond et al., 1992), we do not observe lag-1 sparing in Experiment 1, for either color or grayscale sequences. Visser, Zuvic, Bischof, and Di Lollo (1999) demonstrated that lag-1 sparing occurs only when T1 and T2 are at the same location, a condition that may be violated in complex scenes where the target does not fill the full image. In addition, lag-1 sparing decreases with lower similarity between T1 and T2 (Visser, Davis, & Ohan, 2009). On a basic feature level, animal targets can be rather dissimilar, and subordinate categorical similarity does not seem to be of relevance for lag-1 sparing in our experiment (detection at lag 1 was virtually identical, no matter whether T1 and T2 were of the same or different categories). Finally, there are a number of other conditions under which lag-1 sparing is not observed, for example, when no short-term consolidation takes place (Dell'Acqua, Jolicœur, Pascali, & Pluchino, 2007) or when T1 is masked (Martin & Shapiro, 2008). So while lag-1 sparing is widely considered a hallmark of the AB, there are AB conditions in which no lag-1 sparing is observed; its absence is therefore not evidence against an AB (MacLean & Arnell, 2012).
In our paradigm, responses were unspeeded and we did not measure reaction times. Hence, we cannot fully rule out the possibility that for some targets, especially at short SOAs, color could have sped up the responses without affecting accuracy, as has been reported earlier (Oliva & Schyns, 2000; Rousselet et al., 2005). Since decreased reaction times are associated with increased confidence (Henmon, 1911), such speeding up could, however, also be related to increased subjective confidence for color as compared to grayscale sequences, which has been reported earlier even in the absence of a performance difference (Yao & Einhäuser, 2008). It should be noted, however, that our results of increased hit rates cannot be explained by a shift in criteria towards more liberal responses, since false-alarm rates were indistinguishable between conditions. 
Figure 6. Raw data: hit rates and false-alarm rates. All hit and false-alarm rates for all experiments, conditions, and SOAs. Within each experiment, the same color denotes the same observer. Responses 1 and 2 are counted as false alarms and hits for this representation. If data points were exactly overlapping, the covered data point was moved horizontally out of the axes and connected to its original location with a thin line.
Whether detection and recognition are based on the same underlying process is a matter of debate. Grill-Spector and Kanwisher (2005) found the same performance and reaction times in detection and basic-level categorization tasks and concluded that figure–ground segmentation leading to detection, and basic-level categorization, are closely linked and mediated by one mechanism. This hypothesis has been challenged by subsequent studies that found better performance for detection than for basic-level categorization (Bowers & Jones, 2008; Mack, Gauthier, Sadr, & Palmeri, 2008; Mack & Palmeri, 2010). It has been shown that both processes can be selectively manipulated (Mack et al., 2008; Mack & Palmeri, 2010), implying that there is no intrinsic link between them. Here we find—consistently over all experiments and SOAs—that color has little influence on recognition once the target has been detected. In contrast, the gray-inverted condition shows that luminance information influences recognition for detected targets. Since there are more false alarms for the gray-inverted than for any other condition, this result could still be explained by an increased number of guesses within the population of hits and thus decreased recognition performance. However, a similar effect is observed in all conditions of Experiment 4 when the SOA is decreased, which does not affect false-alarm rates. Decreasing the SOA not only reduces performance (in terms of d′) but also further reduces the fraction of correctly recognized targets among the correctly detected ones. This argues against entirely overlapping mechanisms for detection and recognition. Nonetheless, even for short SOAs and the gray-inverted condition, recognition for detected targets is clearly above chance (>60%, with chance level at 25%). This offers an alternative explanation that, at least for the high performance at larger SOAs, might contribute to the strong coupling between detection and recognition: For difficult targets (those only detected at large SOAs), the report of a detection may depend on some subordinate recognition. This view is supported by the conservative criterion nearly all observers apply to detection under difficult conditions (Figure 6). Distinct mechanisms or not, however, our data clearly show that color affects recognition only through its effect on detection: Once a target has been detected, the probability of it being correctly recognized does not depend on the presence of color.
Acknowledgments
This work was supported by the German Research Foundation (DFG; grants: EI 852/1, EI 852/3, IRTG-1901, and SFB/TRR135). 
Commercial relationships: none. 
Corresponding author: Svenja Marx. 
Email: svenja.marx@physik.uni-marburg.de. 
Address: FB Physik, AG Neurophysik, Philipps-University, Marburg, Germany. 
References
Biederman I. (1972, July 7). Perceiving real-world scenes. Science, 177 (4043), 77–80.
Bowers S. J. Jones K. W. (2008). Detecting objects is easier than categorizing them. Quarterly Journal of Experimental Psychology, 61 (4), 552–557.
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10 (4), 433–436.
Chun M. M. Potter M. C. (1995). A two-stage model for multiple target detection in rapid serial visual presentation. Journal of Experimental Psychology: Human Perception and Performance, 21 (1), 109–127.
Cornelissen F. W. Peters E. M. Palmer J. (2002). The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behavior Research Methods, Instruments, & Computers, 34 (4), 613–617.
Dell'Acqua R. Jolicœur P. Pascali A. Pluchino P. (2007). Short-term consolidation of individual identities leads to lag-1 sparing. Journal of Experimental Psychology: Human Perception and Performance, 33 (3), 593–609.
Delorme A. Richard G. Fabre-Thorpe M. (2000). Ultra-rapid categorisation of natural scenes does not rely on colour cues: A study in monkeys and humans. Vision Research, 40 (16), 2187–2200.
Delorme A. Richard G. Fabre-Thorpe M. (2010). Key visual features for rapid categorization of animals in natural scenes. Frontiers in Psychology, 1, 21, doi:10.3389/fpsyg.2010.00021.
Derrington A. M. Krauskopf J. Lennie P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. Journal of Physiology, 357 (1), 241–265.
Einhäuser W. Koch C. Makeig S. (2007). The duration of the attentional blink in natural scenes depends on stimulus category. Vision Research, 47 (5), 597–607.
Elder J. H. Velisavljević L. (2009). Cue dynamics underlying rapid detection of animals in natural scenes. Journal of Vision, 9 (7): 7, 1–20, http://journalofvision.org/content/9/7/7, doi:10.1167/9.7.7.
Evans K. K. Treisman A. (2005). Perception of objects in natural scenes: Is it really attention free? Journal of Experimental Psychology: Human Perception and Performance, 31 (6), 1476–1492.
Fabre-Thorpe M. Richard G. Thorpe S. J. (1998). Rapid categorization of natural images by rhesus monkeys. Neuroreport, 9 (2), 303–308.
Fei-Fei L. Iyer A. Koch C. Perona P. (2007). What do we perceive in a glance of a real-world scene? Journal of Vision, 7 (1): 10, 1–29, http://journalofvision.org/content/7/1/10, doi:10.1167/7.1.10.
Fei-Fei L. VanRullen R. Koch C. Perona P. (2005). Why does natural scene categorization require little attention? Exploring attentional requirements for natural and synthetic stimuli. Visual Cognition, 12 (6), 893–924.
Frey H.-P. Honey C. König P. (2008). What's color got to do with it? The influence of color on visual attention in different categories. Journal of Vision, 8 (14): 6, 1–17, http://journalofvision.org/content/8/14/6, doi:10.1167/8.14.6.
Gegenfurtner K. R. Rieger J. (2000). Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10 (13), 805–808.
Grill-Spector K. Kanwisher N. (2005). Visual recognition: As soon as you know it is there, you know what it is. Psychological Science, 16 (2), 152–160.
Hansen T. Gegenfurtner K. R. (2009). Independence of color and luminance edges in natural scenes. Visual Neuroscience, 26 (1), 35–49.
Henmon V. A. C. (1911). The relation of the time of a judgment to its accuracy. Psychological Review, 18 (3), 186–201.
Kirchner H. Thorpe S. J. (2006). Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46 (11), 1762–1776.
Li F. F. VanRullen R. Koch C. Perona P. (2002). Rapid natural scene categorization in the near absence of attention. Proceedings of the National Academy of Sciences, USA, 99 (14), 9596–9601.
Mack M. L. Gauthier I. Sadr J. Palmeri T. J. (2008). Object detection and basic-level categorization: Sometimes you know it is there before you know what it is. Psychonomic Bulletin & Review, 15 (1), 28–35.
Mack M. L. Palmeri T. J. (2010). Decoupling object detection and categorization. Journal of Experimental Psychology: Human Perception and Performance, 36 (5), 1067–1079.
MacLean M. H. Arnell K. M. (2012). A conceptual and methodological framework for measuring and modulating the attentional blink. Attention, Perception & Psychophysics, 74 (6), 1080–1097.
Macmillan N. A. (1993). Signal detection theory as data analysis method and psychological decision model. In Keren G. Lewis C. (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 21–57). Hillsdale, NJ: Lawrence Erlbaum Associates.
Martin E. W. Shapiro K. L. (2008). Does failure to mask T1 cause lag-1 sparing in the attentional blink? Perception & Psychophysics, 70 (3), 562–570.
Maunsell J. H. R. Treue S. (2006). Feature-based attention in visual cortex. Trends in Neurosciences, 29 (6), 317–322.
Meng M. Potter M. C. (2008). Detecting and remembering pictures with and without visual noise. Journal of Vision, 8 (9): 7, 1–10, http://www.journalofvision.org/content/8/9/7, doi:10.1167/8.9.7.
Motter B. C. (1994). Neural correlates of attentive selection for color or luminance in extrastriate area V4. Journal of Neuroscience, 14 (4), 2178–2189.
Oliva A. Schyns P. G. (2000). Diagnostic colors mediate scene recognition. Cognitive Psychology, 41 (2), 176–210.
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10 (4), 437–442.
Potter M. C. Levy E. I. (1969). Recognition memory for a rapid sequence of pictures. Journal of Experimental Psychology, 81 (1), 10–15.
Raymond J. E. Shapiro K. L. Arnell K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18 (3), 849–860.
Reddy L. Wilken P. Koch C. (2004). Face-gender discrimination is possible in the near-absence of attention. Journal of Vision, 4 (2): 4, 106–117, http://www.journalofvision.org/content/4/2/4, doi:10.1167/4.2.4.
Rousselet G. A. Fabre-Thorpe M. Thorpe S. J. (2002). Parallel processing in high-level categorization of natural images. Nature Neuroscience, 5 (7), 629–630.
Rousselet G. A. Joubert O. Fabre-Thorpe M. (2005). How long to get to the “gist” of real-world natural scenes? Visual Cognition, 12 (6), 852–877.
Skiera G. Petersen D. Skalej M. Fahle M. (2000). Correlates of figure-ground segregation in fMRI. Vision Research, 40 (15), 2047–2056.
Spence I. Wong P. Rusan M. Rastegar N. (2006). How color enhances visual memory for natural scenes. Psychological Science, 17 (1), 1–6.
Thorpe S. J. Delorme A. VanRullen R. (2001). Spike-based strategies for rapid processing. Neural Networks, 14 (6), 715–725.
Thorpe S. J. Fize D. Marlot C. (1996). Speed of processing in the human visual system. Nature, 381 (6582), 520–522.
Thorpe S. J. Gautrais J. (1997). Rapid visual processing using spike asynchrony. In Mozer M. C. Jordan M. Petsche T. (Eds.), Advances in neural information processing systems 9 (pp. 901–907). Cambridge, MA: MIT Press.
VanRullen R. Thorpe S. J. (2001). Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects. Perception, 30 (6), 655–668.
Visser T. A. W. Davis C. Ohan J. L. (2009). When similarity leads to sparing: Probing mechanisms underlying the attentional blink. Psychological Research, 73, 327–335.
Visser T. A. W. Zuvic S. M. Bischof W. Di Lollo V. (1999). The attentional blink with targets in different spatial locations. Psychonomic Bulletin & Review, 6 (3), 432–436.
Wichmann F. A. Braun D. I. Gegenfurtner K. R. (2006). Phase noise and the classification of natural images. Vision Research, 46 (8), 1520–1529.
Wichmann F. A. Drewes J. Rosas P. Gegenfurtner K. R. (2010). Animal detection in natural scenes: Critical features revisited. Journal of Vision, 10 (4): 6, 1–27, http://www.journalofvision.org/content/10/4/6, doi:10.1167/10.4.6.
Wichmann F. A. Sharpe L. T. Gegenfurtner K. R. (2002). The contributions of color to recognition memory for natural scenes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28 (3), 509–520.
Wurm L. H. Legge G. E. Isenberg L. M. Luebker A. (1993). Color improves object recognition in normal and low vision. Journal of Experimental Psychology: Human Perception and Performance, 19 (4), 899–911.
Yao A. Y. J. Einhäuser W. (2008). Color aids late but not early stages of rapid natural scene recognition. Journal of Vision, 8 (16): 12, 1–13, http://www.journalofvision.org/content/8/16/12, doi:10.1167/8.16.12.
Zhang W. Luck S. J. (2009). Feature-based attention modulates feedforward visual processing. Nature Neuroscience, 12 (1), 24–25.
Appendix A: Individual responses
With few exceptions, observers show consistent performance patterns across all experiments. All are conservative (they make more misses than false alarms), and observers with comparatively liberal criteria tend to remain so across all conditions (Figure 6). Considering all nine combinations of ground truth and response for Experiments 1–3, double false alarms are rare (Figure A1), and, except for a few individuals and conditions, so are false alarms in single-target trials (i.e., trials in which two targets are reported although only one was present).
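For concreteness, the signal-detection measures behind this description can be computed as in the following minimal Python sketch. It is illustrative only and not the analysis code used for the study; the clipping of extreme rates is one common convention, assumed here, and all names are placeholders.

```python
# Minimal sketch (not the authors' code): d-prime and decision
# criterion c from hit and false-alarm rates, as described for
# Figure 2C ("z-scored hit and false-alarm rates"). Rates of exactly
# 0 or 1 are clipped; this correction is an assumption, not taken
# from the original analysis.
import numpy as np
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate, n_trials=100):
    """Return (d_prime, criterion) for one observer and condition."""
    # Clip rates away from 0 and 1 so the z-transform stays finite.
    lo, hi = 0.5 / n_trials, 1 - 0.5 / n_trials
    zh = norm.ppf(np.clip(hit_rate, lo, hi))  # z-scored hit rate
    zf = norm.ppf(np.clip(fa_rate, lo, hi))   # z-scored false-alarm rate
    d_prime = zh - zf
    criterion = -0.5 * (zh + zf)  # c > 0: conservative responding
    return d_prime, criterion

# Example: a conservative observer (high hit rate, low false-alarm rate)
print(sdt_measures(0.91, 0.07))
```

A positive criterion c corresponds to the conservative behavior described above: fewer false alarms at the cost of more misses.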
Figure A1. Raw data, all combinations of ground truth and response. For Experiments 1–3, there were nine combinations of correct response (truth) and actual response. For each individual, experiment, and condition, the raw percentages of responses for the respective truth are color-coded (i.e., columns sum to 100%). The large matrix on the top right defines the signal detection theory (SDT) terms. Note that “hit(*)” contains 1 hit and 1 false alarm (truth 1, response 2) and thus yields the two different definitions of hit used in the article.
Appendix B: Analysis of hits and false alarms
In all experiments, we find main effects of condition for hits (under either definition) and for false alarms; in Experiment 4 we additionally find a main effect of SOA for hits, though not for false alarms (Table 1). There are significantly more hits (under either definition) for color than for grayscale images across all experiments and SOAs, with the exception of the shortest SOA (30 ms) in Experiment 4 (Table 3). Interestingly, no such difference can be identified for false alarms (Table 4). With respect to hit rates, the color-inverted condition differs from grayscale and is indistinguishable from original color at the 100-ms SOA of Experiment 2 when hits are defined as reporting at least one target in one-target trials (Table 3). If we instead restrict hits to exact responses (response = 1 for one-target streams), the picture reverses: The color-inverted condition is now indistinguishable from grayscale but yields significantly fewer hits than the original-color condition. For the 50-ms SOA of Experiment 3, this reversed pattern holds under either definition. This underlines the importance of Experiment 4, where zero and one were the only possible response options. Considering hits alone, the color-inverted condition is indistinguishable from grayscale at all SOAs of Experiment 4 and differs from original color at 30, 50, and 90 ms (Table 3). The gray-inverted condition yields significantly fewer hits than any other condition in all experiments (for response > 0: all t(7) > 2.74, all ps < 0.03; for response = 1: all t(7) > 2.81, all ps < 0.026).
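The two hit-rate definitions contrasted above can be made explicit in a short sketch; the data layout and values are hypothetical and serve only to illustrate the definitions.

```python
# Minimal sketch (toy data, assumed layout): the two hit-rate
# definitions used in Appendix B for one-target streams. `responses`
# holds the number of targets each observer reported (0, 1, or 2) on
# trials whose ground truth was exactly one target.
import numpy as np

responses = np.array([1, 1, 2, 0, 1, 1, 2, 1, 0, 1])  # toy data

hit_any = np.mean(responses > 0)    # "response > 0": at least one target reported
hit_exact = np.mean(responses == 1) # "response = 1": exactly one target reported

# Note: "response = 2" on a one-target trial counts as 1 hit plus
# 1 false alarm under the first definition (the "hit(*)" cell of
# Figure A1).
print(hit_any, hit_exact)
```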
Appendix C: Detailed analysis of the attentional blink
In the separate analysis of color and grayscale sequences, we find a significant main effect of lag on the probability that both targets are detected (gray: F(4, 28) = 18.63, p = 1.4 × 10⁻⁷; color: F(4, 28) = 33.26, p = 2.9 × 10⁻¹⁰), with a monotonic increase in performance up to lag 4 (Figure 4A). Pairwise post hoc tests show for both conditions that lags 1, 2, and 3 differ significantly from lag 7 (all ts > 2.5, all ps < 0.05), whereas lag 4 does not (gray: t(7) = 0.48, p = 0.64; color: t(7) = 1.05, p = 0.33). Using the squared single-target hit rate of the respective color condition as a baseline, lags 1, 2, and 3 differ from this baseline (all ts > 2.5, all ps < 0.05), whereas lags 4 and 7 are indistinguishable from it (all ts < 2.01, all ps > 0.08). Similarly, in Experiments 2 and 3, where we tested only lag 2, there is a significant difference between the two-target hit rate and the baseline in each color condition (Experiment 2: all ts > 2.4, all ps < 0.04, Figure 4B; Experiment 3: all ts > 3.24, all ps < 0.014, Figure 4C).
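The baseline logic can be illustrated as follows. The sketch assumes eight observers, matching the degrees of freedom reported above, but the arrays hold random toy values rather than the measured rates.

```python
# Minimal sketch (toy data, not the measured rates): testing the
# two-target detection rate against the independence baseline of
# Appendix C, i.e., the squared single-target hit rate of the same
# color condition, with a paired t test across the eight observers.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
single_hit = rng.uniform(0.80, 0.95, size=8)  # single-target hit rate per observer
both_hit = rng.uniform(0.50, 0.70, size=8)    # P(both targets reported) at a given lag

baseline = single_hit ** 2                    # expected if T1 and T2 were detected independently
t_stat, p_val = ttest_rel(both_hit, baseline) # paired across observers -> df = 7
print(f"t(7) = {t_stat:.2f}, p = {p_val:.3g}")
```

A two-target hit rate significantly below this baseline indicates an attentional blink beyond what single-target performance alone predicts.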
Appendix D: Single-target recognition given detection in Experiments 2, 3, and 4
In Experiments 2 and 3, the main effect of color condition on recognition-given-detection performance is explained solely by the gray-inverted condition: While the gray-inverted condition differs from all other conditions (Experiment 2: ts > 6.06, ps < 5.1 × 10⁻⁴; Experiment 3: ts > 4.22, ps < 0.0039), there are no pairwise differences between any of the other conditions (Experiment 2: all ts < 1.71, ps > 0.13; Experiment 3: ts < 1.66, ps > 0.14). In Experiment 4, the main effects of SOA and color condition on this measure likewise result almost entirely from the difference between the gray-inverted and all other conditions (all ts > 2.99, all ps < 0.02), with two exceptions: The difference between gray inverted and original color at an SOA of 100 ms is not significant, t(7) = 1.54, p = 0.17, and the difference between grayscale and color inverted at an SOA of 120 ms is significant, t(7) = 2.90, p = 0.023.
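As a minimal illustration of the recognition-given-detection measure (defined with Figure 5), assuming a hypothetical per-trial data layout:

```python
# Minimal sketch (hypothetical per-trial layout): recognition given
# detection, i.e., the share of detected one-target trials on which
# the reported category also matched the true target category.
import numpy as np

detected = np.array([1, 1, 0, 1, 1, 1, 0, 1], dtype=bool)     # >= 1 target reported
category_ok = np.array([1, 0, 0, 1, 1, 1, 0, 0], dtype=bool)  # reported category correct

# Conditioning on detection factors out the detection advantage of
# color, which is why this measure can stay flat across conditions.
recognition_given_detection = category_ok[detected].mean()
print(recognition_given_detection)  # 4 correct of 6 detected -> 0.666...
```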
Appendix E: Recognition in two-target trials with one hit
When one target is detected and one is missed, neither the probability that the reported target category matches the first target (T1) nor the probability that it matches the second (T2) depends on lag or color condition (T1, color condition: F(1, 24) = 0.53, p = 0.49; T1, lag: F(4, 24) = 1.81, p = 0.16; T1, interaction: F(4, 24) = 0.61, p = 0.66; T2: F(1, 24) = 0.29, p = 0.61; F(4, 24) = 1.61, p = 0.20; F(4, 24) = 0.50, p = 0.73). Observer #5 could not be included in this particular analysis, as she had no miss trial in one of the two-target conditions (lag 4, color). Similarly, in Experiments 2 and 3, given that one target is detected in a two-target stream, the probability that it matched T1 or T2 does not depend on color condition (Experiment 2, T1: F(3, 21) = 0.45, p = 0.72; T2: F(3, 21) = 2.65, p = 0.075; Experiment 3, T1: F(3, 21) = 1.56, p = 0.23; T2: F(3, 21) = 0.81, p = 0.50). Hence color has no effect on attention or recognition beyond what is already explained by detection.
Table 3. Hit rates and post hoc comparisons of interest. Asterisks (*) indicate a significant effect; all percentages are mean ± standard deviation.
Experiment, SOA | Gray inverted | Grayscale | Color inverted | Original color | Grayscale vs. original color | Grayscale vs. color inverted | Color inverted vs. original color

Response > 0
Experiment 1, 100 ms | — | 90.6% ± 5.5% | — | 94.6% ± 3.1% | t(7) = 3.01, p = 0.020* | — | —
Experiment 2, 100 ms | 83.1% ± 10.4% | 89.4% ± 7.7% | 91.5% ± 6.7% | 92.5% ± 6.1% | t(7) = 3.84, p = 0.0064* | t(7) = 3.54, p = 0.0095* | t(7) = 2.11, p = 0.072
Experiment 3, 50 ms | 55.9% ± 14.5% | 61.6% ± 12.4% | 60.7% ± 12.9% | 66.2% ± 11.2% | t(7) = 2.93, p = 0.022* | t(7) = 0.71, p = 0.50 | t(7) = 3.55, p = 0.0094*

Response = 1
Experiment 1, 100 ms | — | 86.1% ± 5.8% | — | 90.9% ± 4.7% | t(7) = 4.47, p = 0.0029* | — | —
Experiment 2, 100 ms | 76.5% ± 7.26% | 85.7% ± 5.2% | 87.1% ± 5.8% | 89.7% ± 5.1% | t(7) = 5.16, p = 0.0013* | t(7) = 1.45, p = 0.19 | t(7) = 6.52, p = 3 × 10⁻⁴*
Experiment 3, 50 ms | 51.9% ± 12.6% | 58.1% ± 11.0% | 58.3% ± 12.3% | 63.2% ± 9.8% | t(7) = 3.81, p = 0.0066* | t(7) = 0.13, p = 0.90 | t(7) = 3.93, p = 0.0056*
Experiment 4, 30 ms | 23.8% ± 15.3% | 32.2% ± 17.9% | 28.9% ± 13.0% | 34.8% ± 14.3% | t(7) = 0.94, p = 0.38 | t(7) = 1.17, p = 0.28 | t(7) = 2.38, p = 0.049*
Experiment 4, 50 ms | 46.4% ± 9.9% | 55.6% ± 9.9% | 54.1% ± 14.3% | 62.2% ± 10.2% | t(7) = 4.93, p = 0.0017* | t(7) = 0.86, p = 0.42 | t(7) = 3.97, p = 0.0054*
Experiment 4, 60 ms | 57.8% ± 13.8% | 65.0% ± 11.7% | 66.7% ± 10.1% | 69.4% ± 9.7% | t(7) = 2.70, p = 0.03* | t(7) = 1.88, p = 0.10 | t(7) = 1.54, p = 0.17
Experiment 4, 90 ms | 71.6% ± 9.7% | 82.8% ± 9.0% | 81.3% ± 10.1% | 88.1% ± 6.9% | t(7) = 5.49, p = 0.00091* | t(7) = 1.09, p = 0.31 | t(7) = 4.00, p = 0.0052*
Experiment 4, 100 ms | 77.3% ± 10.0% | 85.2% ± 6.1% | 85.3% ± 9.1% | 88.3% ± 6.8% | t(7) = 2.82, p = 0.026* | t(7) = 0.094, p = 0.93 | t(7) = 2.04, p = 0.081
Experiment 4, 120 ms | 83.0% ± 10.2% | 88.9% ± 6.3% | 91.4% ± 6.5% | 93.1% ± 4.5% | t(7) = 4.78, p = 0.0020* | t(7) = 1.74, p = 0.13 | t(7) = 1.55, p = 0.16
Table 4. False-alarm rates and post hoc comparisons of interest. Asterisks (*) indicate a significant effect; all percentages are mean ± standard deviation.
Experiment, SOA | Gray inverted | Grayscale | Color inverted | Original color | Grayscale vs. original color | Grayscale vs. color inverted | Color inverted vs. original color
Experiment 1, 100 ms | — | 7.4% ± 12.9% | — | 7.2% ± 13.8% | t(7) = 0.27, p = 0.80 | — | —
Experiment 2, 100 ms | 11.6% ± 14.2% | 6.6% ± 8.2% | 6.0% ± 7.9% | 6.3% ± 7.3% | t(7) = 0.42, p = 0.68 | t(7) = 0.61, p = 0.56 | t(7) = 0.36, p = 0.73
Experiment 3, 50 ms | 14.4% ± 9.1% | 9.8% ± 6.1% | 10.4% ± 7.5% | 9.4% ± 4.0% | t(7) = 0.34, p = 0.75 | t(7) = 0.47, p = 0.66 | t(7) = 0.74, p = 0.48
Experiment 4, 30 ms | 6.6% ± 7.3% | 4.0% ± 7.6% | 3.9% ± 7.6% | 3.3% ± 4.3% | t(7) = 0.65, p = 0.54 | t(7) = 0.28, p = 0.78 | t(7) = 0.49, p = 0.64
Experiment 4, 50 ms | 6.6% ± 6.5% | 4.5% ± 6.8% | 4.1% ± 3.6% | 4.1% ± 4.5% | t(7) = 0.50, p = 0.63 | t(7) = 0.36, p = 0.73 | t(7) = 0, p = 1
Experiment 4, 60 ms | 7.7% ± 10.4% | 5.5% ± 6.2% | 5.3% ± 5.0% | 3.9% ± 3.0% | t(7) = 0.80, p = 0.45 | t(7) = 0.14, p = 0.89 | t(7) = 1.05, p = 0.33
Experiment 4, 90 ms | 5.8% ± 4.8% | 3.1% ± 2.2% | 2.7% ± 1.8% | 2.2% ± 1.5% | t(7) = 1.53, p = 0.17 | t(7) = 0.63, p = 0.55 | t(7) = 1.16, p = 0.28
Experiment 4, 100 ms | 6.4% ± 4.6% | 3.0% ± 1.8% | 4.5% ± 3.4% | 1.6% ± 2.0% | t(7) = 2.05, p = 0.080 | t(7) = 1.72, p = 0.13 | t(7) = 4.46, p = 0.0029*
Experiment 4, 120 ms | 7.7% ± 6.3% | 3.0% ± 1.3% | 2.3% ± 1.4% | 3.0% ± 1.9% | t(7) = 0, p = 1 | t(7) = 1.72, p = 0.13 | t(7) = 0.83, p = 0.43
Figure 1. Stimuli and procedure. (a) Example stimuli of all four target categories (feline, canine, ungulate, avian) and two example distractor images. Image modifications: (b) grayscale, (c) color inverted, (d) gray inverted. (e) Procedure. Depicted times correspond to Experiments 1 and 2; in all experiments, targets occurred between serial positions 6 and 15.
Figure 2. Detection in zero-target and single-target trials. (A) Hit rate for Experiments 1–3 (left, sorted by SOA) and for the different SOAs of Experiment 4. Colors code conditions (blue: original color; red: color inverted; black: grayscale; gray: gray inverted). The left panel defines hits as any response to single-target trials (response 1 or 2), the middle panel as an exact response (response 1). In Experiment 4, there was no two-target option. (B) False alarms; notation as in (A). Left panel: any false alarm (response 1 or 2); middle panel: single false alarm (response 1). (C) d′ computed from the z-scored hit and false-alarm rates of (A) and (B); the “>0” definition of hits and false alarms is used for this computation. Data points denote the mean, error bars the standard error of the mean across observers.
Figure 3. Difference in d′ relative to grayscale for each of the other conditions. Mean and standard error of the mean across observers.
Figure 4. Attentional blink. (A–C) Detection rate for both targets in two-target sequences; the dashed line indicates the baseline (squared detection rate of the single-target sequences): (A) at different lags in Experiment 1, (B) at lag 2 (200 ms) in Experiment 2, and (C) at lag 2 (100 ms) in Experiment 3. (D–F) Baseline-corrected detection rate in two-target sequences in (D) Experiment 1, (E) Experiment 2, and (F) Experiment 3.
Figure 5. Recognition given detection. Number of single-target trials in which the correct category was reported, divided by the number of single-target trials in which at least one target was reported (hits). Colors as in Figure 2; error bars denote the standard error of the mean across observers.
Figure 6. Raw data: hit and false-alarm rates for all experiments, conditions, and SOAs. Within each experiment, the same color denotes the same observer. Responses 1 and 2 are both counted as hits (or false alarms) in this representation. Exactly overlapping data points were moved horizontally out of the axes and connected to their original location with a thin line.
Table 1. ANOVAs for the effects of color condition (Experiments 2–4) and SOA (Experiment 4) on hits, false alarms, and d′. In Experiments 2 and 3, two definitions of hits are tested: response to a one-target trial of at least 1 (“>0”) or exactly 1 (“=1”). Asterisks (*) indicate a significant effect.
Experiment, SOA | Effect of | Hits | False alarms | d′
Experiment 2, 100 ms | Condition | F(3, 21) = 15.7, p = 1.38 × 10⁻⁵* (response > 0); F(3, 21) = 42.05, p = 4.7 × 10⁻⁹* (response = 1) | F(3, 21) = 4.57, p = 0.013* | F(3, 21) = 35.77, p = 1.95 × 10⁻⁸*
Experiment 3, 50 ms | Condition | F(3, 21) = 11.6, p = 1.07 × 10⁻⁴* (response > 0); F(3, 21) = 17.68, p = 5.8 × 10⁻⁶* (response = 1) | F(3, 21) = 4.23, p = 0.017* | F(3, 21) = 7.26, p = 0.0016*
Experiment 4, all SOAs | SOA | F(5, 35) = 128.37, p < 10⁻²⁰* | F(5, 35) = 0.78, p = 0.57 | F(5, 35) = 87.10, p < 10⁻²⁰*
Experiment 4, all SOAs | Condition | F(3, 21) = 69.89, p = 4.28 × 10⁻¹¹* | F(3, 21) = 6.65, p = 0.0025* | F(3, 21) = 72.65, p = 2.96 × 10⁻¹¹*
Experiment 4, all SOAs | SOA × Condition | F(15, 105) = 1.30, p = 0.22 | F(15, 105) = 0.81, p = 0.66 | F(15, 105) = 1.51, p = 0.11
Table 2. Values of d′ and post hoc comparisons of interest. Asterisks (*) indicate a significant effect; all d′ values are mean ± standard deviation.
Experiment, SOA | Gray inverted | Grayscale | Color inverted | Original color | Grayscale vs. original color | Grayscale vs. color inverted | Color inverted vs. original color
Experiment 1, 100 ms | — | 3.09 ± 0.54 | — | 3.52 ± 0.66 | t(7) = 2.61, p = 0.035* | — | —
Experiment 2, 100 ms | 2.52 ± 0.40 | 3.11 ± 0.46 | 3.36 ± 0.45 | 3.37 ± 0.49 | t(7) = 2.81, p = 0.026* | t(7) = 3.30, p = 0.013* | t(7) = 0.096, p = 0.93
Experiment 3, 50 ms | 1.29 ± 0.16 | 1.67 ± 0.32 | 1.70 ± 0.41 | 1.78 ± 0.19 | t(7) = 1.07, p = 0.32 | t(7) = 0.23, p = 0.83 | t(7) = 0.62, p = 0.56
Experiment 4, 30 ms | 0.90 ± 0.24 | 1.54 ± 0.31 | 1.48 ± 0.34 | 1.58 ± 0.31 | t(7) = 0.46, p = 0.66 | t(7) = 0.39, p = 0.71 | t(7) = 0.63, p = 0.55
Experiment 4, 50 ms | 1.56 ± 0.50 | 2.04 ± 0.33 | 1.94 ± 0.32 | 2.19 ± 0.29 | t(7) = 1.30, p = 0.23 | t(7) = 0.80, p = 0.45 | t(7) = 2.10, p = 0.074
Experiment 4, 60 ms | 1.88 ± 0.39 | 2.17 ± 0.56 | 2.21 ± 0.38 | 2.39 ± 0.50 | t(7) = 1.29, p = 0.24 | t(7) = 0.22, p = 0.83 | t(7) = 2.08, p = 0.076
Experiment 4, 90 ms | 2.31 ± 0.49 | 2.93 ± 0.38 | 2.93 ± 0.53 | 3.31 ± 0.33 | t(7) = 3.34, p = 0.012* | t(7) = 0.0019, p = 1.0 | t(7) = 3.81, p = 0.0066*
Experiment 4, 100 ms | 2.42 ± 0.34 | 3.01 ± 0.40 | 2.92 ± 0.44 | 3.45 ± 0.44 | t(7) = 3.16, p = 0.016* | t(7) = 0.67, p = 0.52 | t(7) = 4.54, p = 0.0027*
Experiment 4, 120 ms | 2.58 ± 0.53 | 3.24 ± 0.39 | 3.51 ± 0.53 | 3.53 ± 0.36 | t(7) = 2.41, p = 0.047* | t(7) = 1.91, p = 0.098 | t(7) = 0.16, p = 0.88