Cue relevance effects in conjunctive visual search: Cueing for location, color, and orientation
Author Affiliations
  • Xiaohua Zhuang
    Sections of Surgical Research & Ophthalmology and Visual Science, Department of Surgery, University of Chicago, Chicago, IL, USA
    [email protected]
  • Thomas V. Papathomas
    Department of Psychology, Rutgers University, New Brunswick, NJ, USA
    Department of Biomedical Engineering, Rutgers University, New Brunswick, NJ, USA
    Center for Cognitive Science, Rutgers University, New Brunswick, NJ, USA
    http://ruccs.rutgers.edu/~papathom/
    [email protected]
Journal of Vision, June 2011, Vol. 11(7):6. doi: https://doi.org/10.1167/11.7.6
Abstract

Performance in visual search tasks where the target differs from distractors by a conjunction of features can improve when a precue signals observers to limit their search by attending to a subset of elements. The current experiments were designed to study the temporal characteristics of precueing the location or feature (color or orientation) of targets in color–orientation conjunctive searches. Color (sensory or symbolic), location, or orientation precues preceded the search stimulus with cue-to-stimulus onset asynchronies (SOAs) in the range of 50–750 ms. The stimuli consisted of elements formed by combining horizontal/vertical and red/green features. Observers responded to the presence/absence of an odd element in an "odd-man-out" paradigm. Reaction time and accuracy were used as measures of performance. Color and location precues improved search performance, and the magnitude of the improvement did not vary as the SOA changed: the color and location cues exerted their guiding effects on visual search even at 50 ms after cue onset. However, the orientation precue neither facilitated nor inhibited search processing. These results indicate that, in conjunctive search, it takes at most 50 ms after cue onset for the visual system to exert guidance effects from color and location precues. We speculate that color may be a stronger feature than orientation for segmenting the search elements, thus facilitating visual search.

Introduction
One of the common tools to investigate how attention influences visual information processing has been the visual search task. In particular, the visual search paradigm has been one of the major methods used to study how the visual system extracts and combines features. In the real world, as well as in the laboratory, a visual search task requires observers to locate a target among many distractors. The target differs from the distractors by one or more features (color, shape, etc.). For instance, there are often occasions in everyday life where one needs to look for a friend wearing a red shirt in a crowd. In such a case, the color feature red becomes a defining characteristic to base the search on. Studies on attention have demonstrated that, when selective attention is directed to a certain feature, the processing of that feature is enhanced throughout the visual field (Liu, Stevens, & Carrasco, 2007; Martinez-Trujillo & Treue, 2004; McAdams & Maunsell, 2000; Melcher, Papathomas, & Vidnyanszky, 2005; Saenz, Buracas, & Boynton, 2002, 2003; Treue & Martinez-Trujillo, 1999). Thus, the feature-based selective attention mechanism is very important and useful for visual search tasks. 
In general, single feature search and conjunction search are two of the most common visual search tasks in the literature (Andersen, Hillyard, & Müller, 2008; Anderson, Heinke, & Humphreys, 2010; Cave & Wolfe, 1990; Dosher, Han, & Lu, 2010; Olds & Fockler, 2004; Olds, Graham, & Jones, 2009; Proulx, 2007; Thornton & Gilden, 2007; Treisman & Gelade, 1980; Treisman, Vieira, & Hayes, 1992; Weidner, Krummenacher, Reimann, Müller, & Fink, 2009; Wolfe, 1994, 1998; Wolfe, Cave, & Franzel, 1989). A single feature search task, commonly referred to as a "feature, or singleton, search task," in which the target differs from distractors on the basis of a single feature dimension (e.g., searching for a horizontally oriented target among many vertically oriented distractors), is thought to involve parallel processing. The reaction time (RT) and accuracy of searching for the target remain essentially unchanged as the number of distractors increases in a feature search task. By contrast, a conjunction search task, in which the target differs from distractors on the basis of a conjunction of two or more feature dimensions (e.g., searching for a red–horizontal target among red–vertical and green–horizontal distractors), is thought to involve serial processing. In this case, RT increases and accuracy decreases as the number of distractors increases. 
Regardless of whether visual search is parallel or serial, it has been well documented that visual search can be aided by cues that engage spatial or feature-based selective attention. Spatial attention can increase the saliency of objects at the attended location as well as guide eye movements, thereby facilitating search performance (Bichot, Rossi, & Desimone, 2005; Egner et al., 2008; Khurana & Kowler, 1987; Lu & Sperling, 1996; Luck, Chelazzi, Hillyard, & Desimone, 1997; Olds & Fockler, 2004; Posner, Snyder, & Davidson, 1980; Shih & Sperling, 1996). According to theories of feature-based attention, the processing of an attended feature is enhanced throughout the visual field, which makes searching for the object that possesses the attended feature easier and faster. Psychophysical studies have reported that visual search can be limited to a subset of the entire display on the basis of a particular feature (e.g., color, shape; Andersen et al., 2008; Anderson et al., 2010; Bacon & Egeth, 1997; Egeth, Virzi, & Garbart, 1984; Kaptein, Theeuwes, & Van der Heijden, 1995; Olds & Fockler, 2004; Olds et al., 2009; Proulx, 2007; Zohary & Hochstein, 1989). Performance in visual search can also benefit from feature-based attention implicitly. Studies on priming effects have indicated that visual search performance is facilitated if the to-be-searched targets share the same feature as the targets in previous trials (Geyer, Shi, & Müller, 2010; Geyer, Zehetleitner, & Müller, 2010; Kristjánsson, Wang, & Nakayama, 2002; Maljkovic & Nakayama, 1994, 1996; Wolfe, Butcher, Lee, & Hyle, 2003). 
In the psychophysical literature, the feature-based attentional effect on visual search was traditionally examined under block design paradigms (Bacon & Egeth, 1997; Egeth et al., 1984; Kaptein et al., 1995; Olds & Fockler, 2004; Sobel, Pickard, & Acklin, 2009), namely, a target was well defined and fixed for an entire block, and observers were asked to attend to that particular target throughout the block. We know that visual search is usually faster when the target is the same throughout the block than when it is different from trial to trial (Geyer, Müller, & Krummenacher, 2006; Kristjánsson, 2006; Maljkovic & Nakayama, 1994). Using block designs, researchers have found that some features may be more efficient than other features in guiding visual search processes (Olds & Fockler, 2004; Sobel et al., 2009). Olds and Fockler (2004) compared how color and orientation previews of the search stimuli affect observers' performance in searching for a fixed target throughout the experiment. Results were mixed in their study, but, overall, color preview proved more effective than orientation preview. In one experiment, they found that color preview had no effect, whereas orientation preview decreased search efficiency; in another experiment, color preview increased search efficiency, whereas orientation preview had no effect. Using the same paradigm, Sobel et al. (2009) also demonstrated that color preview is more helpful than orientation preview in visual conjunctive search. 
Both studies used blocked designs in which the target was fixed for an entire block. In this kind of design, because observers search for the exact same target throughout the whole block, it allows sufficient time for a top-down process to fully exhibit its modulation effects on visual search. Therefore, it allows one to study the magnitude of the top-down guidance effects on visual search. However, it does not allow one to investigate the time course of such effects because it confounds the impact of long-term top-down guidance and instantaneous cueing. In this study, we used a trial-by-trial design to deconfound these factors and isolate the effect of instantaneous cueing. Several recent studies have examined the effect of feature-based attention on visual conjunction search on a trial-by-trial basis. Wolfe, Horowitz, Kenner, Hyle, and Vasan (2004) employed a cueing paradigm and asked observers to search for the target specified by an exact pictorial cue or an exact symbolic (word) cue presented before the search stimuli at the beginning of each trial. They found that a picture cue was more effective than a word cue in directing visual conjunctive search. In addition, the trial-by-trial conjunction search could be as efficient as a blocked conjunction search with the aid of an exact pictorial cue. An exact cue could help observers to configure the new target in advance, before the search stimuli were presented. An interesting question is how efficiently the visual system can utilize partial information about the target to facilitate conjunctive search on a trial-by-trial basis. In other words, what happens if, instead of all the information about the target being provided at the beginning of each trial, only one of the target's features is cued before the search stimuli? Anderson et al. (2010) used a similar cueing paradigm and obtained evidence that search becomes more efficient with color cueing as compared to orientation cueing. Nevertheless, they did not investigate how much earlier the partial cueing information should be presented in order for it to assist in a visual search task. In the current study, we aimed to investigate this question. Observers searched for the target in a trial-by-trial odd-man-out paradigm. At the beginning of each trial, we provided a cue that shared one attribute with the odd-man target: color, orientation, or position (visual hemifield); this potentially narrows the search to half of the search elements. We examined the temporal course of cueing the attributes of color, orientation, and position in the conjunction search task. The main finding is that positional and color sensory cues exert facilitation effects as early as 50 ms after cue onset, and the magnitude of the facilitation effects does not fade out within the time window tested in the current study (50–750 ms after cue onset), whereas orientation cues do not affect search effectiveness within the tested time window. It is plausible, although only an unconfirmed conjecture, that precueing initiates a segmentation process, namely, that the cue has the effect of segmenting the search elements into two groups (spatial separation for positional cues; chromatic segmentation for color cues; figural segmentation for orientation cues), which can potentially speed up the resulting (easier) search task. Under this assumption, our results may reflect how well the cued attribute segments the search elements, which can potentially speed up search performance. 
Methods
Observers
Eight observers, including one of the authors (XZ), participated in each cue type condition of the experiment. All observers had normal or corrected-to-normal visual acuity. Except for the author, all other observers were naive as to the purpose of the experiment. 
Stimuli
The search stimuli consisted of eleven elements, formed by combining horizontal/vertical (H/V) orientation with red/green (R/G) color. The centers of the elements were arranged on an imaginary, near-circular contour at an average eccentricity of 6° around a central fixation point, as in Figure 1. Incidentally, the elements in Figure 1 appear to be arranged on a near-square, not a near-circular, contour; however, this percept must be due to the alignment of some of their (random) orientations. The elements were roughly equidistant from each other and presented against a black background. The viewing distance was 60 cm. The size of each element was 0.3 × 1.3 degrees. Targets differed from the other elements (distractors) in terms of the combination of color and orientation. There were four possible combinations of target/distractors, randomly presented across target-present trials in an odd-man-out paradigm: a GV or RH target among RV and GH distractors, or an RV or GH target among RH and GV distractors. The observers' task was to respond to the presence/absence of the target. The number of target-present trials was equal to the number of target-absent trials, and the two trial types were randomly interleaved. 
Figure 1
 
The four possible combinations of target/distractors in target-present trials in the experiment. The distractors are RV and GH in the left pair of panels and RH and GV in the right pair. The target, from the left to right panel, is GV, RH, RV, and GH.
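To make the display construction concrete, the following Python sketch generates an element layout consistent with the description above. It is our illustration, not the authors' stimulus code; the function name make_search_display and the jitter parameter are our own assumptions.

import math
import random

# Hypothetical sketch: 11 elements on a near-circular contour at ~6 deg average
# eccentricity, using one of the four target/distractor combinations described above.
COMBINATIONS = [
    # (target, distractor_a, distractor_b); R/G = red/green, H/V = horizontal/vertical
    ("GV", "RV", "GH"),
    ("RH", "RV", "GH"),
    ("RV", "RH", "GV"),
    ("GH", "RH", "GV"),
]

def make_search_display(n_elements=11, eccentricity_deg=6.0, jitter_deg=0.5):
    """Return (target_present, target_label, list of (x_deg, y_deg, color, orientation))."""
    target, dist_a, dist_b = random.choice(COMBINATIONS)
    target_present = random.random() < 0.5            # equal numbers of present/absent trials
    labels = [dist_a, dist_b] * (n_elements // 2) + [dist_a]
    if target_present:
        labels[random.randrange(n_elements)] = target  # replace one distractor with the odd man
    random.shuffle(labels)

    display = []
    for i, label in enumerate(labels):
        angle = 2 * math.pi * i / n_elements           # roughly equidistant placement
        r = eccentricity_deg + random.uniform(-jitter_deg, jitter_deg)
        x, y = r * math.cos(angle), r * math.sin(angle)
        color = "red" if label[0] == "R" else "green"
        orientation = "horizontal" if label[1] == "H" else "vertical"
        display.append((x, y, color, orientation))
    return target_present, target, display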
Procedure
The flicker photometry method was used to determine the equiluminance values of red and green colors for each observer. Observers participated in a 1-h practice session, where they practiced on the conjunctive visual search task without cues. Initially, the experimenter explained the stimuli and the task, using stimuli of unlimited duration. The search stimuli were identical to those in the actual experiment except that, during the first half hour, the search stimuli were presented until response; in the second half hour, the search stimuli were presented for 300 ms as in the actual experiment. 
After the practice session, observers participated in the actual experiment, which consisted of 360 trials for each cue–stimulus interstimulus interval (ISI) value and for each cue type. Each trial started with a fixation phase, whose duration had a uniform random distribution between 500 and 1000 ms, and was followed by a brief cue lasting for 50 ms (see Figure 2). A cue–stimulus ISI was interposed after the offset of the cue, and it varied from 0 to 700 ms. The search stimulus then appeared for 300 ms. The designated cue could refer to location, sensory color, word ("symbolic") color, or orientation (see Figures 2 and 3). The reason we did not include symbolic orientation cues was that, in pilot studies, we did not observe an effect of sensory orientation cueing; since sensory cueing is known to be stronger than symbolic cueing, there was no reason to include symbolic orientation. The same type of precue (location, sensory color, symbolic color, or orientation) was used in a given block. Cues were either neutral or informative with 80% validity. A neutral cue did not provide any hint about the target, whereas an informative cue provided a valid hint about a certain characteristic of the target in 80% of the trials and a misleading hint in the other 20% of the informative cue trials. In blocks where the cues referred to location, the neutral cue comprised two white dots displayed briefly on both sides of the fixation mark; an informative cue was a white dot presented on the left (or right) side of the fixation mark, indicating that the target would appear on the left/right side 80% of the time (valid cue) and on the right/left side the remaining 20% of the time (invalid cue). In the sensory color cue condition, the neutral cue was a white color patch and the informative cues were either a red or a green color patch. In the symbolic color cue condition, the neutral cue was the text "50/50," whereas the informative cues were the words "red" or "green"; white alphanumeric symbols were used for all the text in the symbolic color cue condition. In the orientation cue condition, the neutral cue was a white square patch, and the informative cue was either a vertically elongated white bar or a horizontally elongated white bar. Observers were told that the neutral cues did not provide any useful information about the target characteristics; they were also told that the informative cues provided valid information about one of the target properties (location, color, or orientation) in 80% of the trials that contained informative cues. They were asked to respond to the presence/absence of the target as quickly and as accurately as possible. Auditory feedback (one of two different tones) was provided at the end of each trial to signify whether the response was correct or incorrect. Reaction time and accuracy were used as measures of performance. 
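The trial sequence described above can be summarized as a timeline. The sketch below is only an illustration of the timings given in the text (it is not the authors' experiment code); the ISI levels are those reported in the Results, and the SOA equals the ISI plus the 50-ms cue duration.

import random

ISI_VALUES_MS = [0, 100, 400, 700]   # cue-stimulus ISI levels; SOA = ISI + 50 ms

def trial_timeline(isi_ms):
    """Return the ordered events of one trial as (event, duration_ms) pairs."""
    return [
        ("fixation", random.uniform(500, 1000)),  # uniform random 500-1000 ms
        ("cue", 50),                              # location, color, or orientation precue
        ("blank_isi", isi_ms),                    # 0, 100, 400, or 700 ms
        ("search_display", 300),                  # conjunction search stimulus
        ("response_and_feedback", None),          # present/absent response, then feedback tone
    ]

print(trial_timeline(random.choice(ISI_VALUES_MS)))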
Figure 2
 
Schematic diagram for the sequence of events in a trial with a GV target. This is an example of the color cue condition. Either the green, or red, or white cue was present in any given trial. In this example, the green color cue would be a valid cue; the white color cue would be a neutral cue; and the red color cue would be an invalid cue.
Figure 3
 
The three other cue type conditions, in addition to the sensory color condition of Figure 2: (top row) color symbolic, (middle row) location, and (bottom row) orientation cues. The informative cues are in the left and right columns, whereas the neutral cues are in the middle column.
The 360 trials for each cue stimulus ISI value and for each cue type were randomly distributed as follows: The target was present in 180 trials, of which 120 were informative cue trials (80%, i.e., 96 were valid and 20%, i.e., 24 were invalid cues) and 60 were neutral cue trials. The target was absent in the other 180 trials, of which 120 were informative cue trials and 60 were neutral cue trials. 
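A short arithmetic check of this breakdown (numbers taken from the text; the variable names are ours):

# Trial counts for one ISI x cue-type block of 360 trials
trials_per_block = 360
present, absent = 180, 180                      # equal present/absent trials
informative, neutral = 120, 60                  # within each presence condition
valid = informative * 80 // 100                 # 96 valid cues (80%)
invalid = informative - valid                   # 24 invalid cues (20%)
assert present + absent == trials_per_block
assert informative + neutral == present and (valid, invalid) == (96, 24)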
Results
The reaction time and accuracy data for each type of precue were subjected to separate 2-way repeated measures ANOVAs. Although statistical tests were conducted on all effects, we report only the statistically significant effects. To correct for violations of sphericity in some of the data, Greenhouse–Geisser-corrected statistics are reported. Each ANOVA was conducted with the Type I error rate set at α = 0.05. Post-hoc pairwise comparisons were performed only for the significant effects indicated by the ANOVAs. We believe that it would be too conservative to use a post-hoc test that controls the family-wise Type I error rate; therefore, LSD tests were used for the post-hoc comparisons. About 0.2% of the trials were removed from the analysis because the corresponding reaction time was longer than 3 s. Figures 4 and 5 depict RT and accuracy, respectively, as a function of ISI, using cue relevance as a parameter. Data for target-present and target-absent trials are shown in separate columns; data for the four different cue conditions are shown in separate rows. 
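For readers who want to reproduce this kind of analysis, a minimal sketch of one per-cue-type ANOVA is given below. It is not the authors' analysis script: the file name and column names are assumptions, and statsmodels' AnovaRM reports uncorrected repeated measures F-tests, so the Greenhouse–Geisser correction used in the paper would have to be applied separately.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed layout: one row per trial with columns subject, isi_ms, cue_relevance, rt_ms
df = pd.read_csv("rt_color_sensory.csv")          # hypothetical file
df = df[df["rt_ms"] <= 3000]                      # drop trials slower than 3 s, as in the text

# One mean RT per observer x ISI x cue-relevance cell
cell_means = (df.groupby(["subject", "isi_ms", "cue_relevance"], as_index=False)
                ["rt_ms"].mean())

anova = AnovaRM(cell_means, depvar="rt_ms", subject="subject",
                within=["isi_ms", "cue_relevance"]).fit()
print(anova)   # main effects of ISI and cue relevance, plus their interaction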
Figure 4
 
Average reaction time for the four types of cues. In the target-present trials (left column), the three different curves represent the following cue relevance conditions: (1) invalid (red symbols, solid lines), (2) neutral (blue symbols, dashed lines), and (3) valid (green symbols, solid lines). In the target-absent trials (right column), the two curves represent the following conditions: (1) neutral (blue symbols, dashed lines) and (2) informative cue (green symbols, solid lines). Error bars represent between-subjects standard error of the mean (SEM).
Figure 5
 
Average accuracy for the four types of cues. The notation is identical to that of Figure 4.
Reaction time
Color sensory cue
For trials in which a target was present, there were three types of "cue relevance" conditions: invalid, neutral, and valid cues. A 2-way repeated measures ANOVA on ISI and cue relevance for trials in which a target was present revealed that only the main effect of cue relevance was significant, F(1.03, 7.18) = 7.81, p = 0.03, partial η 2 = 0.53; the main effect of ISI and the interaction effect were not significant. Pairwise comparisons showed that search was faster when the cue was valid than when it was neutral (p = 0.01, Cohen's d = 0.98) or invalid (p = 0.03, Cohen's d = 1.06), but the difference in reaction time between the invalid and neutral cues was only marginally significant (p = 0.08, Cohen's d = 0.56). The significant cue relevance effect demonstrates that visual search can be facilitated by presenting a color sensory cue prior to the search stimuli. This effect is consistent with the prediction of feature-based attention and agrees with the findings from earlier studies that used block designs (Bacon & Egeth, 1997; Egeth et al., 1984; Olds & Fockler, 2004; Sobel et al., 2009). On the other hand, the absence of a main effect of ISI and of its interactions was unexpected. As feature-based attention has been found to manifest its effect relatively slowly (Hayden & Gallant, 2005; Liu et al., 2007), one would expect to observe a significant main effect of ISI or its interactions. 
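The effect sizes above are reported as Cohen's d for within-subject comparisons. The paper does not state which repeated-measures convention was used, so the sketch below shows only one common variant (d_z, the mean of the per-observer differences divided by their standard deviation) and is purely illustrative.

import numpy as np

def cohens_dz(condition_a, condition_b):
    """d_z for paired data: mean difference / SD of the per-observer differences."""
    diff = np.asarray(condition_a, dtype=float) - np.asarray(condition_b, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# Usage: pass the eight per-observer mean RTs (or accuracies) for two cue conditions,
# e.g., cohens_dz(neutral_rts, valid_rts).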
When a target was absent, there were only two types of cue relevance conditions: color cue (red or green cue) vs. neutral cue (gray cue). It is surprising that the main effect of cue relevance was significant, F(1, 7) = 6.38, p = 0.04, partial η 2 = 0.48. Search was faster when the cue was a color cue than when it was a neutral cue. When a target was absent, observers would need to search through all the elements before they could make a response, so an informative color cue did not provide any more helpful a hint than a neutral cue. Therefore, one would expect performance in these two conditions to be the same. We will discuss a possible explanation for this effect later. 
Color symbolic cue
One might expect a similar cue relevance main effect for symbolic cue as for sensory cue; however, a 2-way repeated measures ANOVA on ISI and cue relevance (three types: invalid, neutral, and valid) revealed that the main effect of cue relevance and the interaction effect were not significant (nevertheless, the cue relevance main effect on accuracy was significant and it will be reported in the next section). The main effect of ISI was significant, F(2.27, 15.88) = 6.08, p = 0.01, partial η 2 = 0.47. Pairwise comparisons showed that search was slower when the ISI was 0 ms than when it was 100 ms (p = 0.003, Cohen's d = 0.51), 400 ms (p = 0.009, Cohen's d = 0.58), or 700 ms (p = 0.013, Cohen's d = 0.77). Reaction times for ISI of 100 ms, 400 ms, and 700 ms were not significantly different from each other. 
Similar patterns were observed for reaction time when a target was absent, with only two cueing types: color symbolic cue (words “red” or “green”) vs. neutral cue (text “50/50”). Only the main effect of ISI was significant, F(1.46, 10.22) = 20.36, p = 0.001, partial η 2 = 0.74. Search was slower when ISI was 0 ms than when it was 100 ms (p = 0.008, Cohen's d = 1.06), 400 ms (p = 0.003, Cohen's d = 1.25), or 700 ms (p < 0.0005, Cohen's d = 2.48), and in turn, search was slower when ISI was 100 ms than when it was 400 ms (p = 0.010, Cohen's d = 0.41) or 700 ms (p = 0.004, Cohen's d = 1.41). Reaction times were not significantly different for ISI of 400 ms and 700 ms. 
Location cue
When a target was present, a 2-way repeated measures ANOVA on ISI and cue relevance revealed that only the main effect of cue relevance was significant, F(1.03, 7.23) = 6.44, p = 0.037, partial η 2 = 0.48. Pairwise comparisons showed that reaction time was shorter for a valid cue than for a neutral cue (p = 0.047, Cohen's d = 0.74), which in turn was shorter than for an invalid cue (p = 0.038, Cohen's d = 0.71). The significant main effect of cue relevance is not surprising, as the location cueing effect has been reported in various tasks. However, it is interesting that, similar to the color sensory cue, the main effect of ISI and its interaction were not significant. As spatial attention and feature-based attention have been demonstrated to have different temporal characteristics (Hayden & Gallant, 2005; Liu et al., 2007), we would have expected to observe different temporal effects for the color sensory cue and the location cue. When a target was absent, there were only two cue relevance levels: left or right location cue vs. neutral cue. There was an unexpected significant main effect of cue relevance, F(1, 7) = 6.74, p = 0.04, partial η 2 = 0.48. Reaction times were shorter when the cue was a neutral cue than when it was a left/right cue. We will discuss a possible explanation for this counterintuitive effect later. 
Orientation cue
When a target was present, a 2-way repeated measures ANOVA on ISI and cue relevance revealed that only the main effect of ISI was significant, F(2.29, 16.04) = 8.19, p = 0.003, partial η 2 = 0.54. Pairwise comparisons showed that RT for the 700-ms ISI was significantly shorter than those for the other ISIs (0 ms: p = 0.007, Cohen's d = 0.96; 100 ms: p = 0.004, Cohen's d = 1.49; and 400 ms: p = 0.022, Cohen's d = 0.75). RTs for ISIs of 0 ms, 100 ms, and 400 ms were not significantly different from each other. None of the effects were significant when a target was absent. The nonsignificant cue relevance effect is consistent with the findings from earlier studies (Anderson et al., 2010; Olds & Fockler, 2004; Sobel et al., 2009). 
Accuracy
The accuracy results for the different cue types were also subjected to separate 2-way repeated measures ANOVAs. The patterns were similar to those for reaction time, and there was no apparent speed–accuracy trade-off. None of the effects on either reaction time or accuracy were significant for the orientation cue; therefore, statistics are reported only for the other three types of cues in this section. 
Color sensory cue
When a target was present, a 2-way ANOVA on ISI and cue relevance as factors revealed that only the main effect of cue relevance was significant, F(1.03, 7.22) = 17.57, p = 0.004, partial η 2 = 0.72. Pairwise comparisons showed that observers responded more accurately when the cue was valid than when it was neutral (p = 0.004, Cohen's d = 1.30), and in turn, they responded more accurately with a neutral than with an invalid cue (p = 0.005, Cohen's d = 1.24). None of the effects on accuracy were significant when a target was absent. 
Color symbolic cue
When a target was present, a 2-way ANOVA on ISI and cue relevance as factors revealed that the main effect of cue relevance was significant, F(1.09, 7.61) = 9.48, p = 0.015, partial η 2 = 0.58. The interaction effect was marginally significant, F(3.20, 22.34) = 2.59, p = 0.075, partial η 2 = 0.27. Pairwise comparisons showed that accuracy was higher when the cue was valid than when it was neutral (p = 0.013, Cohen's d = 0.79), and this, in turn, was higher than when the cue was invalid (p = 0.027, Cohen's d = 1.03). None of the effects on accuracy were significant when a target was absent. 
Location cue
When a target was present, a 2-way ANOVA on ISI and cue relevance as factors revealed a significant main effect of cue relevance, F(1.30, 9.07) = 12.77, p = 0.004, partial η 2 = 0.64. Pairwise comparisons showed that the accuracy was marginally significantly higher when the cue was valid than when it was neutral (p = 0.06, Cohen's d = 0.45), and this, in turn, was significantly higher than when the cue was invalid (p = 0.008, Cohen's d = 0.90). None of the effects on accuracy were significant when a target was absent. 
Discussion
Psychophysical studies to investigate how attention or top-down control influences the visual search process have traditionally used the paradigm of block designs (Bacon & Egeth, 1997; Egeth et al., 1984; Farell & Pelli, 1993; Kaptein et al., 1995; Moore & Egeth, 1998; Olds & Fockler, 2004; Sobel et al., 2009), in which subjects searched for the same target in all the trials of a certain block. This paradigm does not allow one to study the temporal characteristics of the top-down control or how an instant hint about the target would affect the search process. A paradigm of cueing for a target on a trial-by-trial basis would serve these purposes. In some recent studies that used the trial-by-trial cueing paradigm (Vickery, King, & Jiang, 2005; Wolfe et al., 2004), the exact visual target was used as the cue to provide top-down guidance prior to the visual search stimuli. In such a paradigm, the maximum top-down information about the target was provided to observers. This mimics some of the situations in real life. For instance, when one is searching for a red mug in a room, he/she knows which exact mug he/she is looking for; therefore, he/she knows all the characteristics about the target prior to the search task. However, in some circumstances, we do not necessarily know what exact target we are looking for. Instead, we only know some partial information about the target. For example, we may search for a ball that we do not know the color of; in this case, we have partial information about the shape of the item we are searching for. Thus, several relevant questions can be raised: What kind of partial preknowledge is helpful for a visual search task? What is the time course of the guidance effects of preknowledge? Will the time course be different for different kinds of partial preknowledge? These are the questions that we investigated in the current study by using a trial-by-trial odd-man-out paradigm with different types of cues while varying the interstimulus interval. 
The cue relevance effects from different kinds of cues: Color, location, and orientation
Although the precue in the current study was not valid all the time (it had 80% validity), an efficient search strategy would still be to search first the subset that contained the cued feature. The precue was expected to guide the search by helping observers decide which subset to search first. Therefore, better performance should be observed for the valid than for the invalid cue condition. 
Our results showed that color (both sensory and symbolic) and location precues improved search performance. Valid cues led to better performance (shorter RT and/or higher accuracy), while misleading invalid cues resulted in a performance cost (longer RT and/or lower accuracy). These results are consistent with studies that used block designs and other paradigms (e.g., Brawn & Snowden, 1999; Cheal & Gregory, 1997; Moore & Egeth, 1998; Posner et al., 1980; Shih & Sperling, 1996). Observers were able to utilize the color and location precues to limit their search to a subset of the search items. In addition, the current findings also suggest that observers could update the search subset quickly on a trial-by-trial basis by using the color or location precue presented at the beginning of each trial. 
By contrast, visual search performance did not benefit from orientation precues. There was no difference in either RT or accuracy among the valid, neutral, and invalid cue conditions. The orientation precue did not seem to help observers select a subset efficiently. Overall, the data indicate that search performance was sensitive to location- and color-based cueing but not to orientation-based cueing. One possible explanation for this difference can be based on the speculation that cueing initiates a segmentation phase that is followed by a search phase that is easier than the original search because it is conducted on a search space that is roughly half the size of the original search space. In this regard, it must be noted that there is a subtle difference between cueing for location and cueing for an attribute such as color or orientation. To make things concrete, let us consider a search for a conjunctive target among GH (green horizontal) and RV (red vertical) distractors; the target can be either GV or RH. Due to the random selection of element positions, we expect each hemifield to contain approximately equal numbers of GH and RV distractors. Thus, when there is a valid location cue, say left hemifield, the search area is limited to the elements in that hemifield; however, the observer still has to perform a conjunctive search for a GV or RH target among GH and RV distractors. By contrast, when there is a valid color cue, say red, the observer may first segment the red elements (half the total) in the entire visual field and then perform a singleton search task for the odd-oriented target among the red elements (RH target among RV distractors); similarly, when there is a valid orientation cue, say horizontal, the observer may first segment the horizontal elements and then perform a singleton search task for the odd-colored target among the horizontal elements (RH target among GH distractors). Thus, each precue can be thought of as giving rise to two phases: a segmentation phase (based either on location or on an attribute), followed by a search phase over a subset of elements roughly half the size of the original search set. We limit our discussion here to the difference in performance sensitivity between color and orientation cueing (the comparison between color and location cueing is taken up in the last paragraph of this section). The task in the search phase that follows segmentation is of comparable difficulty for orientation and color cueing, both being singleton searches for a target that has maximal differences from distractors (red versus green and horizontal versus vertical). Thus, the difference in performance must be due mainly to the ability of the cued attribute to segment the search elements. Because of the brief presentation duration of the stimuli, the segmentation process must be accomplished quickly in order to limit the search to the most relevant subset. We conclude that color cues are more efficient than orientation cues in segmenting the elements, if indeed such a (speculative) segmentation process is initiated by cueing. 
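To make the two-phase account above concrete, the following Python sketch shows how each cue type could, in principle, halve the search set. It is our illustration of the authors' speculation, not a model they implemented: a location cue restricts the set spatially but still leaves a conjunction search within the subset, whereas a color or orientation cue restricts the set by attribute and leaves a singleton search.

# Illustrative only: segmentation by the cued attribute, then search within the subset.
def segment(display, cue_type, cue_value):
    """display: list of dicts like {'x': -2.1, 'color': 'red', 'orientation': 'horizontal'}."""
    if cue_type == "location":      # keep the cued hemifield (subset still needs a conjunction search)
        return [e for e in display if (e["x"] < 0) == (cue_value == "left")]
    if cue_type == "color":         # keep the cued color (singleton search on orientation remains)
        return [e for e in display if e["color"] == cue_value]
    if cue_type == "orientation":   # keep the cued orientation (singleton search on color remains)
        return [e for e in display if e["orientation"] == cue_value]
    return display                  # neutral cue: no segmentation

def find_odd_one(subset, feature):
    """Singleton search within a subset: return an element whose `feature` value is unique."""
    values = [e[feature] for e in subset]
    for e in subset:
        if values.count(e[feature]) == 1:
            return e
    return None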
These findings demonstrate that, during visual search, not all kinds of partial information about the target are equally beneficial for the task. Other studies have also shown that, among different kinds of features, color has an advantage over other features, such as size, orientation, and motion (Anderson et al., 2010; Moore & Egeth, 1998; Olds & Fockler, 2004; Sobel et al., 2009; Zhang et al., 2010). Sobel et al. (2009) reported that color feature preview helped visual search for a wide variety of stimulus configurations, such as stimuli with various display sizes, various densities, and various ratios of the numbers of the two distractor types; by contrast, orientation feature preview only facilitated the search when one type of distractors significantly outnumbered the other type of distractors. 
Some visual search and attention models have proposed that the visual system can use preknowledge about the target to weight different features, thereby allocating more attentional resources to the more relevant features (Pollmann et al., 2007; Pollmann, Weidner, Müller, & von Cramon, 2006; Wolfe, 1994). The current results confirm findings from other studies that not all types of preknowledge provide an advantage for more efficient top-down guidance of search. It could be that some features (e.g., color) have an intrinsic cueing advantage over other features (e.g., orientation). It could also be that the interactions between the visual attention mechanism and different feature channels vary; therefore, it may be easier for the visual system to deploy attention to some features than to others. The top-down guidance effect may also depend on how effectively the feature can be used for segmentation processes that limit the visual search to a subset of the stimuli. An interesting question for future research is why exactly different features have different effects in guiding visual search. 
In the present study, we also investigated whether a color sensory cue and a color symbolic cue affect the performance of a conjunctive search differently. In the study by Wolfe et al. (2004), in which the exact identity of the target was cued on each trial, pictorial cues were reported to be more effective than symbolic cues. Using random polygons and real-world stimuli, Vickery et al. (2005) also showed that observers searched faster when the cue was pictorial than when it was symbolic. In the current study, we obtained evidence that providing partial feature information in a symbolic format was not as helpful as providing it in a sensory format: namely, the cue relevance effects were observed on both the reaction time measure and the accuracy measure for color sensory cues, whereas, for color symbolic cues, the cue relevance effect was observed only on the accuracy measure but not on the reaction time measure. 
For target-absent trials, the results indicate an unexpected pattern: reaction times were shorter for neutral location cues than for informative location cues, whereas the opposite held for color cues. We hypothesize that, for both location and color cues, in target-absent trials observers first searched one of the two subsets of elements and, after not seeing a target, switched their attention to the other subset. For location cues, when an informative location cue was present, switching attention was harder and took slightly longer, because observers' attention had been involuntarily and strongly engaged by the cued hemifield. On the other hand, with a neutral location cue, observers' attention was not involuntarily biased toward either hemifield, so it took less time to switch attention from one hemifield to the other. As a result, reaction times were shorter for a neutral location cue than for an informative location cue when the target was absent. By contrast, for color cues, switching attention between colors may not require different effort whether the cue was informative or neutral; unlike a location cue, a color cue may not attract attention involuntarily. However, color-based segmentation may require more effort than location-based segmentation. Therefore, an informative color cue may be more helpful than a neutral color cue in segmenting the elements into two subsets, which results in a shorter reaction time for an informative cue. 
Time course of the precueing effect
Studies that used the cueing paradigm have shown that spatial and feature-based attention exhibit different temporal characteristics. Exogenous spatial attention exerts its effect at a very early stage, starting within 100 ms, while it takes about 200–300 ms for endogenous spatial attention to become effective (Cheal & Lyon, 1991; Posner et al., 1980). Feature-based attention is even slower than endogenous spatial attention (Hayden & Gallant, 2005; Liu et al., 2007). For instance, Liu et al. (2007) found that feature-based attention took longer than 300 ms to exert its effect. 
In studies investigating the temporal characteristics of the cueing effect on visual search tasks, Wolfe et al. (2004) reported that both an exact pictorial cue and a word cue led to faster RTs than did an uninformative cue when the cue was shown before the stimuli with a stimulus onset asynchrony (SOA) of 50 ms; the effect reached its maximum magnitude with SOAs of 200 and 400 ms. Using random polygons and 3D objects as stimuli, Vickery et al. (2005) also showed that it took 200 ms for an exact target cue to facilitate observers' search performance. The time course of top-down guidance of visual search therefore seems to depend on the cue type, the stimulus type, as well as the task. 
In the present study, we investigated the time course of the cueing effects when only one feature dimension of the target was cued. Both color sensory and location cues exhibited their effects in directing visual search immediately after cue offset (0-ms ISI or, equivalently, 50-ms SOA). We believe that, in the current search task, location cues involve both exogenous and endogenous spatial attention. Therefore, it is not surprising that the location cueing effects manifest themselves shortly after cue onset. On the other hand, it is interesting that the color sensory cues showed temporal characteristics similar to those of location cues. These results, together with the findings from previous studies investigating the time course of feature-based attention (Busse, Katzner, & Treue, 2006; Hayden & Gallant, 2005; Liu et al., 2007), imply that the time course of feature-based attention depends on the specific stimuli/features involved in the task. For motion stimuli, it takes longer for feature-based attention than for spatial attention to modulate task performance (Liu et al., 2007), whereas in tasks that involve color as a feature, the modulation effect of feature-based attention appears earlier. In addition, the effects of the color sensory and location precues remained constant across all tested ISIs (0–700 ms) in the current study. By contrast, the orientation precue did not show a guidance effect at any ISI tested in the current study; it is possible that it takes longer for an orientation cue to exert its effect. It is also an interesting question for further research whether the cueing effects would diminish at longer ISIs for color sensory and location cues. 
It is worth noting that the color symbolic cue exhibited different temporal characteristics in guiding search. In trials where a target was present, the interaction effect on accuracy between cue relevance and ISI was marginally significant. The results showed a tendency for the difference in accuracy between the neutral cue and the invalid cue conditions to vary with ISI. Accuracy was not significantly different for the neutral and the invalid cue conditions if the cue was presented immediately before the search stimuli; however, accuracy dropped significantly with increasing ISI (in the ISI range of 100 to 700 ms) for the invalid cue condition. Observers were also slower in trials with short ISIs than in those with long ISIs. These results may indicate that, compared to a color sensory cue or a location cue, it takes longer for a color symbolic cue to affect visual search performance. One plausible explanation is that the additional RT reflects the processing time needed to read the text and form color imagery to facilitate the search. Similarly, Wolfe et al. (2004) also reported that it took longer for a word cue than for a pictorial cue to exert its full effect in assisting visual search tasks. In addition, for color symbolic cues, the cue relevance effect was due to a performance cost from an invalid cue rather than a performance gain from a valid cue. 
In the present study, performance reflected the combined effect of two putative phases: a segmentation process followed by an easier search task among roughly half the original search elements. This makes it impossible to separate the individual contribution of each phase under the present experimental paradigm. The fact that overall performance with color- and location-based cueing was roughly equal suggests that location-based segmentation is more efficient than color-based segmentation, given that the search task is harder for location cueing (a conjunctive search) than for color cueing (a singleton search). We acknowledge that the above scheme of splitting conjunctive visual search into two phases is quite speculative. Establishing the existence of these two phases and studying the spatial and temporal characteristics of each phase separately remains a challenge for future studies. 
Acknowledgments
The authors wish to thank Dr. Xiaotao Su for technical assistance, as well as Sue Cosentino and Jo'Ann Meli for administrative support. 
Commercial relationships: none. 
Corresponding author: Xiaohua Zhuang. 
Address: 940 E. 57th Street, BPSB 125, Chicago, IL 60637, USA. 
References
Andersen S. K. Hillyard S. A. Müller M. M. (2008). Attention facilitates multiple stimulus features in parallel in human visual cortex. Current Biology, 18, 1006–1009.
Anderson G. M. Heinke D. Humphreys G. W. (2010). Featural guidance in conjunction search: The contrast between orientation and color. Journal of Experimental Psychology: Human Perception and Performance, 36, 1108–1127.
Bacon W. F. Egeth H. E. (1997). Goal-directed guidance of attention: Evidence from conjunctive visual search. Journal of Experimental Psychology: Human Perception and Performance, 23, 948–961.
Bichot N. P. Rossi A. F. Desimone R. (2005). Parallel and serial neural mechanisms for visual search in macaque area V4. Science, 308, 529–534.
Brawn P. Snowden R. J. (1999). Can one pay attention to a particular color? Perception & Psychophysics, 61, 860–873.
Busse L. Katzner S. Treue S. (2006). Spatial and feature-based effects of exogenous cueing on visual motion processing. Vision Research, 46, 2019–2027.
Cave K. R. Wolfe J. M. (1990). Modeling the role of parallel processing in visual search. Cognitive Psychology, 22, 225–271.
Cheal M. Lyon D. R. (1991). Importance of precue location in directing attention. Acta Psychologica, 76, 201–211.
Cheal M. L. Gregory M. (1997). Evidence of limited capacity and noise reduction with single-element displays in the location-cueing paradigm. Journal of Experimental Psychology: Human Perception and Performance, 23, 51–71.
Dosher B. A. Han S. Lu Z. L. (2010). Information-limited parallel processing in difficult heterogeneous covert visual search. Journal of Experimental Psychology: Human Perception and Performance, 36, 1128–1144.
Egeth H. E. Virzi R. A. Garbart H. (1984). Searching for conjunctively defined targets. Journal of Experimental Psychology: Human Perception and Performance, 10, 32–39.
Egner T. Monti J. M. P. Trittschuh E. H. Wieneke C. A. Hirsch J. Mesulam M. M. (2008). Neural integration of top-down spatial and feature-based information in visual search. Journal of Neuroscience, 28, 6141–6151.
Farell B. Pelli D. G. (1993). Can we attend to large and small at the same time? Vision Research, 18, 2757–2772.
Geyer T. Müller H. J. Krummenacher J. (2006). Cross-trial priming in visual search for singleton conjunction targets: Role of repeated target and distractor features. Perception & Psychophysics, 68, 736–749.
Geyer T. Shi Z. Müller H. J. (2010). Contextual cueing in multiconjunction visual search is dependent on color- and configuration-based intertrial contingencies. Journal of Experimental Psychology: Human Perception and Performance, 36, 515–532.
Geyer T. Zehetleitner M. Müller H. J. (2010). Contextual cueing of pop-out visual search: When context guides the deployment of attention. Journal of Vision, 10(5):20, 1–11, http://www.journalofvision.org/content/10/5/20, doi:10.1167/10.5.20.
Hayden B. Y. Gallant J. L. (2005). Time course of attention reveals different mechanisms for spatial and feature-based attention in area V4. Neuron, 47, 637–643.
Kaptein N. A. Theeuwes J. Van der Heijden A. H. C. (1995). Search for a conjunctively defined target can be selectively limited to a color-defined subset of elements. Journal of Experimental Psychology: Human Perception and Performance, 21, 1053–1069.
Khurana B. Kowler E. (1987). Shared attentional control of smooth eye movement and perception. Vision Research, 27, 1603–1618.
Kristjánsson A. (2006). Simultaneous priming along multiple feature dimensions in a visual search task. Vision Research, 46, 2554–2570.
Kristjánsson A. Wang D. Nakayama K. (2002). The role of priming in conjunctive visual search. Cognition, 85, 37–52.
Liu T. Stevens S. T. Carrasco M. (2007). Comparing the time course and efficacy of spatial and feature-based attention. Vision Research, 47, 108–113.
Lu Z. L. Sperling G. (1996). Three systems for visual motion perception. Current Directions in Psychological Science, 5, 44–53.
Luck S. J. Chelazzi L. Hillyard S. A. Desimone R. (1997). Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. Journal of Neurophysiology, 77, 24–42.
Maljkovic V. Nakayama K. (1994). Priming of pop-out: I. Role of features. Memory & Cognition, 22, 657–672.
Maljkovic V. Nakayama K. (1996). Priming of pop-out: II. Role of position. Perception & Psychophysics, 58, 977–991.
Martinez-Trujillo J. C. Treue S. (2004). Feature-based attention increases the selectivity of population responses in primate visual cortex. Current Biology, 14, 744–751.
McAdams C. J. Maunsell J. H. R. (2000). Attention to both space and feature modulates neuronal responses in macaque area V4. Journal of Neurophysiology, 83, 1751–1755.
Melcher D. Papathomas T. V. Vidnyanszky Z. (2005). Implicit attentional selection of bound visual features. Neuron, 46, 723–729.
Moore C. M. Egeth H. (1998). How does feature-based attention affect visual processing? Journal of Experimental Psychology: Human Perception and Performance, 24, 1296–1310.
Olds E. S. Fockler K. A. (2004). Does previewing one stimulus feature help conjunction search? Perception, 33, 195–216.
Olds E. S. Graham T. J. Jones J. A. (2009). Feature head-start: Conjunction search following progressive feature disclosure. Vision Research, 49, 1428–1447.
Pollmann S. Mahn K. Reimann B. Weidner R. Tittgemeyer M. Preul C. et al. (2007). Selective visual dimension weighting deficit after left lateral frontopolar lesions. Journal of Cognitive Neuroscience, 19, 365–375.
Pollmann S. Weidner R. Müller H. J. von Cramon D. Y. (2006). Neural correlates of visual dimension weighting. Visual Cognition, 14, 877–897.
Posner M. I. Snyder C. R. Davidson B. J. (1980). Attention and the detection of signals. Journal of Experimental Psychology, 109, 160–174.
Proulx M. J. (2007). Bottom-up guidance in visual search for conjunctions. Journal of Experimental Psychology: Human Perception and Performance, 33, 48–56.
Saenz M. Buracas G. T. Boynton G. M. (2002). Global effects of feature-based attention in human visual cortex. Nature Neuroscience, 5, 631–632.
Saenz M. Buracas G. T. Boynton G. M. (2003). Global feature-based attention for motion and color. Vision Research, 43, 629–637.
Shih S. I. Sperling G. (1996). Is there feature-based attentional selection in visual search? Journal of Experimental Psychology: Human Perception and Performance, 22, 758–779.
Sobel K. V. Pickard M. D. Acklin W. T. (2009). Using feature preview to investigate the roles of top-down and bottom-up processing in conjunction search. Acta Psychologica, 132, 22–30.
Thornton T. L. Gilden D. L. (2007). Parallel and serial processes in visual search. Psychological Review, 114, 71–103.
Treisman A. Gelade G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97–136.
Treisman A. Vieira A. Hayes A. (1992). Automaticity and preattentive processing. American Journal of Psychology, 105, 341–362.
Treue S. Martinez-Trujillo J. C. (1999). Feature-based attention influences motion processing gain in macaque visual cortex. Nature, 399, 575–579.
Vickery T. J. King L. W. Jiang Y. H. (2005). Setting up the target template in visual search. Journal of Vision, 5(1):8, 81–92, http://www.journalofvision.org/content/5/1/8, doi:10.1167/5.1.8.
Weidner R. Krummenacher J. Reimann B. Müller H. J. Fink G. R. (2009). Sources of top-down control in visual search. Journal of Cognitive Neuroscience, 21, 2100–2113.
Wolfe J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin and Review, 1, 202–238.
Wolfe J. M. (1998). Visual memory: What do you know about what you saw? Current Biology, 8, 303–304.
Wolfe J. M. Butcher S. J. Lee C. Hyle M. (2003). Changing your mind: On the contributions of top-down and bottom-up guidance in visual search for feature singletons. Journal of Experimental Psychology: Human Perception and Performance, 29, 483–502.
Wolfe J. M. Cave K. R. Franzel S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433.
Wolfe J. Horowitz T. Kenner N. M. Hyle M. Vasan N. (2004). How fast can you change your mind? The speed of top-down guidance in visual search. Vision Research, 44, 1411–1426.
Zhang B. Zhang J. X. Kong L. Huang S. Yue Z. Wang S. (2010). Guidance of visual attention from working memory contents depends on stimulus attributes. Neuroscience Letters, 486, 202–206.
Zohary E. Hochstein S. (1989). How serial is serial processing in vision? Perception, 18, 191–200.