Research Article | January 2009
The specificity of the search template
Mary J. Bravo, Hany Farid
Journal of Vision January 2009, Vol. 9(1):34. doi:https://doi.org/10.1167/9.1.34
Abstract

When searching for a target object, observers use an internal representation of the target's appearance as a search template. This study used naturalistic stimuli to examine the specificity of this template. Observers first learned several name-image pairs; they then participated in a search experiment in which the names served as cues and the images served as targets. To test whether the observers searched for the targets using an exact image template, we included targets that were transformations of the studied image and targets that belonged to the same subordinate-level category as the studied image. The same stimuli were also used in a search experiment involving image cues. The name cue and image cue experiments produced different patterns of results. Unlike image cues, name cues produced benefits for transformations of the studied images that were similar to the benefits for the studied images themselves. Also unlike image cues, name cues produced no benefit for members of the same subordinate-level category as the studied image. These results suggest that when observers are trained on an image, they develop a search template that is relatively specific for the image but still tolerant to changes in scale and orientation.

Introduction
Blue jays looking for moths, pigeons looking for seeds, and humans looking for colored rectangles all show remarkable similarities in their visual search behavior. One similarity is that search is fastest when the search target does not vary over time (Blough, 2002; Bond & Kamil, 2002; Kristjánsson, Wang, & Nakayama, 2002). Because the target is predictable, observers can selectively attend to a subset of the stimulus. This selection process is thought to involve the creation of a search template that specifies the target's appearance (Bundesen, 1990; Duncan & Humphreys, 1989; Hamker, 2005; Tinbergen, 1960; Vickery, King, & Jiang, 2005). This template is then used to bias the processing of sensory information in favor of stimuli that resemble the target (Desimone & Duncan, 1995; Usher & Niebur, 1996). While there is empirical support for the existence of a search template (Chelazzi, Duncan, Miller, & Desimone, 1998; Chelazzi, Miller, Duncan, & Desimone, 1993), the characteristics of this template are unknown. 
One obstacle to characterizing the search template has been that many conditions that elicit selective attention also elicit perceptual priming (Kristjánsson et al., 2002; Maljkovic & Nakayama, 1994). Perceptual priming is the enhanced processing of a stimulus that has been encountered repeatedly, especially if the stimulus was behaviorally relevant (Wiggs & Martin, 1998). A key difference between selective attention and perceptual priming is that selective attention depends on the observer's expectations, while perceptual priming operates independently of these expectations. Although these two processes are clearly distinct, they are typically confounded in search experiments. In most experiments, observers either search for the same target across trials or they preview the target before the onset of the search array (e.g., Vickery et al., 2005; Zelinsky, Rao, Hayhoe, & Ballard, 1997). Because these methods use targets that are both predictable and repeated, they may invoke both selective attention and perceptual priming. In this study, we isolated the effect of selective attention by using targets that repeated infrequently and by prompting observers with name cues rather than image cues. 
Isolating the effect of selective attention allowed us to examine the nature of the search template. Previous researchers have usually characterized the search template as either an exact image or as a set of basic features (Bourke & Duncan, 2005; Najemnik & Geisler, 2005; Rajashekar, Cormack, & Bovik, 2004; Usher & Niebur, 1996; cf. Rao, Zelinsky, Hayhoe, & Ballard, 2002). Such templates are not unreasonable for very simple stimuli, but they may be poorly suited for real objects. Basic features like “vertical” or “curved” are too general to be effective search templates because these features are ubiquitous in natural images. In contrast, exact images are too specific to be effective search templates because they cannot tolerate the variations that arise when an object is seen from different viewpoints. The search templates used in everyday vision likely lie between these two extremes, showing both specificity for particular objects or categories of objects and tolerance to viewpoint variation.
The goal of this study was to examine the specificity of the search templates used in everyday search. Although exact images are unlikely candidates because of their extreme specificity, these templates are well defined and so easily tested, and we used them as our starting point. We trained observers to associate target names with specific images so that when they were later cued with a name in a search experiment, they could form, at least in theory, an exact image template. To test whether observers actually use exact templates, we measured search performance for targets that varied in their similarity to the studied image. 
The stimuli we chose for the experiment were photographic composites of coral reef scenes, and the search targets were tropical fish (Figure 1). Each scene contained at most one fish, and the observer's task was always simply to decide whether a fish (any fish) was present. The targets were drawn from ten fish species with highly distinctive colors and markings. We chose these stimuli for two reasons. First, because coral reef scenes are relatively unstructured, they place no constraints on the location of the fish targets. And second, because the distinctive patterns on the fish act as disruptive camouflage (Stevens, Cuthill, Windsor, & Walker, 2006), they conceal the fish's shapes. Assuming that shape is the key attribute for categorizing an object as a fish, these patterns make it difficult to search for fish in general. At the same time, the distinctiveness of the patterns makes it easy to search for a specific fish. Maximizing the difference between search for the basic-level category and search for an individual was important because our effect size was constrained by the difference between searching with and without cues to the target fish's identity.
Figure 1
Examples of the coral reef stimuli.
Before participating in the experiment, the observers learned to associate the names of five species with an image from each species. During the experiment, the observers were cued with one of the five studied names before the onset of the reef scene. On some trials the target in the scene was the image that observers had learned to associate with the species name, but on other trials the target was only similar to this image. To examine the specificity of the observer's search template we used three levels of target variation:
1. No variation: the target was identical to the studied image.
2. 2D viewpoint variation: the target was a rotated, flipped, and scaled version of the studied image.
3. Subordinate-level variation: the target was from the same species as the studied image.
As controls, we also included a nonspecific cue condition (the cue was simply the word “fish”) as well as target fish from species that had not been studied.
On the surface, the interpretation of this experiment seems fairly straightforward: If the name cues prompt observers to use an exact image template to search for the target, then we expect a cue advantage only for the targets that are identical to the studied image. If the name cues prompt observers to use a less specific template that is tolerant to viewpoint and exemplar variation, then we expect the name cue to benefit all three conditions relative to the nonspecific cue condition. This interpretation is complicated, however, by the uncontrolled nature of the stimuli. We elected to use natural objects and natural categories because we wanted to study the type of problem that the visual system is designed to solve. But because it is unclear how to measure the visual similarity of natural objects, we could not quantify the differences between our conditions. If, for example, the results showed no difference between the same-image and same-species conditions, this might reveal something about the nature of the search template, or it might simply indicate that there is little variation within species. The interpretation of these results would be clearer if we had another measure of the perceptual similarity of our stimuli. 
To obtain a second measure of the perceptual similarity of our stimuli, we conducted a preliminary experiment with image cues. The image cue was either the target image, a transformed version of the target image, or an image of a fish from the same species as the target image. Previous research has shown that the best image cue is an exact match of the target: If the cue and target differ in size, orientation, or if they are different exemplars of the same type, then the cue is less effective (Vickery et al., 2005; Wolfe, Horowitz, Kenner, Hyle, & Vasan, 2004). Based on these results, we expected that the image cue experiment would provide a sensitive measure of the similarity of our stimuli. 
Experiment 1: Image cues
This preliminary experiment employed a method that is common in visual search experiments: Before the presentation of each search stimulus, the observers were shown an image of the search target. Because the target was both repeated and predictable, this experiment confounds the effects of perceptual priming and selective attention. But even though the results cannot be attributed to a particular process, they do provide a measure of the perceptual similarity of our stimuli in the context of our search task. In this sense, this first experiment serves as a control for our main experiment. 
Methods
Stimuli
The stimuli were computer-generated composites of photographs downloaded from flickr.com, fishbase.org, and other photo-sharing websites. A unique composite was created for each trial. The composites consisted of 11 sea creatures superimposed on an extended background. The search stimulus backgrounds were drawn from a set of fifty reef images, 1064 × 768 pixels in size. The sea creatures were drawn from a set of 50 images that included anemones, hard and soft corals, sea stars, sea clams, and sea slugs. These sea creatures were positioned randomly within the image, subject only to an overlap constraint that ensured that each creature was at least 75% visible. In half of the images, the last sea creature added was a fish. The fish were drawn from a set of 100 images representing ten species belonging to the tang, butterflyfish, and angelfish families (Figure 2). The fish and other sea creatures were scaled to have an area of 50,000 pixels. Before they were added to the display, the fish images were transformed by a rescaling, a rotation, and, in half the displays, a reflection across the vertical midline. The magnitude of the rescaling ranged from 0.75 to 1.25, and the magnitude of the rotation ranged from −45 to +45 degrees. (The rotation angle was limited so that the fish would not appear to be illuminated from below or to be swimming upside down.)
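To make this procedure concrete, here is a minimal sketch of the compositing step, written in Python with Pillow. It is an illustration under stated assumptions, not the authors' code: the file paths and RGBA cutouts are hypothetical, and the 75%-visibility overlap constraint is only noted in a comment.

```python
import math
import random
from PIL import Image, ImageOps  # pip install Pillow

CREATURE_AREA = 50_000  # each creature is scaled to an area of ~50,000 pixels

def scale_to_area(img, area=CREATURE_AREA):
    # Rescale so the image covers roughly `area` pixels.
    s = math.sqrt(area / (img.width * img.height))
    return img.resize((round(img.width * s), round(img.height * s)))

def random_transform(img):
    # The transformation described above: rescaling (0.75-1.25), rotation
    # (-45 to +45 deg), and, on half the displays, a left-right reflection.
    s = random.uniform(0.75, 1.25)
    img = img.resize((round(img.width * s), round(img.height * s)))
    img = img.rotate(random.uniform(-45, 45), expand=True)
    if random.random() < 0.5:
        img = ImageOps.mirror(img)
    return img

def make_composite(background_path, creature_paths, fish_path=None):
    # One search stimulus: 11 sea creatures pasted at random positions on a
    # reef background; on present trials the fish is the last creature added.
    scene = Image.open(background_path).convert("RGBA")
    n_nonfish = 10 if fish_path is not None else 11
    for path in random.sample(creature_paths, n_nonfish):
        cutout = scale_to_area(Image.open(path).convert("RGBA"))
        # NOTE: the real procedure also enforced that each creature remained
        # at least 75% visible after overlap; omitted here for brevity.
        x = random.randint(0, scene.width - cutout.width)
        y = random.randint(0, scene.height - cutout.height)
        scene.alpha_composite(cutout, (x, y))
    if fish_path is not None:
        fish = random_transform(scale_to_area(Image.open(fish_path).convert("RGBA")))
        x = random.randint(0, scene.width - fish.width)
        y = random.randint(0, scene.height - fish.height)
        scene.alpha_composite(fish, (x, y))
    return scene.convert("RGB")
```

A per-trial driver would then choose the background, the distractor set, and, on present trials, a fish image appropriate to the condition.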
Figure 2
The targets were drawn from 100 fish images representing ten fish species. One image from each of nine species is shown (top), along with six images of the tenth species (bottom). The names assigned to the ten species were (left to right, top to bottom) shoulder, zebra, emperor, copper, blue, pyramid, lined, thread, powder, and saddle.
On every trial in the experiment, a target was randomly selected from the 100 fish images and randomly transformed. The trials differed only in the relationship between the cue and the target:
1. In the “identical” condition, the cue and the target were identical.
2. In the “transformed image” condition, the cue and the target differed by an image transformation.
3. In the “same species” condition, the cue and the target were different fish from the same species.
4. In the “nonspecific cue” condition, the cue was simply the word “fish.”
Procedure
The experiment was run on a 24-inch iMac, using MATLAB software and Psychtoolbox routines (Brainard, 1997; Pelli, 1997). It consisted of 14 blocks of 40 trials each, with the first block discarded as practice. Observers initiated the first trial. After a 1 sec delay, a cue was presented for 200 msec, followed by a 200 msec blank interval. The search stimulus was then presented until the observer responded by pressing a key on the keyboard. Observers responded “f” if they judged that a fish was present and “j” if they judged that no fish was present. Auditory feedback was given after incorrect responses, and the next trial followed a 1.5 sec delay. A fish was present on half of the trials, and these present trials were equally divided among the four conditions: identical image, transformed image, same species, and nonspecific cue. The conditions were intermixed within blocks.
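The paper gives no code, so the following single-trial sketch uses PsychoPy (Python) as a rough stand-in for the MATLAB/Psychtoolbox implementation. The window size matches the background images, the stimulus file names are hypothetical, and error feedback is reduced to a comment.

```python
from psychopy import core, event, visual

win = visual.Window(size=(1064, 768), units="pix")

def run_trial(cue_stim, scene_path, fish_present):
    # 200 msec cue, then a 200 msec blank interval.
    cue_stim.draw(); win.flip(); core.wait(0.2)
    win.flip(); core.wait(0.2)  # blank screen

    # The search stimulus stays up until the observer presses "f" or "j".
    visual.ImageStim(win, image=scene_path).draw()
    win.flip()
    clock = core.Clock()
    key, rt = event.waitKeys(keyList=["f", "j"], timeStamped=clock)[0]

    correct = (key == "f") == fish_present
    # The original gave auditory feedback after incorrect responses.
    core.wait(1.5)  # delay before the next trial
    return rt, correct

# Example: a nonspecific-cue trial (the cue is simply the word "fish").
rt, correct = run_trial(visual.TextStim(win, text="fish"), "scene_001.png", True)
```

A full session would wrap run_trial in the 14-block loop described above and log the response and reaction time for each condition.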
Observers
Twenty observers were recruited from the Introduction to Psychology subject pool at Rutgers-Camden. These observers were evenly divided between the two experiments. All observers reported having normal acuity and normal color perception. 
Results and discussion
Figure 3, left, shows the reaction time results for the present trials in the image cue experiment (accuracy was similar across conditions and always exceeded 95%). Consistent with previous studies, the results showed a gradual increase in search times as the similarity between the cue and target decreased. The cue that produced the fastest search times was the image of the target itself, while cues that were transformations of the target produced slightly slower search times. Cues that were from the same species as the target produced still slower search times, and the control condition with nonspecific cues produced the slowest search times of all. Paired t-tests indicated that each increase was significant (t(9) = 2.64, 4.42, and 4.56; p < 0.05, 0.01, and 0.01, respectively). The magnitude of the cue advantage can be calculated by subtracting the search times for the cued conditions from the search times for the nonspecific cue baseline, as shown in Figure 3, right.
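These comparisons are straightforward to reproduce. The sketch below runs the paired t-tests across observers and computes each cue advantage as the nonspecific baseline minus the cued condition. The per-observer means here are randomly generated placeholders (the real values come from the experiment), and the condition labels are ours.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Placeholder per-observer mean RTs in msec (n = 10 observers); substitute
# the measured means from the experiment here.
rt = {
    "identical":    rng.normal(700, 60, 10),
    "transformed":  rng.normal(740, 60, 10),
    "same_species": rng.normal(800, 60, 10),
    "nonspecific":  rng.normal(860, 60, 10),
}

order = ["identical", "transformed", "same_species", "nonspecific"]
for a, b in zip(order, order[1:]):
    t, p = ttest_rel(rt[b], rt[a])  # paired across observers, df = 9
    print(f"{b} vs {a}: t(9) = {t:.2f}, p = {p:.3f}")

# Cue advantage: nonspecific-cue baseline minus each cued condition.
for cond in order[:-1]:
    adv = rt["nonspecific"] - rt[cond]
    sem = adv.std(ddof=1) / np.sqrt(adv.size)
    print(f"{cond}: advantage = {adv.mean():.0f} msec (SEM = {sem:.0f})")
```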
Figure 3
Results of the image cue experiment. (Left) Search times for fish targets preceded by an image cue that was either identical to the target, a rotated and scaled version of the target, a member of the same species as the target, or simply the word “fish”. (Right) The difference in search times between the nonspecific cue condition and each of the image cue conditions (error bars = +1 SEM).
As noted earlier, the results from this experiment likely reflect both selective attention and priming. The graded effect seen in Figure 3 is common in the priming literature, where the magnitude of priming is thought to reflect the degree of overlap between the representations of the cue and the target at one or more levels of visual processing (Wiggs & Martin, 1998). These results suggest that at some level(s) of processing there is overlap between the representations of images that are transformations of one another and, to a lesser extent, between the representations of images that depict members of the same subordinate-level category. The data in Figure 3, right, can be taken as a measure of this overlap. For our purposes, this figure shows that the three levels of image variation (identical, transformed image, same species) correspond to three distinct levels of perceptual similarity.
Half of the trials in this experiment were absent trials, and on average observers responded faster to these trials when they were preceded by an image cue (1,245 msec, SEM = 101 msec) than when they were preceded by a nonspecific cue (1,311 msec, SEM = 99 msec) (t(9) = 3.66, p < 0.0026). In other words, observers were faster to decide that a particular fish was absent than to decide that all fish were absent. This result is not easily explained by priming, and so it suggests that selective attention played a role in this experiment.
Experiment 2: Name cues
To isolate the process of selective attention and examine the specificity of the search template, we used the coral reef stimuli in an experiment with name cues. Observers first learned to associate the names of five fish species with an image of a fish from each species. Then, during the search experiment, the observers were cued with one of these names prior to the onset of the search stimulus. The target in the search stimulus was related to the image associated with the name cue in one of several ways: The target was either identical to the associated image, a transformed version of the associated image, or a member of the same species as the associated image. 
Methods
Training
One fish image was selected randomly from each of the ten fish species. These ten fish images were divided into two sets, and half of the observers were trained on each set. During training, the observers learned to associate each fish image with a word that was a shortened version of the fish's common species name (see Figure 2 for the names). The training exercises involved several tasks, but the structure of each task was the same: a name cue was presented for 200 msec, then a blank screen was presented for 800 msec, and then the stimulus was presented. The training session lasted 50 minutes and involved 16 blocks of 40 trials for a total of 66 correct associations of each name-image pair. 
The tasks involved in the 16 training blocks were as follows: In block one, the observers passively viewed the name-image pairs. In blocks two and three, the observers were tested on the name-image pairs; the pairing was incorrect on 20% of the trials, and the observers were instructed to respond ‘f’ to correct pairs and ‘j’ to incorrect pairs. In blocks four through eight, the observers searched for the fish, responding ‘f’ for present, ‘j’ for absent. A fish was present on half the trials, and the five fish were run in separate blocks. In blocks nine and ten, the observers were again tested on the name-image pairs. In blocks eleven through fourteen, the observers searched for the fish, but this time the five species were intermixed. And finally, in blocks fifteen and sixteen, the observers were again tested on the name-image pairs. Performance levels were high throughout training: Every observer completed every task with at least 90% accuracy.
Testing
One or two days after their training session, the observers returned for the search experiment. As in the preceding experiment, the observers' task was simply to determine if a fish was present in the coral reef scenes. Before some scenes, observers were given a cue to the fish's identity that could potentially facilitate their search. In the preceding experiment, this cue was an image, but in this experiment the cue was a name that had been previously associated with an image. The use of name cues and name-image associates led us to make two other changes. First, we lengthened the cue lead-time from 400 msec to 1000 msec. This change was based on previous research showing that the optimal cue lead-time for words is longer than that for images (Vickery et al., 2005; Wolfe et al., 2004). Second, for each of the three cued conditions (identical image, transformed image, and same species), we added a nonspecific cue condition to measure the effects of long-term priming. As in the previous experiment, the nonspecific cue was simply the word “fish”. These nonspecific cue conditions were necessary because, unlike the previous experiment in which observers were exposed to the 100 fish images with equal frequency, the observers in this experiment were preferentially exposed to the images associated with the name cues. By including nonspecific cues along with the name cues, we could separate the contributions of long-term perceptual priming and selective attention.
Table 1 shows the nine conditions of this experiment. These conditions were completely intermixed in 14 blocks of 40 trials each. The timing of each trial (200 msec cue, 800 msec delay) was the same as in the training session. 
Table 1
The nine intermixed conditions of the name cue experiment.
Cue \ Target                          Identical image   Transformed image   Same species   New species   None
Name (“lined, saddle, thread, etc.”)  40                40                  40             –             120
Nonspecific (“fish”)                  40                40                  40             40            160
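As a sanity check on the design, this short sketch (ours, not from the paper) expands the cell counts in Table 1 into a full trial list and confirms that they fill the 14 blocks of 40 trials; the condition labels are hypothetical.

```python
import random

# (cue, target) -> trial count, copied from Table 1.
COUNTS = {
    ("name", "identical"): 40,
    ("name", "transformed"): 40,
    ("name", "same_species"): 40,
    ("name", "absent"): 120,
    ("nonspecific", "identical"): 40,
    ("nonspecific", "transformed"): 40,
    ("nonspecific", "same_species"): 40,
    ("nonspecific", "new_species"): 40,
    ("nonspecific", "absent"): 160,
}

trials = [cond for cond, n in COUNTS.items() for _ in range(n)]
assert len(trials) == 14 * 40  # the nine conditions fill 14 blocks of 40 trials

random.shuffle(trials)  # the conditions were completely intermixed
blocks = [trials[i:i + 40] for i in range(0, len(trials), 40)]
```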
Results and discussion
In this search experiment, observers were cued with names that they had been trained to associate with a particular image. Although our main goal was to measure the effect of these name cues on search, we first assessed whether there were any incidental effects of the training. Because the training exposed observers to each studied image numerous times, it likely caused long-term perceptual priming of these images. But since the effects of long-term priming are independent of the observer's expectations, they can be measured readily with a nonspecific cue that does not bias these expectations. The light bars in Figure 4 (left) and the bottom row of Table 2 show the results for the four target conditions with nonspecific cues. As a measure of the within-condition variance we used the MSW term of an ANOVA, and from this we determined that mean reaction times that differed by at least 45 msec were significantly different at the p < 0.05 level. According to this criterion, the three conditions with targets related to the studied images produced significantly faster search times than the one condition with targets unrelated to the studied images. These results indicate that the training caused long-term perceptual priming of the studied images and that the effects of this priming generalized to transformations of the studied images and to members of the same species as the studied images. The differences between these three related conditions were not significant. 
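The 45-msec criterion corresponds to a least-significant-difference threshold computed from the ANOVA's within-condition mean square (MSW). The sketch below shows that computation. The paper reports the resulting criterion rather than the mean square itself, so the MSW value here is a placeholder chosen to land near 45 msec, and the 27 error df assume a 4 (condition) × 10 (observer) repeated-measures design.

```python
import math
from scipy.stats import t

def lsd_criterion(msw, n_per_cond, df_error, alpha=0.05):
    # Smallest difference between two condition means that reaches
    # significance: t_crit * sqrt(2 * MSW / n).
    t_crit = t.ppf(1 - alpha / 2, df_error)
    return t_crit * math.sqrt(2 * msw / n_per_cond)

# Placeholder MSW (msec^2): with 10 observers and (4-1)*(10-1) = 27 error df,
# an MSW near 2,500 msec^2 yields a criterion of about 46 msec.
print(round(lsd_criterion(msw=2500.0, n_per_cond=10, df_error=27)))
```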
Figure 4
Results of the name cue experiment. (Left) Light bars: The cue was nonspecific (i.e., the word “fish”) and the target varied in its relationship to one of the studied images. Dark bars: The cue was a studied name and the target varied in its relationship to the studied image associated with the name. (Right) The search advantage produced by the name cue, error bars = +1 SEM.
Table 2
Error rates for the name cue experiment.
Cue \ Target     Identical image   Transformed image   Same species   New species   None
Name             3%                2%                  8%             –             1%
Nonspecific      4%                3%                  7%             6%            3%
During training, observers were also exposed to the search task and to the coral reef backgrounds. Previous research has shown that familiarity with the non-targets facilitates search even more than familiarity with the targets (Malinowski & Hübner, 2001; Mruczek & Sheinberg, 2005; Shen & Reingold, 2001; Wang, Cavanagh, & Green, 1994). This likely explains why the search times for this experiment are faster than those for the first experiment. The level of between-subject variability was high, however, and the difference between the two comparable conditions (855 msec, SE = 60 msec for the nonspecific cue in Experiment 1, and 734 msec, SE = 64 msec for the nonspecific cue/new species target in Experiment 2) was not significant (t(9) = 1.46, p = 0.161).
Turning now to the main goal of the study, the dark bars on the left of Figure 4 show the search times for trials in which the search stimulus was preceded by a name cue. If the name cues benefited search, they should reduce search times relative to the nonspecific cues by at least 45 msec. This criterion was exceeded on name cue trials in which the target was identical to the studied image and on trials in which the target was a transformed version of the studied image. The name cue advantage did not extend, however, to targets that belonged to the same subordinate-level category as the studied image. The advantage provided by the name cues is shown explicitly at the right of Figure 4. Search times for the target absent trials with name cues and nonspecific cues were 860 and 890 msec, respectively.
These results suggest that observers trained on an image develop a search template that is largely invariant to transformations while still being highly specific for the image. In making this inference about the specificity of the template, it should be noted that the absence of a cue advantage for the same-species condition (Figure 4, right) could reflect the averaging of positive and negative cueing effects. In the same-species condition, the targets were drawn from nine images of each species. Although the name cue did not benefit these stimuli as a group, it is possible that the cue facilitated search for some targets while impeding search for others. Because each same-species target was tested only once, our data do not allow us to examine the cueing effects for individual targets. Nonetheless, the pooled data do allow us to conclude that observers were using a search template that was more specific than the range of variation found within the species.
General discussion
In most everyday search tasks, observers search for familiar objects: their car keys, their cell phone, their favorite brand of mustard. In performing these tasks, observers bias their visual processing in favor of stimuli that resemble a search template recalled from memory. To simulate this search task in the lab, researchers cue observers with the target image prior to the search stimulus. This method allows observers to create a specific search template, but it also activates short-term perceptual priming. In this experiment, we replaced image cues with name cues that had each been associated with a particular image. This method still allows observers to create a specific search template, but it eliminates short-term perceptual priming. 
Although our method eliminated short-term perceptual priming, it did not eliminate the effects of long-term priming. The training that caused our observers to pair each name cue with a particular image also caused these images to be detected more efficiently in the reef scenes. But because priming is independent of the observer's goal, the magnitude of this effect is the same whether or not the observer is looking for the target. Thus, any difference between the nonspecific cue conditions and the name cue conditions can be attributed to goal-directed selective attention. After controlling for the effects of long-term priming, we found a significant name cue advantage for targets that were identical to the studied images. A similar cueing advantage was also found for targets that were transformations of the studied image. The cueing advantage did not generalize, however, to targets that were members of the same subordinate-level category (same species) as the studied image. Because we controlled for potential confounds, we can confidently attribute this pattern of results to the selectivity of the search template. But this pattern is difficult to interpret without some independent measure of the perceptual similarity of our conditions. 
To aid in the interpretation of the results, we conducted another experiment that placed similar demands on the observer. This second experiment involved the same stimuli and the same task, but the name cues were replaced by image cues. The results from this experiment indicated that while the best image cues were those that were identical to the target, image cues were also effective when they were transformed versions of the target or members of the same species as the target. The finding that image cues benefited same-species targets suggests that the members of each species shared some distinctive features. Given this evidence for the perceptual similarity of the same-species images, the finding that name cues provided no benefit for this condition suggests that observers were using a search template that was relatively specific for the studied image. 
This evidence for a specific search template might seem to contradict research showing that selective attention can guide search by activating generic features like “red” or “vertical” and that it can influence processing as early as V1 (Corbetta & Shulman, 2002; Serences & Boynton, 2007; Wolfe, 2007). These earlier studies involved very simple stimuli, and it is possible that selective attention affects multiple levels in visual processing, with the task determining which effects dominate performance. When the target is a vertical rectangle located among horizontal rectangles, then it seems reasonable to adopt a selection strategy that favors processing of the generic feature “vertical.” In this case, the association between the feature and the target is both reliable (the target is always vertical) and diagnostic (the distractors are never vertical). But when the target is a complex object in a cluttered scene, then a strategy that involves favoring particular low-level features seems less viable. Because complex objects are associated with large numbers of features, it may be difficult to predict with any specificity which neurons in V1 are likely to be activated by the target. The observer could focus on a particular target feature, but since most features are not robust to changes in viewpoint, lighting and occlusion, it may be difficult to choose a feature that is reliably associated with the target. And because generic features are, by definition, common to many objects, it may also be difficult to choose a feature that is diagnostic of the target. For such targets, a selection strategy that involves individual features would seem to be far less effective than a selection strategy that involves patterns of features. We suggest that for everyday search tasks involving complex objects in cluttered scenes, the primary role of selective attention is to bias high-level representations of objects.
Compared with search experiments that use very simple stimuli, this study more closely approximates everyday search. But this study is still highly unnatural in one way that is particularly crucial. In everyday vision, observers learn objects, not images. If our observers had studied real fish, then each name would have been associated with the equivalent of an infinite number of images. These images would have differed not only in orientation and scaling, as in our transformation condition, but also in shape and color. The shape variation would arise because of the non-rigidity of the fish, which have flexible bodies and movable fins. The color variation would arise because fish photographed deep underwater appear to have different colors than fish photographed in a brightly lit aquarium. If we had trained observers on fish and not fish images, then it seems very likely that they would have developed a more flexible template, and this template might have accommodated members of the same species. Clearly, an important next step for future research is to develop methods for studying how observers search for objects rather than images. 
But if our method was biased toward overly specific templates, this only strengthens our finding that the template is tolerant to changes in scale and orientation. Even though the observers were trained on a single image, they automatically generalized their representation of this image across a mirror reflection and across a range of scales and orientations. At present it is unclear whether this generalization is typical of search templates or whether it occurred because observers were already familiar with the transformations of this category (fish often appear in different orientations). Regardless, our results indicate that observers can create a specific, transformation-invariant search template after learning a single image. 
Acknowledgments
We are grateful to Bill Whitlow for many helpful discussions of this work. 
Commercial relationships: none. 
Corresponding author: Mary J. Bravo. 
Email: mbravo@camden.rutgers.edu. 
Address: 301 North Fifth St., Camden NJ 08102, USA. 
References
Blough, D. S. (2002). Measuring the search image: Expectation, detection, and recognition in pigeon visual search. Journal of Experimental Psychology: Animal Behavior Processes, 28, 397–405.
Bond, A. B., & Kamil, A. C. (2002). Visual predators select for crypticity and polymorphism in virtual prey. Nature, 415, 609–613.
Bourke, P. A., & Duncan, J. (2005). Effect of template complexity on visual search and dual-task performance. Psychological Science, 16, 208–213.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Bundesen, C. (1990). A theory of visual attention. Psychological Review, 97, 523–547.
Chelazzi, L., Duncan, J., Miller, E. K., & Desimone, R. (1998). Responses of neurons in inferior temporal cortex during memory-guided visual search. Journal of Neurophysiology, 80, 2918–2940.
Chelazzi, L., Miller, E. K., Duncan, J., & Desimone, R. (1993). A neural basis for visual search in inferior temporal cortex. Nature, 363, 345–347.
Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3, 201–215.
Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193–222.
Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458.
Hamker, F. H. (2005). The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement. Cerebral Cortex, 15, 431–447.
Kristjánsson, A., Wang, D., & Nakayama, K. (2002). The role of priming in conjunctive visual search. Cognition, 85, 37–52.
Malinowski, P., & Hübner, R. (2001). The effect of familiarity on visual-search performance: Evidence for learned basic features. Perception & Psychophysics, 63, 458–463.
Maljkovic, V., & Nakayama, K. (1994). Priming of pop-out: I. Role of features. Memory & Cognition, 22, 657–672.
Mruczek, R. E., & Sheinberg, D. L. (2005). Distractor familiarity leads to more efficient visual search for complex stimuli. Perception & Psychophysics, 67, 1016–1031.
Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387–391.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Rajashekar, U., Cormack, L. K., & Bovik, A. C. (2004). Point of gaze analysis reveals visual search strategies. Human Vision and Electronic Imaging IX, 5292, 296–306.
Rao, R. P., Zelinsky, G. J., Hayhoe, M. M., & Ballard, D. H. (2002). Eye movements in iconic visual search. Vision Research, 42, 1447–1463.
Serences, J. T., & Boynton, G. M. (2007). Feature-based attentional modulations in the absence of direct visual stimulation. Neuron, 55, 301–312.
Shen, J., & Reingold, E. M. (2001). Visual search asymmetry: The influence of stimulus familiarity and low-level features. Perception & Psychophysics, 63, 464–475.
Stevens, M., Cuthill, I. C., Windsor, A. M., & Walker, H. J. (2006). Disruptive contrast in animal camouflage. Proceedings of the Royal Society B: Biological Sciences, 273, 2433–2438.
Tinbergen, N. (1960). The natural control of insects in pine woods: I. Factors influencing the intensity of predation by songbirds. Archives Néerlandaises de Zoologie, 13, 265–343.
Usher, M., & Niebur, E. (1996). Modeling the temporal dynamics of IT neurons in visual search: A mechanism for top-down selective attention. Journal of Cognitive Neuroscience, 8, 311–327.
Vickery, T. J., King, L. W., & Jiang, Y. (2005). Setting up the target template in visual search. Journal of Vision, 5(1):8, 81–92, http://journalofvision.org/5/1/8/, doi:10.1167/5.1.8.
Wang, Q., Cavanagh, P., & Green, M. (1994). Familiarity and pop-out in visual search. Perception & Psychophysics, 56, 495–500.
Wiggs, C. L., & Martin, A. (1998). Properties and mechanisms of perceptual priming. Current Opinion in Neurobiology, 8, 227–233.
Wolfe, J. M. (2007). Guided Search 4.0. In W. D. Gray (Ed.), Integrated models of cognitive systems (pp. 99–119). New York: Oxford University Press.
Wolfe, J. M., Horowitz, T. S., Kenner, N., Hyle, M., & Vasan, N. (2004). How fast can you change your mind? The speed of top-down guidance in visual search. Vision Research, 44, 1411–1426.
Zelinsky, G. J., Rao, R., Hayhoe, M., & Ballard, D. (1997). Eye movements reveal the spatio-temporal dynamics of visual search. Psychological Science, 8, 448–453.