Review  |  December 2011

Visual search: A retrospective

Miguel P. Eckstein

Journal of Vision December 2011, Vol. 11, 14. https://doi.org/10.1167/11.5.14
Abstract

Visual search, a vital task for humans and animals, has also become a common and important tool for studying many topics central to active vision and cognition, ranging from spatial vision, attention, and oculomotor control to memory, decision making, and rewards. While visual search often seems effortless to humans, trying to recreate human visual search abilities in machines has represented an incredible challenge for computer scientists and engineers. What are the brain computations that ensure successful search? This review article draws on efforts from various subfields and discusses the mechanisms and strategies the brain uses to optimize visual search: the psychophysical evidence, their neural correlates, and, where unknown, the possible loci of the neural computations. The mechanisms and strategies include the use of knowledge about the target, distractor, and background statistical properties, location probabilities, contextual cues, scene context, rewards, and target prevalence, as well as the role of saliency, the center–surround organization of search templates, and eye movement plans. I provide overviews of classic and contemporary theories of covert attention and eye movements during search, explaining their differences and similarities. To allow the reader to anchor some of the laboratory findings to real-world tasks, the article includes interviews with three expert searchers: a radiologist, a fisherman, and a satellite image analyst.

 

“There are official searchers, inquisitors. I have seen them in the performance of their function: they always arrive extremely tired from their journeys; they speak of a broken stairway which almost killed them; they talk with the librarian of galleries and stairs; sometimes they pick up the nearest volume and leaf through it, looking for infamous words. Obviously, no one expects to discover anything.”

 

The Library of Babel

 

Jorge Luis Borges

 
Who searches?
Everyone. Moreover, everyone searches all the time. 1 This might be considered a bold premise, since it is difficult to estimate the fraction of a day that a typical human engages in what researchers might refer to as a classically defined visual search for a target among distractors. The search for one's vehicle in a parking lot, keys in a living room, or a friend in a crowd would all likely qualify as visual search even under the strictest of definitions. However, these tasks hardly occupy the whole day. Yet, eye fixations and object localizations also anticipate motor actions: grabbing one's coffee cup and placing a key to open a door or start the car are all preceded by a quick fixation on the object (Figure 1a; Hayhoe & Ballard, 2005; Land, 2009; Land & Hayhoe, 2001; Mennie, Hayhoe, & Sullivan, 2007). If we consider these as short and relatively easy searches, then the statement “everyone searches all the time” becomes less far-fetched. In modern western societies, where humans spend a large portion of their time in front of display screens, a substantial amount of that time is spent searching for computer icons or phone applications, a keyword in a web page, or the protagonist in a new scene of a movie. Visual search is also central to a variety of important tasks in modern society (Figure 1b), ranging from an operator inspecting the fuselage of an airplane for cracks, to scrutiny of satellite images for national security or natural disaster monitoring, to doctors searching for abnormalities in a variety of medical imagery.
Figure 1
 
(a) Actions are anticipated by relatively easy searches. Reproduced from Hayhoe and Ballard (2005). Fixations made by an observer while making a peanut butter and jelly sandwich. Fixations are shown as yellow circles, with diameter proportional to fixation duration. Red lines indicate the saccades. Note that almost all fixations fall on task-relevant objects. (b) Photo interpreter during the Second World War scrutinizing an image taken by photoreconnaissance aircraft (reproduced from “To Fool a Glass Eye” by Stanley, 1988). (c) Finding prey in the savannah, a task vital for survival for many animals. (d) Image of a portion of NASA's Stardust collector plate scanned with an optical microscope.
For non-human animals, search is even more vital than for humans, since survival depends directly on finding food (Figure 1c) and avoiding predators. Non-human primates have foveated visual systems very similar to those of humans, while non-primate mammals vary widely in the organization of their retinas (Land & Nilsson, 2002). The varying resolution of visual processing across the visual field motivates organisms to make eye movements to point the high-resolution region of the eye toward objects of interest. However, irrespective of the characteristics of the organism's space-variant (or invariant) visual processing (Land, 1999), the three-dimensional nature of space and the uncertainty in the locations of food, prey, and predators require organisms to search for and localize targets.
In addition to its ecological relevance, visual search comprises a set of complex behaviors that encompass many aspects central to human visual and cognitive function. Human visual search involves oculomotor control through eye movements and fixations, covert visual attention, differences in visual processing across retinal locations, temporal integration of information across eye movements as search progresses, memory for scene configurations, and decision strategies. In this sense, studying visual search provides clues about how the brain coordinates a variety of functions. It is thus not surprising that it has been used as a framework to study many aspects of cognitive and visual function.
Finally, visual search by artificial entities is gaining a more important role in society. For example, systems in intelligent vehicles that can detect pedestrians are starting to be incorporated in high-end cars. Computer vision detection (Yang & Huang, 1994) and identification of faces as well as automated food inspection (Brosnan, 2004) have also become growing fields. Computer-aided detection of abnormalities in medical images such as X-ray mammograms is now used in three out of four mammograms read in US clinics (Rao et al., 2010). 
Even with the incredible advances in computer vision, it is fair to say that when humans are fully engaged in a task, they remain unsurpassed by machines for the majority of visual search tasks. As an illustrative example, in 2006, NASA's Stardust spacecraft's capsule returned to Earth with a plate containing an aerogel intended to collect particles of interstellar dust. Optical microscopes were used to scan the collector plates at different depths of field, generating more than 350,000 movies. The tiny size of the particles and their tracks relative to the collector plate size (1,000 cm²) makes this search through the images difficult, comparable to finding a handful of ants in a football field (Figure 1d). 2 Although the Berkeley Space Laboratory initially considered computer vision solutions, the search was deemed too challenging because: (1) algorithms required a large data set of images with known particles that were unavailable 3 ; (2) the collector plate contained flaws, cracks, and an uneven surface that could be confused with the dust particles (Figure 1d). Thus, instead of using computer vision, the Stardust Project (http://stardustathome.ssl.berkeley.edu/) embarked on the search for the dust particles by enlisting the help of thousands of human searchers. The users could download the images onto their computers and learn the task, which led to the successful finding of a subset of particles. How were humans able to learn this complex visual search task that represented such a challenging problem for machines? What set of tricks does the brain use? This is the starting point for this review article, which takes a different perspective than other thorough treatments of visual search (Findlay & Gilchrist, 2003; Nakayama & Martini, 2011; Verghese, 2001; Wolfe, 1998) by identifying the strategies and mechanisms that the brain utilizes to optimize visual search and discussing where in the brain these computations might be implemented. I cover progress in our understanding of visual search relying on a variety of subfields spanning cognitive psychology, visual psychophysics, computational modeling, animal neurophysiology, human electrophysiology, and neuroimaging. In addition, I devote separate sections to models of eye movements (overt attention) and models of shifts of attention decoupled from eye movements (covert attention). The article is subdivided into six sections: What limits visual search performance?; Brain strategies and mechanisms to optimize visual search; Models of covert attention during search; Models of eye movements during search; Visual search in the real world; and Epilogue: The Library of Babel, sardines in the Caribbean, and IBM's Deep Blue.
Accompanying the sections one would typically encounter in a review article are interviews with three expert searchers (a radiologist, a fisherman, and a satellite image analyst), which are intended to enrich the reader's understanding of the process of visual search, connect laboratory findings with the real world, and motivate future studies.
What limits visual search performance?
In trying to uncover and understand the strategies, computations, and neural mechanisms mediating visual search, a helpful starting point is to identify the factors limiting visual search performance. Why is visual search not always instantaneous, and why is it in some instances painfully slow? Why do we miss targets? And why do we sometimes incorrectly decide we have found a target when in fact we have not?
Foveated visual systems and the degraded vision of the periphery
The visual systems of primates and other animals (e.g., birds) process information inhomogeneously across the visual field. For humans and many animals, light falling in the foveal area receives preferential processing, leading to higher grating acuity (Wertheim, 1894; see adapted figures in Findlay & Gilchrist, 2003), vernier acuity (Levi, Klein, & Aitsebaomo, 1985), and contrast sensitivity (Rovamo, Leinonen, Laurinen, & Virsu, 1984). These benefits are explained in great part by the higher density of foveal photoreceptors compared to the visual periphery and by the relatively larger number of neurons dedicated to the fovea in the primary visual cortex and other areas (cortical magnification; Daniel & Whitteridge, 1961; Duncan & Boynton, 2003). For visual search, increasing the retinal eccentricity of search array elements reduces accuracy (Figure 2) for briefly presented displays (Geisler & Chou, 1995) and increases search times and the number of eye movements (Figure 2) for longer presentations of search arrays (Scialfa & Joffe, 1998), although there are a few notable exceptions 4 . The detrimental effect of adding distractors on performance (set-size effect) increases with the retinal eccentricity of the elements (Carrasco, Evert, Chang, & Katz, 1995; Scialfa & Joffe, 1998). In addition, crowding is known to be an important limiting factor in the visual periphery, leading to large increases in response times during search (Vlaskamp & Hooge, 2006). When the target in isolation is easily detectable in the periphery, crowding it with other elements decreases the number of saccades toward the target (Vlaskamp, Over, & Hooge, 2005).
Figure 2
 
Effect of retinal eccentricity on (top) search accuracy, (middle) reaction time, and (bottom) number of saccades for feature (orientation, contrast) and conjunction displays (TP = target present; TA = target absent displays). Reproduced from Scialfa and Joffe (1998). Conjunction displays are those for which the target can be distinguished from the distractors only by the joint presence of two features.
Visual clutter, which entails crowding, masking, decreased recognition performance due to occlusion, and greater difficulty at segmenting a scene, also affects visual search (Rosenholtz, Li, & Nakano, 2007). Periphery-limited vision motivates organisms to point the high-resolution fovea via eye and head movements to regions of interest in the search array or scene. Of course, if the task is very easy such as finding a red circle among green circles, then peripheral processing will suffice to detect the target and eye movements fixating the elements will not benefit performance (Eckstein, Beutter, & Stone, 2001; Klein & Farrell, 1989). 
Variability in the visual environment and uncertainty about target parameters
Even if an organism had no inhomogeneities in visual processing across the visual field, visual search would still not be error-free. For example, computer vision systems do not typically have foveated processing but can still fail to find or identify a target. Errors arise because the computer vision system's internal mathematical representation of the target does not always match the representation arising from the instance of the target in the image being searched. This can be due to variability in the lighting (Ming-Hsuan, Kriegman, & Ahuja, 2002), differing viewpoints of the target, the presence of other objects that obscure or block the target, image acquisition noise from various sources, unaccounted-for changes in the shape or visual properties of the target (Ming-Hsuan et al., 2002), or simple lack of prior knowledge in the computer vision system about the exact visual attributes of the target. Similarly, variability in the environment can make a distractor confusable with the target. For example, another object might partially block a distractor, revealing only a portion that is similar to the target and leading to a spurious target detection decision. Image acquisition noise might make a part of normal anatomy in a medical image mimic a tumor. All of these factors lead to search and identification errors. Biological organisms are confronted with the same obstacles as machines during visual search, and their performance will be limited by variability and uncertainty.
Stochastic neural processing leads to confusability of searched items
Even in the absence of variability in the visual environment and with the assumption that the organism has full knowledge about the target or target visual attributes, animals would still make search mistakes. Visual coding is limited by the stochastic nature of neural processing (Tolhurst, Movshon, & Dean, 1983). Due to this variability, there is always a probability that the firing rate of a neuron or a population of neurons to a distractor might be similar to the response or distribution of responses expected to be elicited by the target, resulting in a false alarm decision: incorrectly deciding that a target is present when it is absent or incorrectly identifying a distractor as a target. The inherent limitation of perceptual decisions due to stochastic noise is a well-established tenet of visual psychophysics (Green & Swets, 1989) and the neurophysiology of perceptual decisions (Parker & Newsome, 1998) but has been less prevalent in the search literature from the field of cognitive psychology. As we will cover in later sections, using stochastic neural processing as a starting point in analyzing human performance can have important ramifications in interpretations of why search performance degrades when increasing the number of distractors (set-size effects). 
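To make this concrete, the following minimal simulation (my own sketch, not from the article) shows how stochastic responses alone, with no capacity limit, produce a set-size effect under a signal detection "max rule": each item elicits a noisy scalar response, and the observer reports "target present" whenever the maximum response across items exceeds a criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def percent_correct(d_prime, set_size, criterion=2.0, n_trials=100_000):
    # Distractor responses are standard normal; the target adds d_prime to one item.
    absent = rng.normal(size=(n_trials, set_size))         # target-absent trials
    present = rng.normal(size=(n_trials, set_size))        # target-present trials
    present[:, 0] += d_prime                               # target at item 0
    hit_rate = (present.max(axis=1) > criterion).mean()
    false_alarm_rate = (absent.max(axis=1) > criterion).mean()
    return 0.5 * (hit_rate + (1.0 - false_alarm_rate))     # accuracy at 50% prevalence

for m in (1, 2, 4, 8, 16):
    print(f"set size {m:2d}: proportion correct = {percent_correct(2.0, m):.3f}")
```

Accuracy declines as set size grows simply because the maximum of more independent noise responses is more likely to exceed the criterion, with no serial bottleneck or capacity limit anywhere in the model.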
Limitations of covert attention and memory
Having to covertly attend (while maintaining fixation) to multiple locations or objects can have detrimental effects on perceptual performance due to capacity limitations. The nature of these capacity limitations has been a topic of extensive research (for a review, see Carrasco, 2011). Some have proposed that covert attention is temporally serial, processing one item at a time; thus, the more items in the search display, the longer it takes the covert serial processor to process all items (Treisman & Gelade, 1980). Others have proposed that processing is parallel but of limited capacity (Bundesen, Habekost, & Kyllingsbaek, 2005): Either the rate at which each item is processed is inversely related to the number of items processed in parallel, or the quality of processing of an item degrades as processing capacity is shared across multiple items. Another proposition is that, at least for many (but not all) search tasks, human performance simply reflects the stochastic nature of visual processing rather than being a consequence of a capacity limitation (Palmer, Verghese, & Pavel, 2000). The Visual search as a tool to study models of covert attention section will discuss these models in detail.
There is little argument that memory is limited in its capacity (Miller, 1956). If visual memory had no capacity or retrieval limits, our daily experience of visual search would be greatly facilitated: We would not waste minutes searching for our cars in parking lots or have trouble searching for a person whom we recently met. There are multiple memory systems with different limitations (Schacter & Tulving, 1994). The current article will not cover visual memory in detail; for an in-depth treatment, I refer the reader to a review in this 10th anniversary issue of JOV (Brady, Konkle, & Alvarez, 2011).
Brain strategies to optimize visual search performance
Given the factors that limit search performance, what are the strategies the human/animal brain implements in order to optimize search performance? In this section, I review a battery of different strategies and mechanisms that seem crucial in optimizing search performance. 
Saliency
The term saliency has unfortunately been used rather broadly in the literature to refer to: (1) the visibility of different regions in an image irrespective of the behavioral goals (bottom-up information), (2) a visibility measure integrating both bottom-up information and top-down task-relevant goals, or (3) as a general term for visibility, interchangeably referring to either of the above definitions. 
In this manuscript, I will refer to saliency as it was originally proposed (Itti & Koch, 2000): a calculation of the visibility of different regions within an image assuming a simplified, neurophysiologically based model and, importantly, without taking into consideration behavioral goals in search. In a later section, I will discuss in detail the merits of the original model in predicting where humans direct their eyes during search. Here, I will restrict the discussion to whether saliency might have some role in optimizing visual search. If we consider an impending search for a target or battery of targets, as many have shown (see Tatler, Hayhoe, Land, & Ballard, 2011 for a review), saliency in isolation might have little value in optimizing search. 5 However, the usefulness of saliency computations and models is that they might be a reasonable approximation of which regions of an image the brain might utilize in a task that has not yet been specified. If we think of performance across all possible tasks that one might do with an image or scene, it might be useful to foveate salient regions in an image rather than non-salient ones. This is consistent with recent findings showing that saccades with short latencies are driven more by saliency while those with longer latencies are influenced by behavioral goals (Stritzke, Trommershäuser, & Gegenfurtner, 2009). This might suggest that the brain is unable to use task-relevant information to plan short latency saccades and defaults to a saliency-driven strategy. Abrupt onsets and motion in the periphery, which typically trigger eye movements and capture attention (Hillstrom & Yantis, 1994; Jonides & Yantis, 1988; Schreij, Owens, & Theeuwes, 2008; Theeuwes, 1991; Yantis & Jones, 1991), can be cues signaling an approaching object or organism and could be considered part of an array of tasks that are permanently relevant: avoiding getting hit or eaten by another organism. However, with practice, capture by these transients can sometimes be reduced if they are task irrelevant (Folk, Remington, & Johnston, 1992; Folk, Remington, & Wright, 1994) or predictable (Ludwig, Ranson, & Gilchrist, 2008; but see Schreij et al., 2008).
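To give a flavor of such a computation, here is a toy sketch of my own (far simpler than the Itti and Koch model, which also includes color and orientation channels and an iterative normalization operator): a purely bottom-up map built from center–surround differences on an intensity channel at several spatial scales.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_saliency(image, center_sigmas=(1, 2, 4), surround_ratio=4.0):
    """image: 2-D grayscale array. Returns a map normalized to [0, 1].
    No task or target knowledge enters the computation."""
    image = np.asarray(image, dtype=float)
    saliency = np.zeros_like(image)
    for sigma in center_sigmas:
        center = gaussian_filter(image, sigma)
        surround = gaussian_filter(image, sigma * surround_ratio)
        saliency += np.abs(center - surround)   # center-surround contrast at this scale
    return saliency / saliency.max()
```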
Neural mechanisms: A number of investigators have suggested that various stages in neural processing produce a bottom-up saliency map, from primary visual cortex V1 (Li, 2002) to the parietal area LIP (Goldberg, Bisley, Powell, & Gottlieb, 2006; Gottlieb, Kusunoki, & Goldberg, 1998). However, many of the neurophysiology studies are consistent with behavioral studies suggesting that neuronal activity is diminished when the salient stimuli are behaviorally irrelevant (Ipata, Gee, Gottlieb, Bisley, & Goldberg, 2006). Similarly, human electrophysiology studies measuring electroencephalograms (EEGs) have demonstrated that salient features elicit a stronger event-related potential (ERP) component related to spatial attention, the N2pc, but only when they are task relevant (Schubö, Schröger, Meinecke, & Müller, 2007).
Knowledge about the visual properties of the environment
Knowledge of target visual properties
Observers' prior knowledge about the physical characteristics of a target is arguably one of the most important factors for efficient search 6 and will often dramatically improve detection performance. Expert searchers often cite target knowledge as one key factor in minimizing errors and successfully finding a target (e.g., see the interviews with expert searchers in this manuscript). Conversely, lack of knowledge (uncertainty) about the physical attributes of a target will slow search and lead to more search errors. For example, not knowing the color of your friend's t-shirt can make it much more difficult to find them in a concert venue.
In classic behavioral studies with birds, Tinbergen (1960) proposed that animals develop a "search image" containing the unique features characterizing prey, which allows them to search efficiently (Bond, 1983). Classic human psychophysics experiments have shown that uncertainty about the physical attributes of a signal, including its spatial frequency (Davis & Graham, 1981; Davis, Kramer, & Graham, 1983), spatial location (Cohn & Lasley, 1974; Cohn & Wardlaw, 1985), spatial/temporal phase (Burgess & Ghandeharian, 1984a; Eckstein, Whiting, & Thomas, 1996), and shape (Burgess, 1985; Figure 3a), will degrade detection performance. These findings generalize to multilocation search with target size uncertainty in white (Judy, 1995), correlated (Eckstein & Abbey, 2001), and structured noise (Castella et al., 2009; Zhang, Pham, & Eckstein, 2004), although the detriment to human performance from not knowing the size of the target is often less than that of an ideal observer, and in some instances subtle shape uncertainty (Castella et al., 2009) does not affect human performance.
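The ideal observer predictions in Figure 3a follow a standard signal detection formulation (the notation here is mine): to detect one of M equally likely possible signals $\mathbf{s}_m$ in white Gaussian noise of variance $\sigma^2$, the ideal observer reports "target present" when the likelihood ratio for the image data $\mathbf{g}$,

$$\Lambda(\mathbf{g}) \;=\; \frac{1}{M}\sum_{m=1}^{M}\exp\!\left(\frac{\mathbf{s}_m^{\top}\mathbf{g} \;-\; \tfrac{1}{2}\lVert\mathbf{s}_m\rVert^{2}}{\sigma^{2}}\right),$$

exceeds a criterion. For M = 1 this reduces to a single matched filter (template) applied to the image; as M grows, the evidence is diluted across possible signals and predicted performance drops, which is the pattern of the continuous lines in Figure 3a.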
Figure 3
 
(a) Effect of target knowledge on signal detection in white noise. Index of detectability as a function of signal-to-noise ratio (increasing signal contrast) for two human observers (AB and RA) for a task in which the signal shape is known (SKE—signal known exactly) and a condition in which the signal is one out of 10 possible signals (M = 10) and not known to observers. Predictions from an ideal observer—continuous lines—are shown for comparison: M = 1 (SKE), M = 10, and M = 100 (from Burgess, 1985). (b) Effects of the type of target cue (identical to target, transformed image of target, an image of another fish from the same species) on reaction time (cue advantage in ms) finding a target fish in images (Bravo & Farid, 2009). (c) Estimated templates (classification images) for saccades and perception during search for a Gaussian target in noise. Top image: Raw classification images. Bottom image: Radial profiles of estimated templates fit with difference-of-Gaussians functions and overlaid with the Gaussian target's radial profile (Eckstein et al., 2007). (d) Spatial frequency and orientation content of estimated receptive fields in V4 during overt search for different targets (T1 and T2). Rightmost box shows the difference between the two receptive fields in terms of high spatial frequency content (from Mazer & Gallant, 2003). (e) Posterior parietal cortex of human (left) and macaque monkey (right). (Left) The human posterior parietal cortex (PPC) is divided by the intraparietal sulcus (IPS) into the superior parietal lobe (SPL) and the inferior parietal lobe (IPL). (Right) The lunate and intraparietal sulci are opened up to show the locations of several extrastriate areas in addition to the visually responsive areas within the intraparietal sulcus. These include the parieto-occipital area (PO), the posterior intraparietal area (PIP), the medial intraparietal area (MIP), the lateral intraparietal area (LIP), the ventral intraparietal area (VIP), and the anterior intraparietal area (AIP). Figure from Bisley and Goldberg (2010). Adapted from Colby, Gattass, Olson, and Gross (1988) and Husain and Nachev (2007).
In recent years, investigators have extended these results from accuracy to search times and from noise-limited imagery to images of real objects (Bravo & Farid, 2009; Vickery, King, & Jiang, 2005). Reducing target uncertainty by showing a preview picture of the target reduces search times (Figure 3b; Bravo & Farid, 2009; Vickery et al., 2005). The benefit to search times depends on whether the picture is a replica of the target or merely belongs to the class of objects from which the target is drawn (Bravo & Farid, 2009). Actual pictures of the targets reduce the time to find a target more than word cues do (Wolfe, Horowitz, Kenner, Hyle, & Vasan, 2004). Search templates are target-specific yet relatively tolerant to changes in scale and orientation (Bravo & Farid, 2009), although such transformations can also incur some search time cost (Vickery et al., 2005).
Eye movements during search are also guided toward the target (Williams, 1966, 1967) and are also directed more often toward distractors that share some physical attribute with the target than those that do not (Findlay, 1997; Motter, 1994). In fact, as with perceptual decisions, the accuracy of the saccades is modulated by the signal-to-noise ratio (the detectability or discriminability of the target in the periphery; Eckstein et al., 2001; Findlay, 1997). 
Investigators have used a technique known as classification images (Murray, 2011) to estimate, from the noise samples in the image and observers' decisions, the underlying spatial features used by observers to direct eye movements and make perceptual decisions during search. These studies confirm that both human perceptual decisions and eye movements during search are indeed mediated by mechanisms that use the spatial properties of the target (Figure 3c; Eckstein, Beutter, Pham, Shimozaki, & Stone, 2007; Ludwig, Eckstein, & Beutter, 2007; Rajashekar, Bovik, & Cormack, 2006; Tavassoli, van der Linde, Bovik, & Cormack, 2009), although there are limits to the flexibility of the template in the visual periphery (Ludwig et al., 2007). Recent studies have investigated eye movement patterns with real-world scenes, showing that the specificity of a pre-cue of the target affects the number of scene regions visited and the mean fixation duration (Malcolm & Henderson, 2009, 2010; also see Castelhano & Heaven, 2010).
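The core of the technique can be sketched in a few lines (a minimal version of my own; published analyses additionally separate trials by stimulus class and weight the resulting sub-images): sort the trial-by-trial noise fields by the observer's binary decision, and the difference of their means estimates the spatial template that drove those decisions.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """noise_fields: (n_trials, height, width) noise added to the display on each trial.
    responses: (n_trials,) boolean, True where the observer reported "target present".
    Returns mean noise on "yes" trials minus mean noise on "no" trials."""
    noise_fields = np.asarray(noise_fields, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    return noise_fields[responses].mean(axis=0) - noise_fields[~responses].mean(axis=0)
```

The same logic extends to saccades by treating the location an eye movement selects as the "chosen" item and contrasting the noise at chosen versus non-chosen locations.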
Neural mechanisms: Where are the neural mechanisms coding target properties? There is evidence from monkey physiology that neurons in various cortical areas code the physical attributes of the target. Importantly, unlike lower visual areas (e.g., V1, MT), in which neurons typically respond to fixed physical attributes (e.g., a specific orientation or motion direction), neurons in search-related areas (V4, frontal eye fields, lateral intraparietal area, and superior colliculus) will fire to behaviorally relevant target properties irrespective of the physical attribute (Ptak, 2011). Neurons in area V4 (Figure 3e) respond to the color defining the target (Motter, 1994) but also to the spatial frequency characteristics of the target (Figure 3d), mimicking a matched filter (Mazer & Gallant, 2003) and paralleling the human templates estimated from classification image studies (Eckstein et al., 2007). Neurons in the frontal eye fields (FEFs) respond more strongly to a color when it is contained in the sought target (Schall & Hanes, 1993) than when it belongs to a distractor. Similarly, the lateral intraparietal cortex (LIP; Figure 3e, right) responds to visual attributes if these belong to a target rather than a distractor (Toth & Assad, 2002; for a review, see Bisley, Mirpour, Arcizet, & Ong, 2011). Superior colliculus (SC) cells also show selectivity for targets whether they are defined by color (McPeek & Keller, 2002) or motion (Krauzlis & Dill, 2002). Inactivation of each of these areas results in behavioral deficits (SC: Lovejoy & Krauzlis, 2010; FEF: Monosov & Thompson, 2009; LIP: Balan & Gottlieb, 2009). For example, LIP inactivation will slow search for a target among distractors (Balan & Gottlieb, 2009; Wardak, Olivier, & Duhamel, 2004).
Whether these neurons respond to visual stimuli falling in their respective receptive fields or whether their activity reflects the preparation of an eye movement to the location of the stimuli has been a topic of debate for all three areas: FEF, SC, and LIP. Some FEF neurons will respond only when an eye movement is executed toward the target, but others will still fire to the target even without an eye movement (Schall, 2004; Thompson, Bichot, & Schall, 1997). For SC, inactivation will impair perceptual judgments (Lovejoy & Krauzlis, 2010), suggesting that its activity is not simply linked to saccade programming.
All four areas, FEF (Thompson & Bichot, 2005), LIP (Bisley & Goldberg, 2010; Gottlieb et al., 1998), SC (Fecteau & Munoz, 2006), and V4 (Mazer & Gallant, 2003), have been proposed as a priority map 7 with the maximum activity indicating the destination of the next saccade and possibly the final choice of a search decision. 
Human EEG studies have also identified target-related neural activity. Studies have shown that the presence of a target in the midst of distractors elicits a large-amplitude ERP component approximately 200 ms after stimulus onset (N2pc; Eimer, 1996; Luck & Hillyard, 1994a, 1994b) and also an earlier one at 100 ms (P1; Luck & Hillyard, 1994b). Eimer, Kiss, and Nicholas (2011) have shown that specifying target-defining features in advance of search modulates ERP components associated with spatial attention (N2pc) and working memory processing (SPCN).
Functional magnetic resonance imaging (fMRI) studies suggest that a frontoparietal network, including parts of the intraparietal cortex (Figure 3e, left) and superior frontal cortex, is involved in goal-directed selection of stimuli (Corbetta & Shulman, 2002; Giesbrecht, Woldorff, Song, & Mangun, 2003). This selection of visual attributes (e.g., color) might be mediated by the posterior parietal cortex (PPC; Figure 3e, left; Greenberg, Esterman, Wilson, Serences, & Yantis, 2010). Recent work suggests that the intraparietal sulcus (IPS; Figure 3e, left) might be involved in the integration of spatial and feature information (Egner et al., 2008) and that it also mediates the coding of the presence of targets in natural scenes (Guo, Das, Giesbrecht, & Eckstein, 2010).
Knowledge of distractors and noise statistical properties
The presence of other visual forms that resemble the target, often referred to as distractors, can lead to incorrect decisions such as concluding that a target is present when it is absent (false positives or false alarms) or mislocalizations of the target. In addition, the presence of distractors can slow search by requiring observers to spend time scrutinizing potential targets that are then rejected. Distractors can be an animal that contains visual features of a predator, cars in the parking lot that have a similar color to the sought car, or part of normal anatomy in a medical image that mimics a lesion. 
Knowledge of the physical properties of the distractors present in the displays, and of the visual attributes that distinguish them from the target, can improve search performance. Conversely, distractor heterogeneity will typically degrade search accuracy (Avraham, Yeshurun, & Lindenbaum, 2008; Duncan & Humphreys, 1989; Nagy, Neriani, & Young, 2005; Rosenholtz, 2001; but see Nagy & Thomas, 2003; Vincent, Baddeley, Troscianko, & Gilchrist, 2009 for exceptions). Rosenholtz (2001) showed that adding distractor variability by replacing a subset of distractors with new distractors that are more discriminable from the target still degraded search performance. Oddity search is a particular type of search in which the target is defined solely as being different from the distractors, and the observer does not know a priori which item is the target and which are the distractors. Oddity search will typically lead to worse performance than a condition in which the observer knows in advance the features defining the distractors and target 8 (Schoonveld, Shimozaki, & Eckstein, 2007).
When searching for targets in noise, observers also seem to use strategies that take into account the statistical properties of the noise to optimize performance. The observer's perceptual template will compensate for the statistical characteristics of the noise. For example, in the presence of noise with more energy in the lower spatial frequencies (low-pass noise), the optimal strategy is to give more weight to the higher frequencies in the target. Psychophysical studies (Burgess, Li, & Abbey, 1997; Rolland & Barrett, 1992) and classification image studies (Abbey & Eckstein, 2007) indicate that humans can modify their perceptual templates to compensate for both the spatial frequency content and the orientation energy of the noise (Zhang, Abbey, & Eckstein, 2006), albeit not optimally.
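The optimal strategy alluded to here is the prewhitening matched filter: for a known target in stationary Gaussian noise, the best linear template divides the target's spectrum by the noise power spectrum, down-weighting frequencies where the noise carries the most energy. A minimal sketch (the function and its assumptions are mine):

```python
import numpy as np

def prewhitened_template(target, noise_power_spectrum, eps=1e-8):
    """target: 2-D array (signal profile); noise_power_spectrum: 2-D array of the same
    shape giving noise power at each spatial frequency, in np.fft.fft2 ordering."""
    T = np.fft.fft2(target)
    W = T / (noise_power_spectrum + eps)   # suppress frequencies where noise is strong
    return np.real(np.fft.ifft2(W))        # optimal linear template, space domain
```

For low-pass noise this template emphasizes the target's high-frequency components, which is the compensation the psychophysical and classification image studies report humans approximating.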
A second category of distractors are those that do not share visual properties with the target but have abrupt onsets (Theeuwes, 1991; Yantis & Jonides, 1990) or have very salient features (Theeuwes & Burger, 1998) that will capture attention and trigger an eye movement toward them. These distractors can have detrimental effects on search performance but, with practice, can be overcome (Folk et al., 1992; Geyer, Müller, & Krummenacher, 2008; but for limits, see Schreij et al., 2008). 
Neural mechanisms: There are fewer studies concentrating on the impact of distractor variability, observer knowledge, and uncertainty about distractors on neural activity. Schubö, Wykowska, and Müller (2007) have shown that an ERP component that is triggered by the presence of a search target (N2pc) is reduced as the distractors become more heterogeneous. In addition, fMRI activity increases in the precuneus, cingulate gyrus, and the middle temporal gyrus as homogeneity of surrounding distractors increases (Schubö, Akyürek, Lin, & Vallines, 2011). 
How might the brain actively use information about distractors that resemble the target to enhance search? Models provide hints as to possible mechanisms that either combine the responses of elementary receptive fields (Shimozaki, Eckstein, & Abbey, 2003a) or sample tuning functions (Navalpakkam & Itti, 2007), taking into account knowledge of the distractors to optimize visual search performance. LIP would seem a plausible location where neurons could become less sensitive to visual attributes that are shared between a target and a distractor. Ipata et al. (2006) have shown that LIP neurons suppress activity to highly visible (popout) distractors and that the suppression correlates with the monkey's ability to ignore the salient distractor and make an eye movement to the target. Although the study was conducted with salient distractors that were not easily confusable with the target, it seems likely that LIP could also suppress neural activity to properties of the target that are shared with the distractors. Human electrophysiology studies have also found an ERP component (Ptc, a positive component with a more temporal scalp distribution than the N2pc) that might be related to the suppression of distractor processing (Hickey, Di Lollo, & McDonald, 2009; Hilimire, Mounts, Parks, & Corballis, 2011). Finally, the brain areas that optimize receptive fields to suppress the spatial frequencies or orientations prevalent in the background image noise, which should therefore be weighted less, remain unknown. One possibility is that this occurs as early as V4, given that Mazer and Gallant (2003) have documented shifts in the spatial frequency of receptive fields to match the target's spatial frequency properties. There is also evidence that, as early as the retinal ganglion cells of rabbits and salamanders, an adaptation process to the statistical properties of the visual environment makes receptive fields less responsive to prevalent visual information (Hosoya, Baccus, & Meister, 2005). Such an adaptation process might, in principle, account for some of the suppression of the orientations and frequencies most prevalent in backgrounds.
Statistical regularities that reduce uncertainty of the target location
Uncertainty regarding the spatial position of a target is at the core of what makes visual search difficult. This is particularly true when the target detectability is low (due to low contrast or high image noise) or when it is surrounded by confusable distractors. Even if the task requires only determining the presence of the target and does not explicitly require the observer to localize the target, uncertainty about target location often has detrimental effects on performance. On the other hand, any statistical regularity in the environment that can potentially reduce the uncertainty about the location of a target will often improve search performance. 
Target probabilities varying across locations and predictive cues
When a location or a subset of locations has a higher probability of containing the target, search performance often improves (Miller, 1988; search accuracy: Druker & Anderson, 2010; Geng & Behrmann, 2005; Vincent, 2011b). In addition, eye movements are more often directed to locations with a high probability of containing the target (Walthew & Gilchrist, 2006). Although early evidence suggested that the effect is entirely related to repetition of trials with the target at the same location (repetition priming; Maljkovic & Nakayama, 1996), that view has recently been challenged (Druker & Anderson, 2010).
If a target often co-occurs with other highly visible elements (cues), human target detection performance is also often facilitated (e.g., search times: Posner, Snyder, & Davidson, 1980; accuracy: Luck et al., 1994; Palmer, Ames, & Lindsey, 1993; Smith & Ratcliff, 2009). In these studies, the observer is often informed about the relationship between the cue and the target presence. A large number of studies have demonstrated how predictive pre-cues or simultaneously presented cues improve detection, identification, and localization performance while observers are maintaining fixation without making eye movements (Busey & Palmer, 2008; Cameron, Tai, Eckstein, & Carrasco, 2004; Eriksen & Yeh, 1985; for a review, see Carrasco, 2011). 
When eye movements are allowed in difficult tasks limited by the background variability, cues that reduce the uncertainty about target location will also improve localization accuracy (white noise: Burgess & Ghandeharian, 1984b; Swensson & Judy, 1981; structured medical image backgrounds: Bochud, Abbey, & Eckstein, 2004; Eckstein & Whiting, 1996). 
If the cues have varying probabilities of predicting the presence of the target, multiple-fixation search performance will improve when the target appears with a high-probability cue rather than a low-probability cue (Droll, Abbey, & Eckstein, 2009). Saccades will also be more frequently directed toward synthetic cues that are predictive of the location of a target, whether observers are instructed to saccade to the target (Liston & Stone, 2008) or simply to search for it without any specification of an eye movement plan (Droll et al., 2009).
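One standard way to formalize how such location probabilities and cue validities should combine with sensory evidence (a textbook Bayesian formulation; the notation is mine) is to weight the likelihood of the noisy response $g_i$ at each of the N locations by that location's prior probability $\pi_i$ of containing the target:

$$P(\text{target at } i \mid g_1,\ldots,g_N) \;\propto\; \pi_i \, p(g_i \mid \text{target}) \prod_{j \neq i} p(g_j \mid \text{distractor}).$$

An observer who localizes the target at the location with the highest posterior will be more accurate at high-probability or validly cued locations, qualitatively matching the cueing effects reviewed above.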
Contextual cuing
In the contextual cuing paradigm, a target letter is embedded among distractors (a T among Ls; Figure 4a), and a given distractor configuration with a fixed target position is repeated among trials of random distractor configurations and target positions. With repeated presentations, the time to locate targets in previously viewed configurations decreases (e.g., Figure 4b; Brockmole, Hambrick, Windisch, & Henderson, 2008; Chun, 2000; Chun & Jiang, 1998; Hollingworth, 2009).
Figure 4
 
(a) Contextual cuing experiment where observers search for a T among Ls (reproduced from Chun, 2000). (b) Reaction time finding the target as a function of practice (epoch) for novel display configurations of target and distractors (green) and repeated configurations of target and distractors (Chun, 2000). (c) Scene context constrains the position of objects. Search for jeeps and helicopters (Neider & Zelinsky, 2006). (d) Object co-occurrence (chimney and house) can also guide search. If the search object appears at an unexpected location, it can have a detrimental effect on search performance, and eye movements (points) are often still directed to the contextual location (Eckstein, Drescher, & Shimozaki, 2006). (e) Search for people in real scenes. Bottom left image: Human fixations compared to a pure saliency model and a full model that includes contextual information about possible target locations (Torralba et al., 2006).
Similar improvements in visual search performance have been found after repeated presentation of search targets (a T or L) embedded at fixed positions within specific natural images (Brockmole & Henderson, 2006a; Ehinger & Brockmole, 2008). In addition, eye movements become increasingly directed toward the contextual locations predictive of the targets whether these are synthetic backgrounds (Peterson & Kramer, 2001) or real scenes (Brockmole & Henderson, 2006). Although contextual cuing has typically been treated as a very different effect than regular cuing, recent studies suggest that observers learn contextual information local to the target location (Brady & Chun, 2007; Jiang & Wagner, 2004), which might argue that contextual cuing is more similar to learned predictive spatially local cues than previously thought (but see Brockmole, Castelhano, & Henderson, 2006 for evidence for global processing). Temporal relationships among items and the target can also be learned and reduce search times (Olson & Chun, 2001). 
In a broader sense, context can also facilitate target detection when the target co-occurs in its natural (highly probable) context. Here, it is not an instance of a relationship between two items that is learned but rather a general relationship between visual patterns based on the organisms' experience with the statistics of the visual world. For example, a brief motion trajectory in moving dot noise is easier to detect when it occurs as a continuation of a preceding motion trajectory (Verghese & McKee, 2002), and a target patch is easier to detect when it is a continuation of a smooth contour (Verghese, 2009). 
Scene context and object–object co-occurrence
The structure of scenes and the naturally occurring locations of objects also constrain the possible or likely locations of targets. For example, when looking for a helicopter in a desert scene, one is more likely to find the target in the sky than on land (Neider & Zelinsky, 2006; Figure 4c). Thus, rapid extraction of scene gist can reduce the uncertainty about the spatial location of the target. Indeed, eye movements are often guided toward expected locations in an image for a variety of natural scene types (Figures 4d and 4e; Castelhano & Heaven, 2010; Droll & Eckstein, 2008; Ehinger, Hidalgo-Sotelo, Torralba, & Oliva, 2009; Hidalgo-Sotelo, Oliva, & Torralba, 2005; Neider & Zelinsky, 2006; Oliva, Torralba, Castelhano, & Henderson, 2003; Torralba, Oliva, Castelhano, & Henderson, 2006). The search facilitation is not restricted to global scene context. The presence of an object that tends to co-occur with the target can also guide eye movements and facilitate search (Figure 4d; Castelhano & Heaven, 2011; Eckstein, Drescher, & Shimozaki, 2006; Mack & Eckstein, 2011), even if that object is semantically inconsistent with the scene (Castelhano & Heaven, 2011) and even when the search is conducted in a three-dimensional environment rather than with two-dimensional images (Mack & Eckstein, 2011).
Neural mechanisms: There is a large literature on how predictive cues, in the absence of eye movements, change neural activity. Cues modulate sensory-evoked responses in regions of visual cortex that represent cued locations, objects, and features (Brefczynski & DeYoe, 1999; Corbetta, Miezin, Dobmeyer, Shulman, & Petersen, 1990, 1991; Heinze et al., 1994; Maunsell & Cook, 2002; Shulman et al., 1999; Yantis et al., 2002). ERP studies have shown that the modulation can occur within 100 ms of the presentation of the cued stimulus (Luck et al., 1994; Mangun & Hillyard, 1990, 1991; Martinez et al., 1999; Voorhis & Hillyard, 1977). Moreover, regions of visual cortex show modulations in response to an attended cue before the presentation of the stimulus (Chawla, Rees, & Friston, 1999; Gandhi, Heeger, & Boynton, 1999; Giesbrecht, Weissman, Woldorff, & Mangun, 2006; Hopfinger, Buonocore, & Mangun, 2000; Kastner, Pinsk, De Weerd, Desimone, & Ungerleider, 1999; Ress, Backus, & Heeger, 2000), as do several higher-order areas (IPS; FEF; superior parietal lobule, SPL; prefrontal cortex, PFC; Corbetta, Kincade, Ollinger, McAvoy, & Shulman, 2000; Giesbrecht et al., 2003; Hopfinger et al., 2000; Kastner, De Weerd, Desimone, & Ungerleider, 1998; Kelley, Serences, Giesbrecht, & Yantis, 2008; Slagter et al., 2007; Yantis et al., 2002). For a thorough review of the effects of cues on neural activity, see Carrasco (2011); for the relationship between cue-related neural activity and theoretical models, see Eckstein, Peterson, Pham, and Droll (2009).
In the context of multielement search and target selection by saccades, SC single-unit activity decreases as the number of potential saccade targets increases (Basso & Wurtz, 1997). Similarly, FEF firing rate during search for a T among Ls also decreases with an increasing number of distractors (Cohen, Heitz, Woodman, & Schall, 2009). Luck and Hillyard (1990) showed that the amplitude of the P3 component, thought to reflect processes involved in stimulus evaluation or categorization, decreases with increasing set size in target-present trials for a difficult search. A recent human fMRI study has shown that activity in the intraparietal sulcus (IPS) and superior precentral sulcus (SPS) diminishes with an increasing number of distractors (Jerde, Ikkai, & Curtis, 2011).
In relation to contextual cuing, fMRI studies have shown that activation in the anterior prefrontal cortex, an area associated with high level executive functions, increases when the target location changes in displays with repeated distractor configurations (Pollmann & Manginelli, 2009). Patient (Chun & Phelps, 1999) and fMRI studies (Preston & Gabrieli, 2008; Turk-Browne, Scholl, Johnson, & Chun, 2010) also have related hippocampal activity to contextual cuing and processing statistical regularities in the environment. 
Optimized general properties of the visual system
Eye movements that take into account target visibility maps
The variation of target detection or discrimination performance with retinal eccentricity (i.e., the visibility map) depends both on properties of the visual system and on the characteristics of the stimuli and the perceptual task. For example, if the task consists of contrast discrimination of a low spatial frequency target, then human task performance will be relatively unchanged with retinal eccentricity. On the other hand, if the task involves the detection of a high spatial frequency Gabor stimulus or an acuity judgment, then retinal eccentricity will have a large detrimental effect on accuracy and search times. Thus, the contributions of eye movements and foveation of points of interest to search performance depend on the task. Furthermore, for a given visibility map and task configuration, one can theoretically work out the optimal eye movement plans (ideal searcher) that maximize perceptual performance (Geisler & Cormack, 2011; Najemnik & Geisler, 2005). Recent studies have demonstrated that humans can take into account the varying visibility of a target across their visual field when planning saccades (but see Verghese, 2010 for exceptions). For example, measurements have shown that the degradation of target detectability with retinal eccentricity is not constant across radial directions (it is anisotropic; Carrasco, Talgar, & Cameron, 2001), with a steeper decline along the vertical than the horizontal meridian. This lower detectability in the upper and lower fields leads humans to saccade to these areas more frequently during visual search in 1/f noise (Figures 5a–5e; Najemnik & Geisler, 2008). The saccade lengths of humans are also compatible with a model that takes into account the target's visibility map (Najemnik & Geisler, 2008).
Figure 5
 
Relative frequency of saccade landings for an Ideal Searcher (a), a Maximum a posteriori probability model (b), and (c,d) two individual observers (JN and WFG) and their (e) combined data (from Najemnik & Geisler, 2005). (f) Virtual evolution of perception and saccade templates built as linear combinations of V1 cells. Figure shows three scenarios with three different targets (1st row), the initial templates of a random individual at the initial stage of the virtual evolution (2nd row), and the evolved templates (3rd row; from Zhang & Eckstein, 2010).
Neural mechanisms: We currently do not know whether neurons in LIP, SC, or FEF take into account the visibility of a target across eccentricities in their computations. A fully optimal computation seems unlikely; more plausible is the implementation of a computationally simpler model known as the Entropy Limit Minimization (ELM) searcher (Najemnik & Geisler, 2009), which, under certain circumstances, makes similar predictions to the ideal searcher. This ELM model might be implemented by local pooling of LIP neurons with a spatial pooling weighting function that corresponds to the square of the steepness of the visibility map (Najemnik & Geisler, 2009).
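In the ELM rule (paraphrasing Najemnik & Geisler, 2009; the notation is mine), the next fixation is the point that maximizes the current posterior over target locations weighted by the squared visibility of each location from that candidate fixation:

$$k_{t+1} \;=\; \arg\max_{k}\; \sum_{i} p_i(t)\, d'^{\,2}(i, k),$$

where $p_i(t)$ is the posterior probability after $t$ fixations that the target is at location $i$, and $d'(i,k)$ is the detectability of the target at location $i$ when fixating $k$. It is this simple weighted-summation form that makes an implementation by local pooling of neural responses plausible.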
Center–surround organization of search templates
Human saccades while viewing natural scenes are best predicted by center–surround mechanisms (Kienzle, Franz, Schölkopf, & Wichmann, 2009). Classification image studies with white noise show that the templates driving perception and saccades during search use information about the target but also include an inhibitory surround not present in the target luminance profile (e.g., Figure 3c; Abbey & Eckstein, 2002; Eckstein et al., 2007; Solomon, 2002). For the case of search in white noise, these inhibitory surrounds are somewhat perplexing because they are suboptimal: they reduce the match between the human template and that of an ideal observer. One possible explanation for the presence of the inhibitory surrounds is that they are a by-product of a visual system optimized for some other criterion. A number of theories have been proposed to explain the center–surround organization of cells. General edge enhancement has been proposed as a desirable property of center–surround cells (Balboa & Grzywacz, 2000). Others have attributed the center–surround organization to the decorrelation ("whitening") of neuronal responses to natural images (Atick & Redlich, 1992; Srinivasan, Laughlin, & Dubs, 1982). Graham, Chandler, and Field (2006) suggest that the center–surround organization has unique properties leading to sparse coding. 9 Thus, the inhibitory surrounds of templates in white noise might simply reflect the optimization of one of these principles: sparseness in coding, detection of edges, or decorrelation. Recent studies using iterative optimization techniques (e.g., virtual evolution) have shown that inhibitory surrounds not present in the target optimize detection of the target when it is added to or superimposed on images of natural scenes (rather than white noise; Zhang, Abbey, & Eckstein, 2009; Zhang & Eckstein, 2010). These findings suggest that the apparent suboptimality of inhibitory surrounds in human behavioral receptive fields when searching for a target in white noise might reflect a strategy to optimize detection of signals in natural scenes.
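For concreteness, the difference-of-Gaussians (DoG) form used to fit the estimated templates in Figure 3c can be generated in a few lines (a sketch of mine with illustrative parameter values): an excitatory Gaussian center minus a broader, scaled inhibitory surround.

```python
import numpy as np

def dog_template(size=64, sigma_center=4.0, sigma_surround=10.0, surround_gain=0.6):
    """Return a 2-D difference-of-Gaussians template: excitatory center minus a
    broader inhibitory surround. Parameter values are illustrative only."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = x**2 + y**2
    center = np.exp(-r2 / (2.0 * sigma_center**2))
    surround = np.exp(-r2 / (2.0 * sigma_surround**2))
    return center - surround_gain * surround
```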
Neural mechanisms: Center–surround organization of cells in early vision is prevalent in a wide range of vertebrates and invertebrates (Land & Nilsson, 2002). Inhibitory surrounds are prevalent in receptive fields in retinal ganglion cells, the lateral geniculate, and primary visual cortex. Dynamic adaptation of receptive field properties to the statistics of the visual environment has been observed in the rabbit lateral geniculate and the salamander retina (Hosoya et al., 2005), providing a possible mechanism to tune the surrounds to suppress prevalent spatial frequencies of natural scenes. It is possible that these inhibitory surrounds are reflected in areas involved in visual search, including V4 and LIP. It is also feasible that the inhibitory surround is modulated by task demands beyond an adaptation process, given recent evidence that attention changes the center–surround organization of cells in MT (Anton-Erxleben, Stephan, & Treue, 2009). 
Similar mechanisms for perception and saccadic eye movements
Lesion studies with macaque monkeys (Snyder, Batista, & Andersen, 2000; Ungerleider & Mishkin, 1982) and patient studies (Goodale & Milner, 1992; Milner, 1997) support the concept of two functionally distinct neural pathways in the brain mediating the processing of visual information. Perceptual decisions are driven by the ventral stream projecting from the primary visual cortex to the inferior temporal cortex, while action is mediated by the dorsal stream projecting from the primary visual cortex to the posterior parietal cortex (Goodale & Milner, 1992; Milner, 1997). A prominent theory is that these two different pathways (ventral/dorsal streams) give rise to two distinct neural representations of the visual world in the brain (Goodale, Milner, Jakobson, & Carey, 1991). Others have suggested that the two pathways share much of the processing and lead to common representations for perception and action (Gegenfurtner, Xing, Scott, & Hawken, 2003; Krauzlis & Stone, 1999). 
In the context of visual search, eye movement actions are used to sample the visual world to subserve final search decisions, and thus, oculomotor actions and perception are intricately linked. What, then, might be the consequences for search performance if the brain used different target representations (templates) to decide where to move the eyes and to make a final search decision? Theoretical analyses using an approximation to an ideal Bayesian searcher (the ELM model) for multiple eye movement search with two separate streams, one controlling the eye movements and the other determining the perceptual search decisions, provide some answers. Zhang and Eckstein (2010; Figure 5f) used an iterative method to virtually evolve the neural mechanisms (linear templates) of the searchers' two separate pathways, built from linear combinations of primary visual cortex (V1) receptive fields, by making the simulated individuals' probability of survival depend on their perceptual accuracy finding targets in various backgrounds (white noise, filtered noise, and natural scenes). They found that for a variety of targets, backgrounds, and dependences of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations (Figure 5f), showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. However, do humans use similar templates for perceptual decisions and saccades during search? The classification image technique (Murray, 2011) can be used to estimate the templates underlying human saccades during search and those underlying a perceptual decision equated for visual processing time and the elements' retinal eccentricity prior to an eye movement. Results show that humans use similar templates for saccades and perceptual decisions (Figure 3c), with the intraindividual saccade/perception template differences being smaller than the variability of the templates across observers (Eckstein et al., 2007). Together, these findings suggest that, for the case of visual search, the templates mediating perceptual decisions and eye movements are similar, which is what should be expected of an organism trying to optimize search performance (Zhang & Eckstein, 2010). 
Neural mechanisms: The finding that humans use similar templates for perception and saccadic actions during search does not necessarily imply that a single pathway mediates both perception and action nor are the findings incompatible with the existence of separate magnocellular and parvocellular pathways. However, the findings are consistent with the notion that pathways for perception and oculomotor control share visual information due to their large overlap (Dassonville & Bala, 2004; Gegenfurtner et al., 2003; Krauzlis & Stone, 1999; Stone & Krauzlis, 2003). For the case of saccadic eye movements, visual cortical pathways through the frontal eye fields (Schall, 1995) and the lateral intraparietal cortex (Goldberg, Bisley, Powell, Gottlieb, & Kusunoki, 2002) play critical roles, as well as brainstem and cortical pathways through the superior colliculus (McPeek & Keller, 2004). 
Target prevalence and rewards
Compensating for target prevalence
Optimizing overall perceptual performance across target-present and -absent trials (assuming equal value for both types of trials) requires adjusting the tendency to say “yes” or “no” based on any prior information about the prevalence of the target. If the observer knows a priori or has learned through practice that the target is present in 90% of the trials, then simply saying “target present” more often than “target absent” will improve the proportion of trials with correct decisions. With certain assumptions, signal detection theory allows calculating the optimal criterion from the prior probabilities and the detectability of the signal (Green & Swets, 1989). Early psychophysical studies demonstrated that observers change their propensity toward particular decisions based on the prior probability of occurrence of visual, auditory, and taste signals (Linker, Moore, & Galanter, 1964; Tanner, Swets, & Green, 1956). 
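To make the computation concrete, here is a minimal sketch of optimal criterion placement under the standard equal-variance Gaussian signal detection assumptions; the function names and the d′ and prevalence values are illustrative, not taken from the studies cited above.

```python
import numpy as np
from scipy.stats import norm

def optimal_criterion(p_present, d_prime):
    """Optimal yes/no criterion for equal-variance Gaussian SDT.

    With equal payoffs, the ideal observer responds "present" when the
    likelihood ratio exceeds beta = P(absent) / P(present), which on the
    internal response axis corresponds to the criterion
        x_c = ln(beta) / d' + d' / 2.
    """
    beta = (1.0 - p_present) / p_present
    return np.log(beta) / d_prime + d_prime / 2.0

def proportion_correct(p_present, d_prime):
    """Overall accuracy when the criterion is placed optimally."""
    x_c = optimal_criterion(p_present, d_prime)
    p_hit = 1.0 - norm.cdf(x_c, loc=d_prime)  # P("yes" | target present)
    p_cr = norm.cdf(x_c, loc=0.0)             # P("no" | target absent)
    return p_present * p_hit + (1.0 - p_present) * p_cr

# A 90%-prevalent target shifts the optimal criterion toward "yes":
for p in (0.5, 0.9):
    print(p, round(optimal_criterion(p, 1.5), 3),
          round(proportion_correct(p, 1.5), 3))
```

As the example output shows, exploiting a 90% prior raises overall accuracy above what the same d′ yields at 50% prevalence, purely through criterion placement.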
More recently, there has been an interest in determining whether the prevalence of the target changes how humans search in fundamental ways aside from the decision criterion. The classic studies indicated that for simple detection tasks observers changed their decision criterion without altering their sensitivity to the target (operating along the same receiver operating characteristic (ROC) with varying prevalence; Linker et al., 1964; Tanner et al., 1956). 
A study by Gur et al. (2003) used medical images with lesions to study the effect of target prevalence and concluded that it did not change the observers' ability to detect the target (area under the ROC was the same across prior probabilities) but did change the use of confidence ratings with lower prevalence increasing the confidence that the target was present (Gur et al., 2007). Using an artificial baggage-screening task, Wolfe, Horowitz, and Kenner (2005) found high miss rates when targets were rare (Figure 6a) but did not discuss or investigate whether these errors could be explained by a change in criterion consistent with previous literature. In a more recent study, errors with low target prior probabilities were attributed to two distinct processes related to the classic decision criterion shift (Green & Swets, 1989) but also to a quitting threshold (Wolfe & Van Wert, 2010). When prevalence is low and targets are rare, observers will quit search earlier, missing more targets. Fleck and Mitroff (2007) attributed misses in low target prevalence conditions to response-execution errors due to fast responses, not perceptual or identification errors. They provided observers an opportunity to correct their last response and showed that observers can catch their mistakes. A follow-up study showed that motor-response errors cannot account for all the errors and that shifts in decision criterion are needed to explain the increase in misses (Van Wert, Horowitz, & Wolfe, 2009; also see Rich et al., 2008 for types of errors based on the nature of the search task). 
Figure 6
 
(a) Percentage of targets missed as a function of number of objects in a simulated X-ray baggage search task with varying prevalence (black: low prevalence, 1%; gray: medium prevalence, 10%; white: high prevalence, 50%; Wolfe et al., 2005). (b) Effect of target prior probabilities (top) and reward (bottom) on the saccade bias of a 2-alternative forced-choice saccade-to-target task (from Liston & Stone, 2008). (c) Top: ERP components N2pc and SPCN as a function of reward. Bottom: Topographic plots of activity for two temporal intervals (180–230 ms and 360–500 ms) for low and high rewards (reproduced from Kiss et al., 2009).
Neural mechanisms: Little is known about the neural mechanisms mediating decision criterion shifts in response to target prevalence in visual search. In the domain of recognition memory, studies have suggested that activations associated with shifting criteria (due to target probability) are located in bilateral regions of the lateral cerebellum, lateral parietal lobe, and the dorsolateral prefrontal cortex extending from the supplementary motor area (Miller, Handy, Cutler, Inati, & Wolford, 2001). Studies have also shown that category criterion shifts or decision boundaries are reflected in activity of FEF neurons (Ferrera, Yanike, & Cassanello, 2009). A number of theoretical neuroscience papers have argued that a possible way to implement a decision criterion is to compare activity across neurons and make a decision based on the sign of the difference in activity (Gold & Shadlen, 2001). Changes in prevalence or signal prior probabilities can be implemented as an additive bias related to the logarithm of the prior probability (Gold & Shadlen, 2001). 
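A minimal sketch of that additive-bias scheme follows, assuming the sensory evidence has already been reduced to a scalar log-likelihood ratio (e.g., a scaled difference between two pooled neural responses); the function name and values are illustrative.

```python
import numpy as np

def biased_decision(evidence_llr, p_present):
    """Decide 'target present' from evidence plus a log-prior bias.

    Following the scheme discussed by Gold and Shadlen (2001), the prior
    enters as an additive offset equal to the log prior odds, and the
    choice is simply the sign of the biased quantity.
    """
    log_prior_odds = np.log(p_present / (1.0 - p_present))
    return (evidence_llr + log_prior_odds) > 0.0

# With 90% prevalence the bias is log(9) ~ +2.2, so even mildly negative
# sensory evidence still yields a "present" response:
print(biased_decision(evidence_llr=-1.0, p_present=0.9))  # True
```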
Adjusting search strategy based on rewards
Rewards and costs are central to visual search in the real world. Failing to find a target might carry high costs while false positives have few consequences; in other instances, a false positive decision might have the higher cost (e.g., a major surgery following incorrectly identified disease). Different targets might have different associated rewards (e.g., fruits with different nutritional values), and target locations might have different costs (e.g., finding a suspicious package next to a crowd of people has potentially more serious consequences than finding it in an open space). Classic studies (Green & Swets, 1989) have documented that, with binary decisions, human observers adjust their propensity to choose one decision category based on the reward (i.e., value) associated with each of the decisions in order to maximize the benefits or minimize the costs. For multielement visual search, observers will also optimize their propensity to say “target present” based on the reward of finding the target (Navalpakkam, Koch, & Perona, 2009). When the search array contains various possible targets with varying associated rewards, humans will optimize the accrued total reward by biasing their decision choices toward the high-reward targets (Navalpakkam, Koch, Rangel, & Perona, 2010). If making an eye movement to the target is rewarded, observers will also make more frequent saccades toward the high-reward targets than toward lower reward targets (Navalpakkam et al., 2010). If finding the target at various locations is rewarded differently, observers will optimize their strategy by making more frequent saccades to high-reward locations (Figure 6b; Liston & Stone, 2008). If saccades toward a target region in space are rewarded and the region is surrounded by a cost region, observers will adjust their eye movement plans to avoid the cost region, taking into account the inherent variability of their saccade landing positions (Stritzke et al., 2009). However, saccades with short latencies are less driven by the reward value and more by saliency (Stritzke et al., 2009). Saccade latencies are also shortened toward locations associated with higher reward values (Sohn & Lee, 2006) or higher expected values (the product of probability of reward and reward magnitude; Milstein & Dorris, 2007). 
All of these studies directly rewarded the saccades, which is arguably different from the real-world scenario in which the goal of eye movements is to gather visual information to support subsequent rewarded decisions and/or actions. There is evidence suggesting that even when the final search decision, rather than the eye movement, is rewarded differently across locations, observers will still bias their eye movements to maximize the rewards accrued in a follow-up decision (Eckstein, Schoonveld, & Zhang, 2010). 
Neural mechanisms: Single-cell recording studies have shown that the activity of neurons in LIP is influenced by the rewards associated with potential saccade targets (Dorris & Glimcher, 2004; Platt & Glimcher, 1999; Sugrue, Corrado, & Newsome, 2004). Furthermore, LIP activity reflects a subjective value that is a non-linear transformation of the reward value (Dorris & Glimcher, 2004). Studies suggest that the reward signal might originate in the basal ganglia (Hikosaka, Nakamura, & Nakahara, 2006) and is also present in the caudate nucleus (Lau & Glimcher, 2007; for a review on rewards, see Trommershäuser, Glimcher, & Gegenfurtner, 2009). Leon and Shadlen (1999) showed that the dorsolateral prefrontal cortex (area 46), but not the FEF, is modulated by reward. However, later studies using memory-guided saccades to targets of varying reward have found modulations of FEF activity by reward (Ding & Hikosaka, 2006; Roesch & Olson, 2003). SC neuronal activity is also modulated both in anticipation of an expected reward at the neuron's response field and reactively to the visual presence of a stimulus that indicates an upcoming reward (Ikeda & Hikosaka, 2003). In addition, neurons in the supplementary motor area, a region related to movements of the body and limbs, are also modulated by the expectation of reward in the post-eye-movement period of oculomotor tasks (Campos, Breznen, Bernheim, & Andersen, 2005). 
Event-related potential (ERP) studies have also shown that the N2pc component is modulated by the reward assigned to targets defined by different color features, with the component appearing earlier and with larger amplitude as reward increases (Figure 6c; Kiss, Driver, & Eimer, 2009). 
Visual search as a tool to study models of covert attention
Covert attention refers to the ability to select part of the visual scene for processing irrespective of point of fixation. Visual search is arguably one of the most influential paradigms to study covert visual attention (the others being dual tasks and cuing paradigms). The main manipulation in this subfield is to vary the number of distractors among which the target is embedded and measure either the time to find the target (reaction time; Neisser, 1964) or the accuracy in detecting the target when the element display is briefly presented (Estes & Taylor, 1964). Typically, the degradation of performance (time or accuracy) with the total number of elements in the display (i.e., set-size effect) is used to draw conclusions about how covert attention operates and selects visual information. Is visual attention deployed in a temporally serial manner, selecting one item at a time, or does it process all items simultaneously in parallel? At the core of this debate is also what limits search performance as the number of search items (i.e., distractors) increases. This paradigm can be traced back to early work in the 1960s by Neisser, Estes (Estes & Taylor, 1964), and Shiffrin (Shiffrin & Gardner, 1972; Shiffrin & Schneider, 1984) among others. Below, I discuss the main models put forward to explain set-size effects and their defining components. 
Serial covert attention models
Feature integration theory
In this model, visual attention is a temporally serial mechanism that randomly chooses and processes one item at a time. On average, we would therefore expect search times to increase linearly with set size, and linearly increasing response time functions were taken as support for this model of covert attention. The serial model of visual attention was further popularized in the 1980s with the rise of Feature Integration Theory (FIT; Treisman & Gelade, 1980), which, in its inception, proposed that individual features were processed in parallel (pre-attentively) and that visual attention, operating serially from item to item, was needed to assign or bind different features to an item. 
The theory was put forward to explain the dichotomy between the shallow set-size functions of feature search displays (Figure 7a) vs. the steep (serial-like) set-size functions of conjunction search displays (Figure 7b). In conjunction displays, the target shares physical attributes (referred to as features) with two different types of distractors and is defined only by the joint presence of two features (conjunction; Figure 7b). In feature displays, a single physical attribute differentiates the target from all distractors (Figure 7a). The theory was also used to explain the drastic effect on search difficulty of interchanging the assignment of the same two items as target or distractor (e.g., search asymmetries; Dosher, Han, & Lu, 2010; Treisman & Gormican, 1988; Wolfe, 2001). For example, search (response times and/or search accuracy) for a tilted line among vertical line distractors is rather easy and insensitive to the number of distractors (parallel search), while looking for a vertical line among tilted distractors is difficult. Similar effects are obtained with a variety of items (Wolfe, 2001) but not all. 
Figure 7
 
(a) A feature search display for which the target can be distinguished from all distractors along one physical attribute (feature). (b) Conjunction display for which the target shares attributes with each distractor and for which the target can only be distinguished from all distractors by the joint presence of two features (color and shape). (c) Reaction time vs. set-size search slopes for a variety of displays (from Wolfe, 1998). (d) Proportion correct as a function of set size for briefly presented search displays for a target-known search and an oddity search for three different observers. Continuous lines are fits of an ideal single fixation observer (from Schoonveld et al., 2007).
To account for criticisms related to uncontrolled target–distractor differences and distractor–distractor similarity (Duncan & Humphreys, 1989), FIT underwent later refinements in which the serial processor (attention) was also needed for difficult feature searches (Treisman, 1991). A feature inhibition mechanism was also proposed to account for fast search for highly discriminable conjunctions (Treisman & Sato, 1990). 
Serial attention guided by parallel processing (guided search)
The large number of counterexamples arguing against a strict dichotomy between serial and parallel search gave rise to a more nuanced model that could produce a range of search time dependencies on set size (i.e., search slopes; Figure 7c). The Guided Search (GS) model (Wolfe, Cave, & Franzel, 1989) finds its roots in previous two-stage models such as those introduced by Hoffman (1978) 10 and Neisser (1964). The model has undergone various updates and changes through the years. The original GS model retains FIT's covert attention as a serial processor, but, unlike FIT, attention is not randomly deployed across items in the display; rather, it is guided by parallel processing, across the visual field, of the elements in the display. The model (GS 2.0; Wolfe, 1994) starts with stochastic processing of items determined by an interaction between bottom-up properties of the elements and top-down effects. Display items are further processed if they exceed an activation threshold. Attention then serially processes one item at a time, starting with the item with the highest activation and proceeding to the 2nd highest activation and so on. Importantly, if serial attention processes an item, it can determine with 100% accuracy whether the item is the target or a distractor. After a certain search time, the model quits, and if serial attention has not processed the target, it guesses, allowing for false alarm decisions. Increasing target–distractor discriminability increases the probability that the target will be ranked highly in terms of activation; serial attention will thus reach the target faster, reducing predicted reaction times. 
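The selection-then-verification logic of GS 2.0 is easy to caricature in a few lines of code. The sketch below is an illustrative toy under the assumptions just described (noisy activations, a top-down boost for the target, error-free serial identification, and a quitting time), not Wolfe's implementation; all names and parameter values are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def guided_search_trial(n_items, target_present, guidance=1.0,
                        noise=1.0, t_per_item=50.0, t_quit=2000.0):
    """One toy GS 2.0-style trial; returns (decision, RT in ms).

    Items receive noisy activations; the target (item 0) gets a
    top-down boost. Serial attention visits items in decreasing order
    of activation, identifies each one perfectly, and quits (then
    responds 'absent') once t_quit is exceeded.
    """
    activation = rng.normal(0.0, noise, n_items)
    if target_present:
        activation[0] += guidance
    for visit, item in enumerate(np.argsort(-activation), start=1):
        t = visit * t_per_item
        if t > t_quit:
            return "absent", t_quit      # quit -> miss on present trials
        if target_present and item == 0:
            return "present", t
    return "absent", n_items * t_per_item

# Stronger guidance ranks the target earlier on average, flattening the
# RT x set-size slope; guidance = 0 recovers random serial search.
```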
In its latest update (GS 4.0; Wolfe, 2007), the model is expanded to account for an array of new phenomena, including guidance by scene statistics through a pathway that is not subject to an attentional bottleneck. An important change in GS 4.0 is the inclusion of a drift diffusion process with noise that replaces the error-free attentional processor. In GS 4.0, serial attention selects one item at a time, which then enters the diffusion process, where evidence accumulates in a stochastic fashion for each item. This allows the model to generate errors even when attention processes the item, as well as false alarm decisions in target-absent trials, without resorting to guessing as in GS 2.0. Multiple items can be in the diffusion process at a time as long as some capacity limit is not exceeded. When the number of items exceeds the capacity limit, an item has to be dismissed before another one enters the diffusion process. Thus, even with this change, GS 4.0 preserves an attentional selection bottleneck to predict a variety of effects (see Wolfe, 2007, for discussion of the arguments). 
Parallel limited capacity models of search
Even before the popularization of serial attention in the 1980s with FIT, work in the late 1960s and 1970s discussed how increasing reaction times with set size could be accounted for not only by a serial processor but also by various parallel models. The limited capacity parallel model assumes that all items are processed at once in parallel but that the rate of processing of one item might be affected by the simultaneous processing of additional items (Snodgrass & Townsend, 1980). In this way, the larger the number of items to be processed, the slower the rate of processing per item, leading to increasing overall reaction times. Snodgrass and Townsend (1980) and Townsend (1972) have pursued various types of parallel models that can predict response times as well as a serial processor can and have emphasized the difficulty of distinguishing serial vs. parallel models. The theory of visual attention (TVA; Bundesen, 1990; Bundesen et al., 2005) can also be regarded as a limited-resources parallel model. In TVA, the rate of processing depends on the attentional resources that are divided across items or allocated differentially across items (Bundesen, 1990). In the neural-based version of the theory (Neural Theory of Visual Attention, NTVA; Bundesen et al., 2005), each sensor is modeled as a Poisson process with attention increasing the signal-to-noise ratio. 11 The more items in the display, the fewer the attentional resources that can be allocated to each item, leading to a smaller increase in the mean rate of the Poisson process and, thus, a lower signal-to-noise ratio. 
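A small sketch of the capacity-sharing idea follows, under the assumption (mine, for illustration) that the target's rate advantage is a fixed proportion of the rate allocated to each item: dividing a fixed total rate C among N Poisson channels lowers the per-item signal-to-noise ratio as N grows.

```python
import numpy as np

def snr_per_item(total_capacity, n_items, delta=0.2, duration=1.0):
    """Per-item signal-to-noise ratio under capacity sharing.

    Each item accumulates Poisson counts at rate r = C / N (distractor)
    or r * (1 + delta) (target). Because the variance of a Poisson count
    equals its mean, SNR = (mean_t - mean_d) / sqrt(mean_d)
    = delta * sqrt(r * duration), which shrinks as more items share
    the capacity.
    """
    r = total_capacity / n_items
    mean_d = r * duration
    mean_t = r * (1.0 + delta) * duration
    return (mean_t - mean_d) / np.sqrt(mean_d)

for n in (2, 4, 8, 16):
    print(n, round(snr_per_item(100.0, n), 3))
```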
Unlimited capacity parallel models with stochastic noise
Signal detection models
A third category of models starts with the notion of stochastic processing of display elements and has its roots in classic spatial vision and the theory of signal detectability (signal detection theory, SDT; Green & Swets, 1989; Peterson, Birdsall, & Fox, 1954). 12 The concept of noise as limiting performance can be traced back to Blackwell (1946). Two consequences arise from the adoption of stochastic processing as an inherent property of models: (a) Increasing the number of distractors will increase the probability that any one distractor is confused with the target, degrading performance without assuming any change (limited capacity) in the visual processing of individual items; (b) eliminating distractors or directing attention via cues to a subset of items will improve behavioral performance. Many signal detection models show these properties even if their decision rules are not optimal. Although some early researchers applied these models to problems of detecting signals in noise (Burgess & Ghandeharian, 1984b; Swensson & Judy, 1981), their applicability and relevance to the mainstream visual search literature in the 1980s was limited, arguably because the use of the reaction time paradigm, in which observers make few decision errors, does not motivate consideration of the stochastic nature of visual processing. In the 1970s and 1980s, Kinchla (1974), Shaw (1980, 1984), and Sperling and Dosher (1986) investigated these models and were influential in shaping a variety of studies by Palmer and colleagues (Palmer, 1994; Palmer et al., 1993, 2000), who applied the models to briefly presented displays and a variety of visual tasks. These types of models have been able to explain a variety of findings including feature search (Palmer, 1994; Palmer et al., 1993, 2000), feature/conjunction dissociations (Eckstein, 1998; Eckstein, Thomas, Palmer, & Shimozaki, 2000), triple conjunctions (Eckstein et al., 2000), oddity search (Figure 7d; Schoonveld et al., 2007), distractor heterogeneity (Dosher et al., 2010; Vincent et al., 2009), search asymmetries (Vincent, 2011a), orientation identification (Baldassi & Burr, 2000; Baldassi & Verghese, 2002; Cameron et al., 2004), and target localization in noisy displays (Bochud et al., 2004; Burgess, 1985; Swensson & Judy, 1981). 
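Consequence (a) is easy to demonstrate by simulation. The sketch below implements an unlimited-capacity parallel model with a maximum-of-outputs rule for a localization decision; per-item processing (d′) is unchanged across set sizes, yet accuracy still falls as distractors are added. Names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_rule_accuracy(n_items, d_prime, n_trials=20000):
    """Localization accuracy of an unlimited-capacity parallel model.

    Every item is processed with the same fidelity (d') regardless of
    set size; accuracy drops with n_items only because more noisy
    distractor responses get a chance to exceed the target's response.
    """
    target = rng.normal(d_prime, 1.0, n_trials)
    distractors = rng.normal(0.0, 1.0, (n_trials, n_items - 1))
    correct = target > distractors.max(axis=1)
    return correct.mean()

for n in (2, 4, 8, 16):
    print(n, max_rule_accuracy(n, d_prime=2.0))
```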
Bayesian ideal observer
For simple tasks, implementation of a signal detection model is straightforward and its relationship to an optimal ideal observer is well understood based on classic work (Green & Swets, 1989; Peterson et al., 1954). As the tasks become more complex (e.g., identification of a target among distractors, oddity search, distractor uncertainty), the number of possible ways to integrate noisy sensory data across multiple locations and features can give rise to a variety of signal detection models with distinct predictions (Baldassi & Burr, 2000; Baldassi & Verghese, 2002; Cameron et al., 2004; Shimozaki, Eckstein, & Abbey, 2003b; Vincent et al., 2009). Thus, in recent years, SDT models of visual search have been formalized under a normative Bayesian ideal framework that, under assumptions about statistical properties of the responses, leads to an unambiguous, unique decision rule and prediction (Eckstein, Pham, & Shimozaki, 2004; Eckstein, Shimozaki, & Abbey, 2002; Vincent, 2011b). The Bayesian ideal observer (BIO) for single fixation search does not necessarily have to best predict human performance as a function of set size but can serve as a standard of comparison against human performance and a variety of SDT models based on simple heuristics (maximum of outputs model, standard deviation model, etc.). Finally, recent implementations of these Bayesian optimal search models have been proposed in the context of biologically plausible mechanisms (Eckstein et al., 2009) and population codes (Ma, Navalpakkam, Beck, van den Berg, & Pouget, 2011). 13  
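For the yes/no search task under equal-variance Gaussian assumptions, the BIO decision variable has a simple closed form: the prior-weighted average of the local likelihood ratios. The sketch below is a minimal rendering of that rule (names are mine), in contrast to heuristic max- or sum-of-responses rules applied to the same data.

```python
import numpy as np

def bio_search_llr(responses, d_prime, prior=None):
    """Log-likelihood ratio for 'target present' in yes/no search.

    With M locations, equal-variance Gaussian responses, and a target
    at exactly one location on present trials, the likelihood ratio is
    the prior-weighted average of the local likelihood ratios
        l_i = exp(d' * (x_i - d'/2)).
    Responding 'present' when this exceeds a criterion is the unique
    ideal rule.
    """
    x = np.asarray(responses, dtype=float)
    if prior is None:
        prior = np.full(x.size, 1.0 / x.size)
    local_lr = np.exp(d_prime * (x - d_prime / 2.0))
    return np.log(np.sum(prior * local_lr))
```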
Summary
A fair summary of the state of the field of modeling covert attention in visual search is that covert attention undeniably can provide great improvements to search accuracy via either selection or differential weighting of task-relevant locations/visual information. Similarly, adding distractors will degrade performance due to the stochastic processing of each item in the display. Many of the results in the literature can be accounted for without resorting to either a serial or a limited capacity parallel mechanism. However, there is also clear evidence that for some more complex tasks, cuing fewer locations can lead to benefits beyond what is predicted by these SDT/BIO models, or conversely, adding distractors can have a detrimental effect on performance beyond what is predicted by unlimited capacity parallel models (Carrasco, 2011; Davis, Shikano, Peterson, & Keyes Michel, 2003; Palmer et al., 1993; Palmer, Fencsik, Flusberg, Horowitz, & Wolfe, 2011; Põder, 1999; Shaw, 1984). 
Moving forward, it should be clear that studies of covert attention cannot afford to confound contributions of eye movements and other low-level visual effects to search performance. Early experiments that sought to infer properties about covert attention during search were mostly conducted measuring response times as a function of set size without controlling for eye movements, element eccentricity, or element density. Drawing strong conclusions about the nature of covert attention requires controlling for these factors either by using brief displays or eye trackers. A final note is that full integration of covert attention within a multiple fixation model has been less common in the field. It is likely that future efforts will concentrate on integrating covert attention and eye movement models. 
Models of eye movements during visual search
What guides eye movements during search has been an important question for over three decades. Unlike the debate about whether covert attention is serial and/or limited capacity, there is a consensus that the human foveated visual system is indeed, for most tasks, a limited resource with high resolution at the fovea and diminishing acuity toward the periphery. The deployment of the high-resolution fovea can only be accomplished in a temporally serial manner to one location in space. Arguably, the earliest debate in the field was whether eye movements (and the fovea) during visual search were guided toward the target. There seems to be ample evidence suggesting that eye movements are indeed guided by information about the sought target (Beutter, Eckstein, & Stone, 2003; Findlay, 1997; Findlay, Brown, & Gilchrist, 2001). Still, the last decades have seen a variety of distinct computational models with distinct factors determining the point of fixation during search. 
Saliency
An influential family of models of attention falls into the category of saliency models. These models propose that both covert attention and eye movements are directed toward regions that are visually salient to observers. The original model (Itti & Koch, 2000; Itti, Koch, & Niebur, 1998; Koch & Ullman, 1985) extracts, in parallel, feature activity for each sampled point relative to its spatial surround and across spatial scales (Nakayama & Martini, 2011). This results in feature contrast maps along luminance, color, and orientation dimensions. A weighted sum across features for each location results in a feature-aggregate contrast map known as the saliency map. The model then uses the saliency map to determine the order of eye movements, starting with the most salient point and then moving to other points in decreasing order of saliency. Although the original saliency model could predict saccade endpoints above chance (Parkhurst, Law, & Niebur, 2002), so can a simple algorithm biased to saccade to the center of the image (Tatler, 2007). Many studies suggest that saliency models are not good predictors of eye movement endpoints when human observers are engaged in a specific task such as search (Einhäuser, Rutishauser, & Koch, 2008; Foulsham & Underwood, 2008; Pomplun, 2006; Torralba et al., 2006; see, in this special anniversary issue of JOV, Tatler et al., 2011 for a review on saliency). In particular, the original model does not differentiate salient features that define the sought target from those that are task-irrelevant. This makes pure bottom-up saliency-type models capable of predicting saccades in simple displays with sparse elements (Li, 2002) but less successful for more complex scenes that contain both task-relevant and task-irrelevant salient regions. Since the original proposal, there have been a large number of alternative implementations of the saliency model (Gao & Vasconcelos, 2009; Zhang, Tong, Marks, Shan, & Cottrell, 2008). Bruce and Tsotsos (2009) proposed a saliency model that computes the regions of an image that are most informative or least redundant relative to the visual information in the rest of the scene (attention by information maximization). The Zhang et al. model uses a calculation relative not only to information in the image but also to statistics from other images; in this sense, the model calculates a measure of how unusual a region of the image is relative to all other images (see also Itti & Baldi, 2009 for a related concept). The original saliency model has also been extended to dynamic movies, including new features such as motion and the abrupt appearance of objects (Carmi & Itti, 2006). 
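The core center–surround computation of such models is compact. The following sketch computes a single-channel (luminance) feature-contrast map as a difference of Gaussians pooled over scales; it is a bare-bones illustration in the spirit of the Itti and Koch architecture, omitting the color and orientation channels and the normalization operators of the full model, and all parameter values are mine.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(luminance, scales=(2, 4, 8), surround_ratio=4.0):
    """Single-channel center-surround saliency (difference of Gaussians).

    For each scale, the 'center' is a fine Gaussian blur and the
    'surround' a coarser one; their absolute difference is a feature
    contrast map, and the maps are averaged across scales.
    """
    img = np.asarray(luminance, dtype=float)
    acc = np.zeros_like(img)
    for s in scales:
        center = gaussian_filter(img, sigma=s)
        surround = gaussian_filter(img, sigma=s * surround_ratio)
        acc += np.abs(center - surround)
    return acc / len(scales)

# A pure bottom-up model would fixate the maximum first:
# y, x = np.unravel_index(np.argmax(sal), sal.shape)
```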
Saccadic targeting models
A second family of models assumes that saccades are directed toward target-like elements in the display or regions that contain target-type features. This is arguably the most prevalent type of model and encompasses a great number of proposed models (Beutter et al., 2003; Eckstein et al., 2006; Ehinger et al., 2009; Findlay, 1997; Pomplun, 2006; Rao, Zelinsky, Hayhoe, & Ballard, 2002; Zelinsky, 2008). Models differ on whether the similarity measure between the representation of the target and the incoming visual information is computed in image space, feature space, or some neurobiologically plausible filter space. In addition, some models explicitly incorporate the degradation of performance with retinal eccentricity, while others avoid the issue by restricting themselves to displays that contain elements at equal retinal eccentricity. Common to all these models is that a saccade is made toward the region containing the most sensory evidence of the presence of the target. I refer to these as saccadic targeting models. The notion that eye movement mechanisms use top-down information about the target is well supported by a variety of studies (see Knowledge of target visual properties section in this article). 
Maximum a posteriori probability model (MAP)
A formalization of the saccadic targeting model is given by the maximum a posteriori probability model (Beutter et al., 2003; Eckstein et al., 2001; Najemnik & Geisler, 2008). This model calculates, for every location that could contain the target, the posterior probability (or some monotonic transformation of it such as the log-likelihood ratio) that the target is at that location and distractors are at the remaining locations. The model then makes a saccade toward the location with the highest evidence of containing the target (highest posterior probability; see Figure 8 for an example of MAP saccades). Many studies show that human behavior is consistent with this strategy. The model can be further expanded to include information about varying prior probabilities that a location contains the target (Droll et al., 2009) or highly visible cues (e.g., other objects or features) that are predictive of the target location. An aspect of human saccades that does not seem to be predicted by these models is that, in some circumstances, humans will not make an eye movement to the most likely target location but instead move to a location that is less likely but closer to the current point of fixation (Araujo, Kowler, & Pavel, 2001). MAP models could be amended to account for these findings by adding a cost function that penalizes larger saccades relative to shorter ones. 
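A minimal sketch of the MAP saccade rule under equal-variance Gaussian assumptions follows (illustrative names; one target among M locations): terms that are constant across locations drop out, so the log posterior reduces to the log prior plus a term proportional to the local response.

```python
import numpy as np

def map_saccade(responses, d_prime, prior):
    """Saccade endpoint under the MAP rule: fixate the location with the
    highest posterior probability of containing the target.

    For equal-variance Gaussian responses, the log posterior at location
    i is (up to a constant) log(prior_i) + d' * x_i, so only the argmax
    is needed.
    """
    x = np.asarray(responses, dtype=float)
    log_posterior = np.log(prior) + d_prime * x
    return int(np.argmax(log_posterior))

# With a flat prior, the model simply saccades to the location with the
# largest response.
```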
Figure 8
 
Models of eye movements. Columns 1 and 2 are for an 8 AFC target localization task in white noise with different visibility maps (column 1, steep visibility map; column 2, broader visibility map). Column 3 corresponds to a 4 AFC configuration with uncued locations (black circles) having zero probability of containing the target. Rows 2, 3, and 4 correspond to predictions of three models: Saccadic targeting (maximum a posteriori probability model, MAP; Beutter et al., 2003), Ideal Searcher (IS; Najemnik & Geisler, 2005), and Entropy Limit Minimization (ELM; Najemnik & Geisler, 2009). Location of fixations for 1st (blue) and 2nd saccades (red) for three models (MAP, IS, and ELM). The MAP model simulations include small random saccade endpoint errors to facilitate visualization of the different fixations. Central cross indicates initial fixation point for all models (reproduced from Zhang & Eckstein, 2010).
Targeting with center-of-gravity saccades (Target Acquisition model)
The MAP model also does not capture the finding that humans can sometimes execute saccades that land at the center-of-gravity location between display elements (He & Kowler, 1989; McGowan, Kowler, Sharma, & Chubb, 1998; Zelinsky & Sheinberg, 1997). To accommodate the finding that early saccades during search are often directed in between objects, a number of models have proposed that the saccade is directed toward a weighted spatial average (centroid) of responses on a map of sensory evidence (Findlay & Walker, 1999; Zelinsky, 2008; see the sketch below). A recent model, the Target Acquisition model, uses top-down knowledge to guide saccades toward the target, combined with the averaging of spatial positions (centroid) and an inhibition of explored target-absent locations, to generate a coarse-to-fine strategy of eye movements in which initial eye movements tend toward centers of gravity and later saccades are directed toward the target (Zelinsky, 2008). 
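The centroid computation at the heart of these models is nearly a one-liner: a spatial average of candidate locations weighted by sensory evidence. The sketch below is illustrative (names are mine), not Zelinsky's Target Acquisition model, which additionally includes target guidance and inhibition of examined locations.

```python
import numpy as np

def centroid_saccade(evidence, locations):
    """Center-of-gravity saccade endpoint: the spatial average of
    candidate locations weighted by their sensory evidence, so early
    saccades land between clusters of target-like items.
    """
    w = np.asarray(evidence, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(locations, dtype=float)  # (x, y) endpoint
```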
Ideal searcher
The MAP model would indeed be optimal if the objective were simply to foveate the target, as in the laboratory tasks for which many non-human primates are trained. However, in real-world tasks, the goal of eye movements is to gather visual information to subserve a later perceptual decision after multiple fixation search. It is not always the case that foveating the location with the highest evidence of containing the target is the eye movement strategy that maximizes perceptual search performance. As a simple example, consider a task in which a target may appear at one of two possible locations and the observer has time for only one saccade before deciding whether the target is present or absent. In addition, presume that the detectability of the target drops at an increasing rate with increasing retinal eccentricity. In this case, fixating in between the two possible target locations might provide a compromise, processing both locations with little degradation in target detectability. This could lead to better perceptual performance than fixating one of the two possible target locations, which allows foveating one location but leaves the other to be processed with greatly degraded detectability. 
Najemnik and Geisler (2005) took note of these important conceptual distinctions and developed an ideal searcher that takes into account how detectability degrades with eccentricity (the visibility map), the search display configuration, and sensory information about the target to make fixations that maximize perceptual search performance. The model calculates the expected performance achieved in a perceptual decision under all possible fixations using the information already gathered about the possible target location. It then moves the fovea to the point of fixation that maximizes expected perceptual performance (see Figure 8 for examples of the ideal searcher for 8 AFC and 4 AFC with different visibility maps). Although finding optimal sequences of saccade endpoints suffers from a combinatorial explosion, Najemnik and Geisler (2005) argue that finding the single next optimal fixation is a good approximation. Among other things, this model predicts a higher frequency of saccades toward the upper and lower visual fields (relative to visual fields horizontally lateral to fixation), center-of-mass saccades, and average saccade lengths that are comparable to those of humans (Najemnik & Geisler, 2005, 2008). Under many scenarios, the ideal searcher and the MAP model will lead to very similar perceptual performance levels and eye movements (see 1st column of Figure 8; Zhang & Eckstein, 2010), while for other scenarios, the models make distinct predictions (Figure 8; 2nd and 3rd columns). In some circumstances, such as search for multiple targets (Verghese, 2010), humans will adopt a MAP strategy even when it departs from the ideal. 
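The following sketch conveys the greedy one-fixation-ahead logic. It is a deliberately simplified stand-in, not Najemnik and Geisler's implementation: candidate fixations are restricted to the item locations (the real model evaluates a fine grid, which is what permits center-of-mass fixations), and expected post-saccade accuracy is approximated with a 2AFC-style term rather than the full M-alternative computation. All names are mine.

```python
import numpy as np
from scipy.stats import norm

def next_fixation_ideal(posterior, locations, visibility):
    """Greedy ideal-searcher step: choose the fixation that maximizes
    expected accuracy of the final decision, not (necessarily) the most
    probable target location.

    posterior  : (M,) probability that each location holds the target
    locations  : (M, 2) display coordinates of the candidate locations
    visibility : function mapping retinal eccentricity -> d'
    """
    best_f, best_score = None, -np.inf
    for f in locations:                           # candidate fixations
        ecc = np.linalg.norm(locations - f, axis=1)
        dp = visibility(ecc)                      # d' at each location from f
        # Crude stand-in for expected post-saccade accuracy: probability
        # mass weighted by how detectable the target would be from f.
        score = np.sum(posterior * norm.cdf(dp / np.sqrt(2.0)))
        if score > best_score:
            best_f, best_score = f, score
    return best_f
```

With a shallow visibility map, fixations that serve several high-probability locations at once can outscore fixating the single most probable location, which is the conceptual difference from the MAP rule.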
Entropy minimization models
Another family of models attempts to minimize the uncertainty about the possible states of the stimulus (Legge, Klitz, & Tjan, 1997; Renninger, Coughlan, Verghese, & Malik, 2005; Renninger, Verghese, & Coughlan, 2007) or about a subset of states considered for the decision (Najemnik & Geisler, 2009). Information theory formally relates the concept of reduction in uncertainty to information gain. A common measure of uncertainty is given by entropy metrics, which quantify the degree to which the probabilities of the various states (or hypotheses: target present vs. absent, location of target, target type) are similar. When each of the possible states has similar probability, uncertainty is high (high entropy); when a few states have high associated probabilities and the remaining states have low probabilities, uncertainty is low (low entropy). Renninger et al. (2007) used an entropy minimization model (also see Legge et al., 1997) for a shape identification task and showed that saccades are directed toward points that minimize local uncertainty rather than global uncertainty. In the context of visual search, Najemnik and Geisler (2009) developed an entropy minimization model for a search target localization task. For this task and under some assumptions, they were able to simplify the ELM model to a simple algorithm that takes spatially weighted averages (a convolution) of the sensory evidence (posterior probabilities) with a function given by the square of the visibility map. The model produces a pattern of results similar to that of the ideal searcher (Najemnik & Geisler, 2009; Zhang & Eckstein, 2010; see Figure 8). The contribution of the simplified version of the ELM model is that it is a simple computation that is more likely to be implemented in the brain than the more complex ideal searcher algorithm. 
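Under those simplifications, the ELM fixation rule reduces to a single convolution. The sketch below assumes (my choice, for illustration) that the posterior and the visibility map are given as 2D arrays defined on the same grid, with the visibility kernel expressed relative to the fixation point.

```python
import numpy as np
from scipy.ndimage import convolve

def elm_next_fixation(posterior_map, visibility_map):
    """ELM fixation rule (after Najemnik & Geisler, 2009): convolve the
    posterior-probability map with the square of the visibility map and
    fixate the location where the result is largest.
    """
    kernel = visibility_map ** 2
    kernel = kernel / kernel.sum()        # normalized pooling weights
    expected_gain = convolve(posterior_map, kernel, mode="constant")
    return np.unravel_index(np.argmax(expected_gain), expected_gain.shape)
```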
Visual search in the real world
Supplementary Materials presents interviews with experts in three different real-world search tasks: (1) a radiologist who scrutinizes X-ray and computed tomography images for abnormalities and disease (Figure 9a); (2) an analyst who inspects satellite images looking for intelligence-related targets; (3) a fisherman who searches the ocean surface from a hill looking for schools of various fish types (Figures 9b and 9c). Boxes 1 through 3 present the interviews, conducted through email for the radiologist and the satellite image analyst and live for the fisherman. In summarizing these interviews, it is of value to relate them to the mechanisms and strategies discussed in this paper. Knowledge of the target is mentioned as key to successful search by all three experts: Dr. Kundel (expert radiologist) discusses the importance of learning by showing residents images of proved cases along with actual anatomical and pathological specimens. The satellite image analyst highlights target knowledge as the most important factor and relates common analyst mistakes to lack of knowledge for some types of targets. The fisherman identifies redness on the ocean surface as the main visual feature indicating the presence of a school of sardines. Contextual cues and/or scene context are also mentioned as important factors facilitating search. The context of the analyzed target in terms of surrounding structures seems critical in identifying targets in satellite imagery. Understanding spatial relationships and likely locations is discussed as important: “Understanding where things are supposed to be helps you search faster.” For the fisherman, specific birds diving toward the ocean play a particularly important role in serving as a cue to guide search toward the likely location of the school of sardines. 
Figure 9
 
(a) Eye movements after 3 and 14 s for a radiologist scrutinizing a chest X-ray (reproduced from Kundel & Wright, 1969). (b) Expert fishermen and watchmen from the town of Pampatar in Margarita Island, Venezuela, with experience ranging from 12 to 50 years. The interview with Ramon Moncho Labori (bottom left picture) is reported in the current paper. (c) Photograph of the ocean on a clear day while fishermen capture a school of sardines. Red areas enclosed by the fishing boats signal the presence of the school of sardines. Darker areas at the bottom of the image are due to ocean topography, algae, rocks, etc. (d) On May 11, 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov.
Distractor/background knowledge is also mentioned as an important asset to minimize errors or save time. The satellite image analyst states: “Knowing what things look like in your part of the world saves lots of time in terms of time spent researching strange things you may come across.” The fisherman discusses how experience with the site of search allows him to discount distractor spots created by the ocean topography (stones and rocks) that mimic the redness of the sardines. He also mentions ways to ignore red spots produced by other distractor fish based on the behavior of the diving birds. Thus, here, the cue (the bird) not only serves to indicate the likely location of the sardines but also helps discriminate the target from the distractor. Together, the interviews reveal that even though the specifics of the search tasks are incredibly different, expertise in all three tasks is based on a complex set of knowledge about targets, backgrounds, and context. Proficiency in each of these search tasks requires organisms to have evolved brain machinery to learn visual properties of their environments and also prolonged task-specific experience to acquire knowledge unique to each visual environment. 
Epilogue: The library of Babel, sardines in the Caribbean, and IBM's Deep Blue
In Jorge Luis Borges' story, the library of Babel (see the quote at the beginning of this review paper) is composed of “an indefinite and perhaps infinite number of hexagonal galleries, with vast air shafts between, surrounded by very low railings.” The inquisitors, expert searchers, aim at finding meaningful books among distractor books containing all possible combinations of letters and thus forming sequences of nonsense words. Their eternal and fruitless search arises from the uncertainty and lack of structure in their search space and the lack of cues characterizing target books with meaningful text embedded in the immense number of distractor books containing letters in random orders. In many ways, this is likely what organisms would face if their brains had not evolved mechanisms able to learn statistical regularities in the environment about the properties of targets, distractors, likely locations, and contextual cues. It is difficult to imagine such an experience, but one can recreate how fruitless search can be when one first encounters an unfamiliar specialty task. Without knowing the properties of the targets and backgrounds, search becomes a futile effort in which one defaults to fixating salient structures. As an example, Figure 9a shows the pattern of eye movements of an expert radiologist scrutinizing a chest X-ray. Would one, without radiological expertise, make such eye movements? Clearly not, as studies have shown how expertise affects eye movement patterns (Kundel & LaFollete, 1972). Similarly, would one, without expertise, discriminate between the school of sardines and shades related to rocks and underwater vegetation in Figure 9c? 
Not knowing the crucial set of features of the target and distractors was likely the challenge facing the development of an algorithm to find the interstellar dust particles in NASA's collector plate. A computer vision algorithm could surely be developed, but it would have required a great amount of energy, time, and resources. Even then, it is not certain that the computer would have outperformed humans. Take, for example, computer-aided detection in medical imagery, which has been in development for over twenty years. Although there is some consensus that a computer aid can benefit radiologists' classification of an abnormality, there is less agreement on whether human performance benefits from the computer marking possible lesion locations (Eadie, Taylor, & Gibson, 2011; Fenton et al., 2011). Does this imply that machines will not exceed humans in their abilities to perform visual search? Similar debates have taken place before, such as the debate about whether a machine would ever beat a chess master. The issue was put to rest when IBM's Deep Blue defeated Garry Kasparov in a six-game match in May 1997 (Figure 9d). 14 Thus, the likely answer is that it is a matter of time until machines are inspecting baggage at airports, along with medical and satellite images. In the meantime, understanding the strategies, limitations, computations, and neural mechanisms of search by biological organisms will allow us not only to gain knowledge about a critical behavior in human and non-human animals but might also help improve human performance in life-critical search tasks and the development of better artificial search systems to aid humans or even replace them altogether. 
Supplementary Materials
Supplementary PDF 
Supplementary Figure 
Supplementary Figure 
Supplementary Figure 
Supplementary Figure 
Acknowledgments
I thank Cruz Acosta and Luis Miguel Martinez for their help in interviewing the local fisherman in Margarita Island, Venezuela, during December 2007. Without their help, these interviews would not have been possible. I would also like to thank the National Geospatial Agency for facilitating the interview with the satellite image analyst; Harold Kundel, our anonymous satellite image analyst, and Moncho Albori for their thoughtful and thorough answers; and Andrew Westphal from the Berkeley Space Laboratory for filling me in on the details about the challenges of the Stardust Particle Project. James Bisley and Rich Krauzlis helped with my inquiries about neurophysiology and Barry Giesbrecht with ERP and fMRI studies. Matt Peterson, Steve Mack, and Emre Akbas provided insightful comments on drafts of the paper and help in manuscript preparation. Preeti Verghese and an anonymous 2nd reviewer provided great assistance in clarifying points and improving the paper. This work was funded by NSF 0819592, NIH-NEI EY015925, Army Grant W911NF-09-d-0001, and IC Grant 2011-11071400005 to M.E. 
Commercial relationships: none. 
Corresponding author: Miguel P. Eckstein. 
Email: eckstein@psych.ucsb.edu. 
Address: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93116, USA. 
Footnotes
1  Here, the phrase “all the time” is used not in a literal sense but rather as a colloquialism to refer to “often”.
2  The samples identified as potential particles would be further analyzed to determine whether their components were consistent with interstellar dust.
3  Although the Berkeley group had some analog shots from laboratory experiments to simulate interstellar dust, these had questionable fidelity and there was uncertainty whether they would reflect how the dust particles collected in space would look.
4  A few exceptions such as a faint dot in scotopic light levels and flickering lights at certain frequencies will be better detected in the visual periphery.
5  Of course, the saliency of the target increases the probability of its detection. However, such reasoning already implicitly involves task-relevant goals since one is considering the target relative to the background. Such reasoning does not qualify as considering “saliency” in isolation of task-relevant goals.
6  Efficiency in this paper refers both to the ratio of ideal to human squared contrast thresholds and to the slope of the response time (RT) vs. set-size function.
7  The map has been referred to as a saliency map in the past. However, the term saliency, in a strict sense, has traditionally been associated with the visibility of image properties irrespective of behaviorally relevant goals or tasks, and there is ample evidence that all three areas (LIP, SC, and FEF) are modulated by task, reward, and prior probabilities. Thus, more recent treatments (Bisley & Goldberg, 2010) refer to the map as a priority map, a term more inclusive of the organism's behavioral goals.
8  To make this comparison meaningful, target–distractor similarities need to be controlled and kept constant across both tasks (oddity search and target–distractor known search).
9  Sparse coding is a coding scheme in which each stimulus elicits strong activation from a relatively small set of neurons while most neuronal units remain mostly inactive.
10  One important difference between the original Hoffman model and the original Guided Search is that the Hoffman model did not contain an activation threshold while GS does.
11  In the TVA model, the mean rate of the process is proportional to the attention allocated to the item. Because for a Poisson process the variance equals the mean, scaling up the rates of the target and distractor processes increases the difference between their mean responses faster than the standard deviation of the responses grows, thereby increasing the signal-to-noise ratio.
12  Unlimited-capacity parallel models can also predict response times that increase with the number of search items. When the comparison times are independent and exponentially distributed, the time for exhaustive scanning of n items increases approximately with the logarithm of the number of items (Snodgrass & Townsend, 1980).
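This follows from a standard result for the maximum of \(n\) independent exponential variables with a common rate \(\lambda\) (a textbook derivation, sketched here):
$$E\!\left[\max(T_1,\ldots,T_n)\right] = \frac{1}{\lambda}\sum_{k=1}^{n}\frac{1}{k} = \frac{H_n}{\lambda} \approx \frac{\ln n + \gamma}{\lambda},$$
where \(H_n\) is the \(n\)th harmonic number and \(\gamma \approx 0.577\) is the Euler–Mascheroni constant. The exhaustive finishing time therefore grows roughly logarithmically with the number of items even though all comparisons proceed in parallel.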
13  Although the unlimited-capacity parallel model is arguably the most common SDT-based model, nothing prevents the SDT or even the BIO framework from including various hypothesized attentional components. In this sense, the modeling of covert attention can resemble the sequential ideal observer modeling applied to the subfield of spatial vision (Geisler & Davila, 1985): hypothesized limitations are incorporated into an ideal observer-type model. For example, in addition to the inherent noise in processing, a model could include a serial attention component (Eckstein, 1998) or limited resources, such as variability in the processing of an individual item that increases with set size (Eckstein et al., 2009; Palmer et al., 2000).
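As a concrete illustration of this modeling strategy, the following minimal sketch (toy code of my own, not the implementation used in any of the cited papers) simulates a max-rule SDT search observer in which a hypothetical noise_growth parameter makes per-item noise increase with set size, turning the unlimited-capacity model into a limited-resource variant:
```python
# Minimal sketch of an SDT max-rule search observer (toy code, not the
# cited papers' implementations). noise_growth = 0 gives the standard
# unlimited-capacity parallel model; noise_growth > 0 is a hypothetical
# limited-resource variant in which per-item noise grows with set size.
import numpy as np

rng = np.random.default_rng(1)

def proportion_correct(dprime, set_size, trials=200_000, noise_growth=0.0):
    """Localization accuracy by the max rule: the observer picks the
    location with the largest internal response."""
    sigma = 1.0 + noise_growth * (set_size - 1)
    target = rng.normal(dprime, sigma, size=trials)
    distractors = rng.normal(0.0, sigma, size=(trials, set_size - 1))
    # A trial is correct when the target location yields the max response.
    return float(np.mean(target > distractors.max(axis=1)))

for n in (2, 4, 8, 16):
    print(f"set size {n:2d}: unlimited = {proportion_correct(2.0, n):.3f}, "
          f"limited = {proportion_correct(2.0, n, noise_growth=0.1):.3f}")
```
Both variants predict accuracy that declines with set size; the limited-resource variant simply declines faster, which is the kind of signature one can test against human data.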
14  Deep Blue won 2 games, lost 1, and drew 3.
References
Abbey C. K. Eckstein M. P. (2002). Classification image analysis: Estimation and statistical inference for two-alternative forced-choice experiments. Journal of Vision, 2(1):5, 66–78, http://www.journalofvision.org/content/2/1/5, doi:10.1167/2.1.5. [PubMed] [Article] [CrossRef]
Abbey C. K. Eckstein M. P. (2007). Classification images for simple detection and discrimination tasks in correlated noise. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 24, B110–B124. [CrossRef]
Anton-Erxleben K. Stephan V. M. Treue S. (2009). Attention reshapes center–surround receptive field structure in macaque cortical area MT. Cerebral Cortex, 19, 2466–2478. [CrossRef] [PubMed]
Araujo C. Kowler E. Pavel M. (2001). Eye movements during visual search: The costs of choosing the optimal path. Vision Research, 41, 3613–3625. [CrossRef] [PubMed]
Atick J. J. Redlich A. N. (1992). What does the retina know about natural scenes? Neural Computation, 4, 196–210. [CrossRef]
Avraham T. Yeshurun Y. Lindenbaum M. (2008). Predicting visual search performance by quantifying stimuli similarities. Journal of Vision, 8(4):9, 1–22, http://www.journalofvision.org/content/8/4/9, doi:10.1167/8.4.9. [PubMed] [Article] [CrossRef] [PubMed]
Balan P. F. Gottlieb J. (2009). Functional significance of nonspatial information in monkey lateral intraparietal area. Journal of Neuroscience, 29, 8166–8176. [CrossRef] [PubMed]
Balboa R. M. Grzywacz N. M. (2000). The role of early retinal lateral inhibition: More than maximizing luminance information. Visual Neuroscience, 17, 77–89. [CrossRef] [PubMed]
Baldassi S. Burr D. C. (2000). Feature-based integration of orientation signals in visual search. Vision Research, 40, 1293–1300. [CrossRef] [PubMed]
Baldassi S. Verghese P. (2002). Comparing integration rules in visual search. Journal of Vision, 2(8):3, 559–570, http://www.journalofvision.org/content/2/8/3, doi:10.1167/2.8.3. [PubMed] [Article] [CrossRef]
Basso M. A. Wurtz R. H. (1997). Modulation of neuronal activity by target uncertainty. Nature, 389, 66–69. [CrossRef] [PubMed]
Beutter B. R. Eckstein M. P. Stone L. S. (2003). Saccadic and perceptual performance in visual search tasks: I. Contrast detection and discrimination. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 20, 1341–1355. [CrossRef]
Bisley J. W. Goldberg M. E. (2010). Attention, intention, and priority in the parietal lobe. Annual Review of Neuroscience, 33, 1–21. [CrossRef] [PubMed]
Bisley J. W. Mirpour K. Arcizet F. Ong W. S. (2011). The role of the lateral intraparietal area in orienting attention and its implications for visual search. European Journal of Neuroscience, 33, 1982–1990. [CrossRef] [PubMed]
Blackwell H. R. (1946). Contrast thresholds of the human eye. Journal of the Optical Society of America, 36, 624–643. [CrossRef]
Bochud F. O. Abbey C. K. Eckstein M. P. (2004). Search for lesions in mammograms: Statistical characterization of observer responses. Medical Physics, 31, 24–36. [CrossRef] [PubMed]
Bond A. B. (1983). Visual search and selection of natural stimuli in the pigeon: The attention threshold hypothesis. Journal of Experimental Psychology: Animal Behavior Processes, 9, 292–306. [CrossRef] [PubMed]
Brady T. F. Chun M. M. (2007). Spatial constraints on learning in visual search: Modeling contextual cuing. Journal of Experimental Psychology: Human Perception and Performance, 33, 798–815. [CrossRef] [PubMed]
Brady T. F. Konkle T. Alvarez G. A. (2011). A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision, 11(5):4, 1–34, http://www.journalofvision.org/content/11/5/4, doi:10.1167/11.5.4. [PubMed] [Article] [CrossRef] [PubMed]
Bravo M. J. Farid H. (2009). The specificity of the search template. Journal of Vision, 9(1):34, 1–9, http://www.journalofvision.org/content/9/1/34, doi:10.1167/9.1.34. [PubMed] [Article] [CrossRef] [PubMed]
Brefczynski J. A. DeYoe E. A. (1999). A physiological correlate of the “spotlight” of visual attention. Nature Neuroscience, 2, 370–374. [CrossRef] [PubMed]
Brockmole J. R. Castelhano M. S. Henderson J. M. (2006). Contextual cueing in naturalistic scenes: Global and local contexts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 699–706. [CrossRef] [PubMed]
Brockmole J. R. Hambrick D. Z. Windisch D. J. Henderson J. M. (2008). The role of meaning in contextual cueing: Evidence from chess expertise. Quarterly Journal of Experimental Psychology, 61, 1886–1896. [CrossRef]
Brockmole J. R. Henderson J. M. (2006). Recognition and attention guidance during contextual cueing in real-world scenes: Evidence from eye movements. Quarterly Journal of Experimental Psychology, 59, 1177–1187. [CrossRef]
Brosnan T. Sun D.-W. (2004). Improving quality inspection of food products by computer vision—A review. Journal of Food Engineering, 61, 3–16. [CrossRef]
Bruce N. D. B. Tsotsos J. K. (2009). Saliency, attention, and visual search: An information theoretic approach. Journal of Vision, 9(3):5, 1–24, http://www.journalofvision.org/content/9/3/5, doi:10.1167/9.3.5. [PubMed] [Article] [CrossRef] [PubMed]
Bundesen C. (1990). A theory of visual attention. Psychological Review, 97, 523–547. [CrossRef] [PubMed]
Bundesen C. Habekost T. Kyllingsbaek S. (2005). A neural theory of visual attention: Bridging cognition and neurophysiology. Psychological Review, 112, 291–328. [CrossRef] [PubMed]
Burgess A. (1985). Visual signal detection: III. On Bayesian use of prior knowledge and cross correlation. Journal of the Optical Society of America A: Optics and Image Science, 2, 1498–1507. [CrossRef]
Burgess A. E. Ghandeharian H. (1984a). Visual signal detection: I. Ability to use phase information. Journal of the Optical Society of America A: Optics and Image Science, 1, 900–905. [CrossRef]
Burgess A. E. Ghandeharian H. (1984b). Visual signal detection: II. Signal-location identification. Journal of the Optical Society of America A: Optics and Image Science, 1, 906–910. [CrossRef]
Burgess A. E. Li X. Abbey C. K. (1997). Visual signal detectability with two noise components: Anomalous masking effects. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 14, 2420–2442. [CrossRef]
Busey T. Palmer J. (2008). Set-size effects for identification versus localization depend on the visual search task. Journal of Experimental Psychology: Human Perception and Performance, 34, 790–810. [CrossRef] [PubMed]
Cameron E. L. Tai J. C. Eckstein M. P. Carrasco M. (2004). Signal detection theory applied to three visual search tasks—Identification, yes/no detection and localization. Spatial Vision, 17, 295–325. [CrossRef] [PubMed]
Campos M. Breznen B. Bernheim K. Andersen R. A. (2005). Supplementary motor area encodes reward expectancy in eye-movement tasks. Journal of Neurophysiology, 94, 1325–1335. [CrossRef] [PubMed]
Carmi R. Itti L. (2006). Visual causes versus correlates of attentional selection in dynamic scenes. Vision Research, 46, 4333–4345. [CrossRef] [PubMed]
Carrasco M. (2011). Visual attention: The past 25 years. Vision Research, 51, 1484–1525. [CrossRef] [PubMed]
Carrasco M. Evert D. L. Chang I. Katz S. M. (1995). The eccentricity effect: Target eccentricity affects performance on conjunction searches. Perception & Psychophysics, 57, 1241–1261. [CrossRef] [PubMed]
Carrasco M. Talgar C. P. Cameron E. L. (2001). Characterizing visual performance fields: Effects of transient covert attention, spatial frequency, eccentricity, task and set size. Spatial Vision, 15, 61–75. [CrossRef] [PubMed]
Castelhano M. S. Heaven C. (2010). The relative contribution of scene context and target features to visual search in scenes. Attention, Perception & Psychophysics, 72, 1283–1297. [CrossRef] [PubMed]
Castelhano M. S. Heaven C. (2011). Scene context influences without scene gist: Eye movements guided by spatial associations in visual search. Psychonomic Bulletin & Review, 18, 890–896. [CrossRef] [PubMed]
Castella C. Eckstein M. P. Abbey C. K. Kinkel K. Verdun F. R. Saunders R. S. et al. (2009). Mass detection on mammograms: Influence of signal shape uncertainty on human and model observers. Journal of the Optical Society of America A, 26, 425–436. [CrossRef]
Chawla D. Rees G. Friston K. J. (1999). The physiological basis of attentional modulation in extrastriate visual areas. Nature Neuroscience, 2, 671–676. [CrossRef] [PubMed]
Chun M. M. (2000). Contextual cueing of visual attention. Trends in Cognitive Sciences, 4, 170–178. [CrossRef] [PubMed]
Chun M. M. Jiang Y. (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology, 36, 28–71. [CrossRef] [PubMed]
Chun M. M. Phelps E. A. (1999). Memory deficits for implicit contextual information in amnesic subjects with hippocampal damage. Nature Neuroscience, 2, 844–847. [CrossRef] [PubMed]
Cohen J. Y. Heitz R. P. Woodman G. F. Schall J. D. (2009). Neural basis of the set-size effect in frontal eye field: Timing of attention during visual search. Journal of Neurophysiology, 101, 1699–1704. [CrossRef] [PubMed]
Cohn T. E. Lasley D. J. (1974). Detectability of a luminance increment: Effect of spatial uncertainty. Journal of the Optical Society of America, 64, 1715–1719. [CrossRef] [PubMed]
Cohn T. E. Wardlaw J. C. (1985). Effect of large spatial uncertainty on foveal luminance increment detectability. Journal of the Optical Society of America A: Optics and Image Science, 2, 820–825. [CrossRef]
Colby C. L. Gattass R. Olson C. R. Gross C. G. (1988). Topographical organization of cortical afferents to extrastriate visual area PO in the macaque: A dual tracer study. Journal of Comparative Neurology, 269, 392–413. [CrossRef] [PubMed]
Corbetta M. Kincade J. M. Ollinger J. M. McAvoy M. P. Shulman G. L. (2000). Voluntary orienting is dissociated from target detection in human posterior parietal cortex. Nature Neuroscience, 3, 292–297. [CrossRef] [PubMed]
Corbetta M. Miezin F. Dobmeyer S. Shulman G. Petersen S. (1990). Attentional modulation of neural processing of shape, color, and velocity in humans. Science, 248, 1556–1559. [CrossRef] [PubMed]
Corbetta M. Miezin F. Dobmeyer S. Shulman G. Petersen S. (1991). Selective and divided attention during visual discriminations of shape, color, and speed: Functional anatomy by positron emission tomography. Journal of Neuroscience, 11, 2383–2402. [PubMed]
Corbetta M. Shulman G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3, 201–215. [CrossRef] [PubMed]
Daniel P. M. Whitteridge D. (1961). The representation of the visual field on the cerebral cortex in monkeys. The Journal of Physiology, 159, 203–221. [CrossRef] [PubMed]
Dassonville P. Bala J. K. (2004). Perception, action, and Roelofs effect: A mere illusion of dissociation. PLoS Biology, 2, e364.
Davis E. T. Graham N. (1981). Spatial frequency uncertainty effects in the detection of sinusoidal gratings. Vision Research, 21, 705–712. [CrossRef] [PubMed]
Davis E. T. Kramer P. Graham N. (1983). Uncertainty about spatial frequency, spatial position, or contrast of visual patterns. Perception & Psychophysics, 33, 20–28. [CrossRef] [PubMed]
Davis E. T. Shikano T. Peterson S. A. Keyes Michel R. (2003). Divided attention and visual search for simple versus complex features. Vision Research, 43, 2213–2232. [CrossRef] [PubMed]
Ding L. Hikosaka O. (2006). Comparison of reward modulation in the frontal eye field and caudate of the macaque. Journal of Neuroscience, 26, 6695–6703. [CrossRef] [PubMed]
Dorris M. C. Glimcher P. W. (2004). Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron, 44, 365–378. [CrossRef] [PubMed]
Dosher B. A. Han S. Lu Z.-L. (2010). Information-limited parallel processing in difficult heterogeneous covert visual search. Journal of Experimental Psychology: Human Perception and Performance, 36, 1128–1144. [CrossRef] [PubMed]
Droll J. A. Abbey C. K. Eckstein M. P. (2009). Learning cue validity through performance feedback. Journal of Vision, 9(2):18, 1–23, http://www.journalofvision.org/content/9/2/18, doi:10.1167/9.2.18. [PubMed] [Article] [CrossRef] [PubMed]
Droll J. A. Eckstein M. P. (2008). Expected object position of two hundred fifty observers predicts first fixations of seventy seven separate observers during search [Abstract]. Journal of Vision, 8(6):320, 320a, http://www.journalofvision.org/content/8/6/320, doi:10.1167/8.6.320. [CrossRef]
Druker M. Anderson B. (2010). Spatial probability aids visual stimulus discrimination. Frontiers in Human Neuroscience, 4, 63. [PubMed]
Duncan J. Humphreys G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458. [CrossRef] [PubMed]
Duncan R. O. Boynton G. M. (2003). Cortical magnification within human primary visual cortex correlates with acuity thresholds. Neuron, 38, 659–671. [CrossRef] [PubMed]
Eadie L. H. Taylor P. Gibson A. P. (2011). A systematic review of computer-assisted diagnosis in diagnostic cancer imaging. European Journal of Radiology.
Eckstein M. P. (1998). The lower efficiency for conjunctions is due to noise and not serial attentional processing. Psychological Science, 9, 111–118. [CrossRef]
Eckstein M. P. Abbey C. K. (2001). Model observers for signal-known-statistically tasks (SKS). Proceedings of the SPIE Medical Imaging, 4324, 91–102.
Eckstein M. P. Beutter B. R. Pham B. T. Shimozaki S. S. Stone L. S. (2007). Similar neural representations of the target for saccades and perception during search. Journal of Neuroscience, 27, 1266–1270. [CrossRef] [PubMed]
Eckstein M. P. Beutter B. R. Stone L. S. (2001). Quantifying the performance limits of human saccadic targeting during visual search. Perception, 30, 1389–1401. [CrossRef] [PubMed]
Eckstein M. P. Drescher B. A. Shimozaki S. S. (2006). Attentional cues in real scenes, saccadic targeting, and Bayesian priors. Psychological Science, 17, 973–980. [CrossRef]
Eckstein M. P. Peterson M. F. Pham B. T. Droll J. A. (2009). Statistical decision theory to relate neurons to behavior in the study of covert visual attention. Vision Research, 49, 1097–1128. [CrossRef] [PubMed]
Eckstein M. P. Pham B. T. Shimozaki S. S. (2004). The footprints of visual attention during search with 100% valid and 100% invalid cues. Vision Research, 44, 1193–1207. [CrossRef] [PubMed]
Eckstein M. P. Schoonveld W. Zhang S. (2010). Optimizing eye movements in search for rewards [Abstract]. Journal of Vision, 10(7):33, 33a, http://www.journalofvision.org/content/10/7/33, doi:10.1167/10.7.33. [CrossRef]
Eckstein M. P. Shimozaki S. S. Abbey C. K. (2002). The footprints of visual attention in the Posner cueing paradigm revealed by classification images. Journal of Vision, 2(1):3, 25–45, http://www.journalofvision.org/content/2/1/3, doi:10.1167/2.1.3. [PubMed] [Article] [CrossRef]
Eckstein M. P. Thomas J. P. Palmer J. Shimozaki S. S. (2000). A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays. Perception & Psychophysics, 62, 425–451. [CrossRef] [PubMed]
Eckstein M. P. Whiting J. S. (1996). Visual signal detection in structured backgrounds: I. Effect of number of possible spatial locations and signal contrast. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 13, 1777–1787. [CrossRef]
Eckstein M. P. Whiting J. S. Thomas J. P. (1996). Role of knowledge in human visual temporal integration in spatiotemporal noise. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 13, 1960–1968. [CrossRef]
Egner T. Monti J. M. P. Trittschuh E. H. Wieneke C. A. Hirsch J. Mesulam M.-M. (2008). Neural integration of top-down spatial and feature-based information in visual search. Journal of Neuroscience, 28, 6141–6151. [CrossRef] [PubMed]
Ehinger K. A. Brockmole J. R. (2008). The role of color in visual search in real-world scenes: Evidence from contextual cuing. Perception & Psychophysics, 70, 1366–1378. [CrossRef] [PubMed]
Ehinger K. A. Hidalgo-Sotelo B. Torralba A. Oliva A. (2009). Modeling search for people in 900 scenes: A combined source model of eye guidance. Visual Cognition, 17, 945–978. [CrossRef] [PubMed]
Eimer M. (1996). The N2pc component as an indicator of attentional selectivity. Electroencephalography and Clinical Neurophysiology, 99, 225–234. [CrossRef] [PubMed]
Eimer M. Kiss M. Nicholas S. (2011). What top-down task sets do for us: An ERP study on the benefits of advance preparation in visual search. Journal of Experimental Psychology: Human Perception and Performance.
Einhäuser W. Rutishauser U. Koch C. (2008). Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli. Journal of Vision, 8(2):2, 1–19, http://www.journalofvision.org/content/8/2/2, doi:10.1167/8.2.2. [PubMed] [Article] [CrossRef] [PubMed]
Eriksen C. W. Yeh Y. Y. (1985). Allocation of attention in the visual field. Journal of Experimental Psychology: Human Perception and Performance, 11, 583–597. [CrossRef] [PubMed]
Estes W. K. Taylor H. A. (1964). A detection method and probabilistic models for assessing information processing from brief visual displays. Proceedings of the National Academy of Sciences of the United States of America, 52, 446–454. [CrossRef] [PubMed]
Fecteau J. H. Munoz D. P. (2006). Salience, relevance, and firing: A priority map for target selection. Trends in Cognitive Sciences, 10, 382–390. [CrossRef] [PubMed]
Fenton J. J. Abraham L. Taplin S. H. Geller B. M. Carney P. A. D'Orsi C. et al. (2011). Effectiveness of computer-aided detection in community mammography practice. Journal of the National Cancer Institute, 103, 1152–1161. [CrossRef] [PubMed]
Ferrera V. P. Yanike M. Cassanello C. (2009). Frontal eye field neurons signal changes in decision criteria. Nature Neuroscience, 12, 1458–1462. [CrossRef] [PubMed]
Findlay J. M. (1997). Saccade target selection during visual search. Vision Research, 37, 617–631. [CrossRef] [PubMed]
Findlay J. M. Brown V. Gilchrist I. D. (2001). Saccade target selection in visual search: The effect of information from the previous fixation. Vision Research, 41, 87–95. [CrossRef] [PubMed]
Findlay J. M. Gilchrist I. D. (2003). Active vision: The psychology of looking and seeing (1st ed.). USA: Oxford University Press.
Findlay J. M. Walker R. (1999). A model of saccade generation based on parallel processing and competitive inhibition. Behavioral and Brain Sciences, 22, 661–674; discussion 674–721. [PubMed]
Fleck M. S. Mitroff S. R. (2007). Rare targets are rarely missed in correctable search. Psychological Science, 18, 943–947. [CrossRef] [PubMed]
Folk C. L. Remington R. W. Johnston J. C. (1992). Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030–1044. [CrossRef] [PubMed]
Folk C. L. Remington R. W. Wright J. H. (1994). The structure of attentional control: Contingent attentional capture by apparent motion, abrupt onset, and color. Journal of Experimental Psychology: Human Perception and Performance, 20, 317–329. [CrossRef] [PubMed]
Foulsham T. Underwood G. (2008). What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition. Journal of Vision, 8(2):6, 1–17, http://www.journalofvision.org/content/8/2/6, doi:10.1167/8.2.6. [PubMed] [Article] [CrossRef] [PubMed]
Gandhi S. P. Heeger D. J. Boynton G. M. (1999). Spatial attention affects brain activity in human primary visual cortex. Proceedings of the National Academy of Sciences, 96, 3314–3319. [CrossRef]
Gao D. Vasconcelos N. (2009). Decision-theoretic saliency: Computational principles, biological plausibility, and implications for neurophysiology and psychophysics. Neural Computation, 21, 239–271. [CrossRef] [PubMed]
Gegenfurtner K. R. Xing D. Scott B. H. Hawken M. J. (2003). A comparison of pursuit eye movement and perceptual performance in speed discrimination. Journal of Vision, 3(11):19, 865–876, http://www.journalofvision.org/content/3/11/19, doi:10.1167/3.11.19. [PubMed] [Article] [CrossRef]
Geisler W. S. Chou K.-L. (1995). Separation of low-level and high-level factors in complex tasks: Visual search. Psychological Review, 102, 356–378. [CrossRef] [PubMed]
Geisler W. S. Cormack L. (2011). Models of overt attention. In Liversedge S. P. Gilchrist I. D. Everling S. (Eds.), Oxford handbook of eye movements. New York: Oxford University Press.
Geisler W. S. Davila K. D. (1985). Ideal discriminators in spatial vision: Two-point stimuli. Journal of the Optical Society of America A, 2, 1483–1497. [CrossRef]
Geng J. J. Behrmann M. (2005). Spatial probability as an attentional cue in visual search. Perception & Psychophysics, 67, 1252–1268. [CrossRef] [PubMed]
Geyer T. Müller H. J. Krummenacher J. (2008). Expectancies modulate attentional capture by salient color singletons. Vision Research, 48, 1315–1326. [CrossRef] [PubMed]
Giesbrecht B. Weissman D. H. Woldorff M. G. Mangun G. R. (2006). Pre-target activity in visual cortex predicts behavioral performance on spatial and feature attention tasks. Brain Research, 1080, 63–72. [CrossRef] [PubMed]
Giesbrecht B. Woldorff M. G. Song A. W. Mangun G. R. (2003). Neural mechanisms of top-down control during spatial and feature attention. NeuroImage, 19, 496–512. [CrossRef] [PubMed]
Gold J. I. Shadlen M. N. (2001). Neural computations that underlie decisions about sensory stimuli. Trends in Cognitive Sciences, 5, 10–16. [CrossRef] [PubMed]
Goldberg M. E. Bisley J. W. Powell K. D. Gottlieb J. (2006). Saccades, salience and attention: The role of the lateral intraparietal area in visual behavior. Progress in Brain Research, 155, 157–175. [PubMed]
Goldberg M. E. Bisley J. Powell K. D. Gottlieb J. Kusunoki M. (2002). The role of the lateral intraparietal area of the monkey in the generation of saccades and visuospatial attention. Annals of the New York Academy of Sciences, 956, 205–215. [CrossRef] [PubMed]
Goodale M. A. Milner A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25. [CrossRef] [PubMed]
Goodale M. A. Milner A. D. Jakobson L. S. Carey D. P. (1991). A neurological dissociation between perceiving objects and grasping them. Nature, 349, 154–156. [CrossRef] [PubMed]
Gottlieb J. Kusunoki M. Goldberg M. E. (1998). The representation of visual salience in monkey parietal cortex. Nature, 391, 481–484. [CrossRef] [PubMed]
Graham D. J. Chandler D. M. Field D. J. (2006). Can the theory of “whitening” explain the center–surround properties of retinal ganglion cell receptive fields? Vision Research, 46, 2901–2913. [CrossRef] [PubMed]
Green D. M. Swets J. A. (1989). Signal detection theory and psychophysics. Los Altos, CA: Peninsula Publishing.
Greenberg A. S. Esterman M. Wilson D. Serences J. T. Yantis S. (2010). Control of spatial and feature-based attention in frontoparietal cortex. Journal of Neuroscience, 30, 14330–14339. [CrossRef] [PubMed]
Guo F. Das K. Giesbrecht B. Eckstein M. P. (2010). Neural correlates of visual search in natural scenes [Abstract]. Journal of Vision, 10(7):1313, 1313a, http://www.journalofvision.org/content/10/7/1313, doi:10.1167/10.7.1313. [CrossRef]
Gur D. Bandos A. I. Fuhrman C. R. Klym A. H. King J. L. Rockette H. E. (2007). The prevalence effect in a laboratory environment: Changing the confidence ratings. Academic Radiology, 14, 49–53. [CrossRef] [PubMed]
Gur D. Rockette H. E. Armfield D. R. Blachar A. Bogan J. K. Brancatelli G. et al. (2003). Prevalence effect in a laboratory environment. Radiology, 228, 10–14. [CrossRef] [PubMed]
Hayhoe M. Ballard D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9, 188–194. [CrossRef] [PubMed]
He P. Y. Kowler E. (1989). The role of location probability in the programming of saccades: Implications for “center-of-gravity” tendencies. Vision Research, 29, 1165–1181. [CrossRef] [PubMed]
Heinze H.-J. Luck S. J. Munte T. F. Gös A. Mangun G. R. Hillyard S. A. (1994). Attention to adjacent and separate positions in space: An electrophysiological analysis. Perception & Psychophysics, 56, 42–52. [CrossRef] [PubMed]
Hickey C. Di Lollo V. McDonald J. J. (2009). Electrophysiological indices of target and distractor processing in visual search. Journal of Cognitive Neuroscience, 21, 760–775. [CrossRef] [PubMed]
Hidalgo-Sotelo B. Oliva A. Torralba A. (2005). Human learning of contextual priors for object search: Where does the time go? In Computer Vision and Pattern Recognition Workshop (p. 86). Los Alamitos, CA: IEEE Computer Society.
Hikosaka O. Nakamura K. Nakahara H. (2006). Basal Ganglia orient eyes to reward. Journal of Neurophysiology, 95, 567–584. [CrossRef] [PubMed]
Hilimire M. R. Mounts J. R. W. Parks N. A. Corballis P. M. (2011). Dynamics of target and distractor processing in visual search: Evidence from event-related brain potentials. Neuroscience Letters, 495, 196–200. [CrossRef] [PubMed]
Hillstrom A. P. Yantis S. (1994). Visual motion and attentional capture. Perception & Psychophysics, 55, 399–411. [CrossRef] [PubMed]
Hoffman J. E. (1978). Search through a sequentially presented visual display. Perception & Psychophysics, 23, 1–11. [CrossRef] [PubMed]
Hollingworth A. (2009). Two forms of scene memory guide visual search: Memory for scene context and memory for the binding of target object to scene location. Visual Cognition, 17, 273–291. [CrossRef]
Hopfinger J. B. Buonocore M. H. Mangun G. R. (2000). The neural mechanisms of top-down attentional control. Nature Neuroscience, 3, 284–291. [CrossRef] [PubMed]
Hosoya T. Baccus S. A. Meister M. (2005). Dynamic predictive coding by the retina. Nature, 436, 71–77. [CrossRef] [PubMed]
Husain M. Nachev P. (2007). Space and the parietal cortex. Trends in Cognitive Sciences, 11, 30–36. [CrossRef] [PubMed]
Ikeda T. Hikosaka O. (2003). Reward-dependent gain and bias of visual responses in primate superior colliculus. Neuron, 39, 693–700. [CrossRef] [PubMed]
Ipata A. E. Gee A. L. Gottlieb J. Bisley J. W. Goldberg M. E. (2006). LIP responses to a popout stimulus are reduced if it is overtly ignored. Nature Neuroscience, 9, 1071–1076. [CrossRef] [PubMed]
Itti L. Baldi P. (2009). Bayesian surprise attracts human attention. Vision Research, 49, 1295–1306. [CrossRef] [PubMed]
Itti L. Koch C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506. [CrossRef] [PubMed]
Itti L. Koch C. Niebur E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254–1259. [CrossRef]
Jerde T. A. Ikkai A. Curtis C. E. (2011). The search for the neural mechanisms of the set size effect. The European Journal of Neuroscience, 33, 2028–2034. [CrossRef] [PubMed]
Jiang Y. Wagner L. C. (2004). What is learned in spatial contextual cuing-configuration or individual locations? Perception & Psychophysics, 66, 454–463. [CrossRef] [PubMed]
Jonides J. Yantis S. (1988). Uniqueness of abrupt visual onset in capturing attention. Perception & Psychophysics, 43, 346–354. [CrossRef] [PubMed]
Judy P. F. (1995). Observer detection efficiency with target size uncertainty. Proceedings of SPIE (pp. 10–17). Paper presented at the Medical Imaging 1995: Image Perception, San Diego, CA, USA.
Kastner S. De Weerd P. Desimone R. Ungerleider L. G. (1998). Mechanisms of directed attention in the human extrastriate cortex as revealed by functional MRI. Science, 282, 108–111. [CrossRef] [PubMed]
Kastner S. Pinsk M. A. De Weerd P. Desimone R. Ungerleider L. G. (1999). Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron, 22, 751–761. [CrossRef] [PubMed]
Kelley T. A. Serences J. T. Giesbrecht B. Yantis S. (2008). Cortical mechanisms for shifting and holding visuospatial attention. Cerebral Cortex, 18, 114–125. [CrossRef] [PubMed]
Kienzle W. Franz M. O. Schölkopf B. Wichmann F. A. (2009). Center–surround patterns emerge as optimal predictors for human saccade targets. Journal of Vision, 9(5):7, 1–15, http://www.journalofvision.org/content/9/5/7, doi:10.1167/9.5.7. [PubMed] [Article] [CrossRef] [PubMed]
Kinchla R. A. (1974). Detecting target elements in multielement arrays: A confusability model. Perception & Psychophysics, 15, 149–158. [CrossRef]
Kiss M. Driver J. Eimer M. (2009). Reward priority of visual target singletons modulates event-related potential signatures of attentional selection. Psychological Science, 20, 245–251. [CrossRef] [PubMed]
Klein R. Farrell M. (1989). Search performance without eye movements. Perception & Psychophysics, 46, 476–482. [CrossRef] [PubMed]
Koch C. Ullman S. (1985). Shifts in selective visual attention: Towards the underlying neural circuitry. Human Neurobiology, 4, 219–227. [PubMed]
Krauzlis R. J. Dill N. (2002). Neural correlates of target choice for pursuit and saccades in the primate superior colliculus. Neuron, 35, 355–363. [CrossRef] [PubMed]
Krauzlis R. J. Stone L. S. (1999). Tracking with the mind's eye. Trends in Neurosciences, 22, 544–550. [CrossRef] [PubMed]
Kundel H. L. Wright D. J. (1969). The influence of prior knowledge on visual search strategies during the viewing of chest radiographs. Radiology, 93, 315–320. [CrossRef] [PubMed]
Land M. F. (1999). Motion and vision: Why animals move their eyes. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, 185, 341–352. [CrossRef]
Land M. F. (2009). Vision, eye movements, and natural behavior. Visual Neuroscience, 26, 51–62. [CrossRef] [PubMed]
Land M. F. Hayhoe M. (2001). In what ways do eye movements contribute to everyday activities? Vision Research, 41, 3559–3565. [CrossRef] [PubMed]
Land M. F. Nilsson D. (2002). Animal eyes. Oxford, UK: Oxford University Press.
Lau B. Glimcher P. W. (2007). Action and outcome encoding in the primate caudate nucleus. Journal of Neuroscience, 27, 14502–14514. [CrossRef] [PubMed]
Legge G. E. Klitz T. S. Tjan B. S. (1997). Mr Chips: An ideal-observer model of reading. Psychological Review, 104, 524–553. [CrossRef] [PubMed]
Leon M. I. Shadlen M. N. (1999). Effect of expected reward magnitude on the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron, 24, 415–425. [CrossRef] [PubMed]
Levi D. M. Klein S. A. Aitsebaomo A. P. (1985). Vernier acuity, crowding and cortical magnification. Vision Research, 25, 963–977. [CrossRef] [PubMed]
Li Z. (2002). A saliency map in primary visual cortex. Trends in Cognitive Sciences, 6, 9–16. [CrossRef] [PubMed]
Linker E. Moore M. E. Galanter E. (1964). Taste thresholds, detection models, and disparate results. Journal of Experimental Psychology, 67, 59–66. [CrossRef] [PubMed]
Liston D. B. Stone L. S. (2008). Effects of prior information and reward on oculomotor and perceptual choices. Journal of Neuroscience, 28, 13866–13875. [CrossRef] [PubMed]
Lovejoy L. P. Krauzlis R. J. (2010). Inactivation of primate superior colliculus impairs covert selection of signals for perceptual judgments. Nature Neuroscience, 13, 261–266. [CrossRef] [PubMed]
Luck S. J. Hillyard S. A. (1990). Electrophysiological evidence for parallel and serial processing during visual search. Perception & Psychophysics, 48, 603–617. [CrossRef] [PubMed]
Luck S. J. Hillyard S. A. (1994a). Spatial filtering during visual search: Evidence from human electrophysiology. Journal of Experimental Psychology: Human Perception and Performance, 20, 1000–1014. [CrossRef]
Luck S. J. Hillyard S. A. (1994b). Electrophysiological correlates of feature analysis during visual search. Psychophysiology, 31, 291–308. [CrossRef]
Luck S. J. Hillyard S. A. Mouloua M. Woldorff M. G. Clark V. P. Hawkins H. L. (1994). Effects of spatial cuing on luminance detectability: Psychophysical and electrophysiological evidence for early selection. Journal of Experimental Psychology: Human Perception and Performance, 20, 887–904. [CrossRef] [PubMed]
Ludwig C. J. H. Eckstein M. P. Beutter B. R. (2007). Limited flexibility in the filter underlying saccadic targeting. Vision Research, 47, 280–288. [CrossRef] [PubMed]
Ludwig C. J. H. Ranson A. Gilchrist I. D. (2008). Oculomotor capture by transient events: A comparison of abrupt onsets, offsets, motion, and flicker. Journal of Vision, 8(14):11, 1–16, http://www.journalofvision.org/content/8/14/11, doi:10.1167/8.14.11. [PubMed] [Article] [CrossRef] [PubMed]
Ma W. J. Navalpakkam V. Beck J. M. van den Berg R. Pouget A. (2011). Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14, 783–790. [CrossRef] [PubMed]
Mack S. C. Eckstein M. P. (2011). Object co-occurrence serves as a contextual cue to guide and facilitate visual search in a natural viewing environment. Journal of Vision, 11(9):9. [CrossRef] [PubMed]
Malcolm G. L. Henderson J. M. (2009). The effects of target template specificity on visual search in real-world scenes: Evidence from eye movements. Journal of Vision, 9(11):8, 1–13, http://www.journalofvision.org/content/9/11/8, doi:10.1167/9.11.8. [PubMed] [Article] [CrossRef] [PubMed]
Malcolm G. L. Henderson J. M. (2010). Combining top-down processes to guide eye movements during real-world scene search. Journal of Vision, 10(2):4, 1–11, http://www.journalofvision.org/content/10/2/4, doi:10.1167/10.2.4. [PubMed] [Article] [CrossRef] [PubMed]
Maljkovic V. Nakayama K. (1996). Priming of pop-out: II. The role of position. Perception & Psychophysics, 58, 977–991. [CrossRef] [PubMed]
Mangun G. R. Hillyard S. A. (1990). Allocation of visual attention to spatial locations: Tradeoff functions for event-related brain potentials and detection performance. Perception & Psychophysics, 47, 532–550. [CrossRef] [PubMed]
Mangun G. R. Hillyard S. A. (1991). Modulations of sensory-evoked brain potentials indicate changes in perceptual processing during visual-spatial priming. Journal of Experimental Psychology: Human Perception and Performance, 17, 1057–1074. [CrossRef] [PubMed]
Martinez A. Anllo-Vento L. Sereno M. I. Frank L. R. Buxton R. B. Dubowitz D. J. et al. (1999). Involvement of striate and extrastriate visual cortical areas in spatial attention. Nature Neuroscience, 2, 364–369. [CrossRef] [PubMed]
Maunsell J. H. R. Cook E. P. (2002). The role of attention in visual processing. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 357, 1063–1072. [CrossRef]
Mazer J. A. Gallant J. L. (2003). Goal-related activity in V4 during free viewing visual search: Evidence for a ventral stream visual salience map. Neuron, 40, 1241–1250. [CrossRef] [PubMed]
McGowan J. W. Kowler E. Sharma A. Chubb C. (1998). Saccadic localization of random dot targets. Vision Research, 38, 895–909. [CrossRef] [PubMed]
McPeek R. M. Keller E. L. (2002). Saccade target selection in the superior colliculus during a visual search task. Journal of Neurophysiology, 88, 2019–2034. [PubMed]
McPeek R. M. Keller E. L. (2004). Deficits in saccade target selection after inactivation of superior colliculus. Nature Neuroscience, 7, 757–763. [CrossRef] [PubMed]
Mennie N. Hayhoe M. Sullivan B. (2007). Look-ahead fixations: Anticipatory eye movements in natural tasks. Experimental Brain Research, 179, 427–442. [CrossRef]
Miller G. A. (1956). The magical number seven plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97. [CrossRef] [PubMed]
Miller J. (1988). Components of the location probability effect in visual search tasks. Journal of Experimental Psychology: Human Perception and Performance, 14, 453–471. [CrossRef] [PubMed]
Miller M. B. Handy T. C. Cutler J. Inati S. Wolford G. L. (2001). Brain activations associated with shifts in response criterion on a recognition test. Canadian Journal of Experimental Psychology/Revue Canadienne De Psychologie Expérimentale, 55, 162–173. [CrossRef]
Milner A. D. (1997). Vision without knowledge. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 352, 1249–1256. [CrossRef]
Milstein D. M. Dorris M. C. (2007). The influence of expected value on saccadic preparation. Journal of Neuroscience, 27, 4810–4818. [CrossRef] [PubMed]
Monosov I. E. Thompson K. G. (2009). Frontal eye field activity enhances object identification during covert visual search. Journal of Neurophysiology, 102, 3656–3672. [CrossRef] [PubMed]
Motter B. C. (1994). Neural correlates of feature selective memory and pop-out in extrastriate area V4. Journal of Neuroscience, 14, 2190–2199. [PubMed]
Murray R. F. (2011). Classification images: A review. Journal of Vision, 11(5):2, 1–25, http://www.journalofvision.org/content/11/5/2, doi:10.1167/11.5.2. [PubMed] [Article] [CrossRef] [PubMed]
Nagy A. L. Neriani K. E. Young T. L. (2005). Effects of target and distractor heterogeneity on search for a color target. Vision Research, 45, 1885–1899. [CrossRef] [PubMed]
Nagy A. L. Thomas G. (2003). Distractor heterogeneity, attention, and color in visual search. Vision Research, 43, 1541–1552. [CrossRef] [PubMed]
Najemnik J. Geisler W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387–391. [CrossRef] [PubMed]
Najemnik J. Geisler W. S. (2008). Eye movement statistics in humans are consistent with an optimal search strategy. Journal of Vision, 8(3):4, 1–14, http://www.journalofvision.org/content/8/3/4, doi:10.1167/8.3.4. [PubMed] [Article] [CrossRef] [PubMed]
Najemnik J. Geisler W. S. (2009). Simple summation rule for optimal fixation selection in visual search. Vision Research, 49, 1286–1294. [CrossRef] [PubMed]
Nakayama K. Martini P. (2011). Situating visual search. Vision Research, 51, 1526–1537. [CrossRef] [PubMed]
Navalpakkam V. Itti L. (2007). Search goal tunes visual features optimally. Neuron, 53, 605–617. [CrossRef] [PubMed]
Navalpakkam V. Koch C. Perona P. (2009). Homo economicus in visual search. Journal of Vision, 9(1):31, 1–16, http://www.journalofvision.org/content/9/1/31, doi:10.1167/9.1.31. [PubMed] [Article] [CrossRef] [PubMed]
Navalpakkam V. Koch C. Rangel A. Perona P. (2010). Optimal reward harvesting in complex perceptual environments. Proceedings of the National Academy of Sciences of the United States of America, 107, 5232–5237. [CrossRef] [PubMed]
Neider M. B. Zelinsky G. J. (2006). Scene context guides eye movements during visual search. Vision Research, 46, 614–621. [CrossRef] [PubMed]
Neisser U. (1964). Visual search. Scientific American, 210, 94–102. [CrossRef] [PubMed]
Oliva A. Torralba A. Castelhano M. S. Henderson J. M. (2003). Top-down control of visual attention in object detection. In Proceedings of the 2003 IEEE International Conference on Image Processing (ICIP 2003) (vol. 1, pp. I-253–I-256).
Olson I. R. Chun M. M. (2001). Temporal contextual cuing of visual attention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 1299–1313. [CrossRef] [PubMed]
Palmer E. M. Fencsik D. E. Flusberg S. J. Horowitz T. S. Wolfe J. M. (2011). Signal detection evidence for limited capacity in visual search. Attention, Perception & Psychophysics, 73, 2413–2424. [CrossRef] [PubMed]
Palmer J. (1994). Set-size effects in visual search: The effect of attention is independent of the stimulus for simple tasks. Vision Research, 34, 1703–1721. [CrossRef] [PubMed]
Palmer J. Ames C. T. Lindsey D. T. (1993). Measuring the effect of attention on simple visual search. Journal of Experimental Psychology: Human Perception and Performance, 19, 108–130. [CrossRef] [PubMed]
Palmer J. Verghese P. Pavel M. (2000). The psychophysics of visual search. Vision Research, 40, 1227–1268. [CrossRef] [PubMed]
Parker A. J. Newsome W. T. (1998). Sense and the single neuron: Probing the physiology of perception. Annual Review of Neuroscience, 21, 227–277. [CrossRef] [PubMed]
Parkhurst D. Law K. Niebur E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42, 107–123. [CrossRef] [PubMed]
Peterson M. S. Kramer A. F. (2001). Attentional guidance of the eyes by contextual information and abrupt onsets. Perception & Psychophysics, 63, 1239–1249. [CrossRef] [PubMed]
Peterson W. Birdsall T. Fox W. (1954). The theory of signal detectability. IRE Professional Group on Information Theory, 4, 171–212. [CrossRef]
Platt M. L. Glimcher P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature, 400, 233–238. [CrossRef] [PubMed]
Põder E. (1999). Search for feature and for relative position: Measurement of capacity limitations. Vision Research, 39, 1321–1327. [CrossRef] [PubMed]
Pollmann S. Manginelli A. A. (2009). Anterior prefrontal involvement in implicit contextual change detection. Brain Research, 1263, 87–92. [CrossRef] [PubMed]
Pomplun M. (2006). Saccadic selectivity in complex visual search displays. Vision Research, 46, 1886–1900. [CrossRef] [PubMed]
Posner M. I. Snyder C. R. Davidson B. J. (1980). Attention and the detection of signals. Journal of Experimental Psychology, 109, 160–174. [CrossRef] [PubMed]
Preston A. R. Gabrieli J. D. E. (2008). Dissociation between explicit memory and configural memory in the human medial temporal lobe. Cerebral Cortex, 18, 2192–2207. [CrossRef] [PubMed]
Ptak R. (2011). The frontoparietal attention network of the human brain: Action, saliency, and a priority map of the environment. The Neuroscientist.
Rajashekar U. Bovik A. C. Cormack L. K. (2006). Visual search in noise: Revealing the influence of structural cues by gaze-contingent classification image analysis. Journal of Vision, 6(4):7, 379–386, http://www.journalofvision.org/content/6/4/7, doi:10.1167/6.4.7. [PubMed] [Article] [CrossRef]
Rao R. P. N. Zelinsky G. J. Hayhoe M. M. Ballard D. H. (2002). Eye movements in iconic visual search. Vision Research, 42, 1447–1463. [CrossRef] [PubMed]
Rao V. M. Levin D. C. Parker L. Cavanaugh B. Frangos A. J. Sunshine J. H. (2010). How widely is computer-aided detection used in screening and diagnostic mammography? Journal of the American College of Radiology, 7, 802–805. [CrossRef] [PubMed]
Renninger L. W. Coughlan J. Verghese P. Malik J. (2005). An information maximization model of eye movements. Advances in Neural Information Processing Systems, 17, 1121–1128. [PubMed]
Renninger L. W. Verghese P. Coughlan J. (2007). Where to look next? Eye movements reduce local uncertainty. Journal of Vision, 7(3):6, 1–17, http://www.journalofvision.org/content/7/3/6, doi:10.1167/7.3.6. [PubMed] [Article] [CrossRef] [PubMed]
Ress D. Backus B. T. Heeger D. J. (2000). Activity in primary visual cortex predicts performance in a visual detection task. Nature Neuroscience, 3, 940–945. [CrossRef] [PubMed]
Rich A. N. Kunar M. A. Van Wert M. J. Hidalgo-Sotelo B. Horowitz T. S. Wolfe J. M. (2008). Why do we miss rare targets? Exploring the boundaries of the low prevalence effect. Journal of Vision, 8(15):15, 1–17, http://www.journalofvision.org/content/8/15/15, doi:10.1167/8.15.15. [PubMed] [Article] [CrossRef] [PubMed]
Roesch M. R. Olson C. R. (2003). Impact of expected reward on neuronal activity in prefrontal cortex, frontal and supplementary eye fields and premotor cortex. Journal of Neurophysiology, 90, 1766–1789. [CrossRef] [PubMed]
Rolland J. P. Barrett H. H. (1992). Effect of random background inhomogeneity on observer detection performance. Journal of the Optical Society of America A: Optics and Image Science, 9, 649–658. [CrossRef]
Rosenholtz R. (2001). Visual search for orientation among heterogeneous distractors: Experimental results and implications for signal-detection theory models of search. Journal of Experimental Psychology: Human Perception and Performance, 27, 985–999. [CrossRef] [PubMed]
Rosenholtz R. Li Y. Nakano L. (2007). Measuring visual clutter. Journal of Vision, 7(2):17, 1–22, http://www.journalofvision.org/content/7/2/17, doi:10.1167/7.2.17. [PubMed] [Article] [CrossRef] [PubMed]
Rovamo J. Leinonen L. Laurinen P. Virsu V. (1984). Temporal integration and contrast sensitivity in foveal and peripheral vision. Perception, 13, 665–674. [CrossRef] [PubMed]
Schacter D. Tulving E. (1994). Memory systems. Cambridge, MA: MIT Press.
Schall J. D. (1995). Neural basis of saccade target selection. Reviews in the Neurosciences, 6, 63–85. [CrossRef] [PubMed]
Schall J. D. (2004). On the role of frontal eye field in guiding attention and saccades. Vision Research, 44, 1453–1467. [CrossRef] [PubMed]
Schall J. D. Hanes D. P. (1993). Neural basis of saccade target selection in frontal eye field during visual search. Nature, 366, 467–469. [CrossRef] [PubMed]
Schoonveld W. Shimozaki S. S. Eckstein M. P. (2007). Optimal observer model of single-fixation oddity search predicts a shallow set-size function. Journal of Vision, 7(10):1, 1–16, http://www.journalofvision.org/content/7/10/1, doi:10.1167/7.10.1. [PubMed] [Article] [CrossRef] [PubMed]
Schreij D. Owens C. Theeuwes J. (2008). Abrupt onsets capture attention independent of top-down control settings. Perception & Psychophysics, 70, 208–218. [CrossRef] [PubMed]
Schubö A. Akyürek E. G. Lin E.-J. Vallines I. (2011). Cortical mechanisms of visual context processing in singleton search. Neuroscience Letters, 502, 46–51. [CrossRef] [PubMed]
Schubö A. Schröger E. Meinecke C. Müller H. J. (2007). Attentional resources and pop-out detection in search displays. Neuroreport, 18, 1589–1593. [CrossRef] [PubMed]
Schubö A. Wykowska A. Müller H. J. (2007). Detecting pop-out targets in contexts of varying homogeneity: Investigating homogeneity coding with event-related brain potentials (ERPs). Brain Research, 1138, 136–147. [CrossRef] [PubMed]
Scialfa C. T. Joffe K. M. (1998). Response times and eye movements in feature and conjunction search as a function of target eccentricity. Perception & Psychophysics, 60, 1067–1082. [CrossRef] [PubMed]
Shaw M. L. (1980). Identifying attentional and decision making components in information processing. In Nickerson R. S. (Ed.), Attention and Performance VIII (pp. 277–296). Hillsdale, NJ: Erlbaum.
Shaw M. L. (1984). Division of attention among spatial locations: A fundamental difference between detection of letters and detection of luminance increments. In Bouma H. Bouwhuis D. (Eds.), Attention and Performance X (pp. 106–121). Hillsdale, NJ: Erlbaum.
Shiffrin R. M. Gardner G. T. (1972). Visual processing capacity and attentional control. Journal of Experimental Psychology, 93, 72–82. [CrossRef] [PubMed]
Shiffrin R. M. Schneider W. (1984). Automatic and controlled processing revisited. Psychological Review, 91, 269–276. [CrossRef] [PubMed]
Shimozaki S. S. Eckstein M. P. Abbey C. K. (2003a). An ideal observer with channels versus feature-independent processing of spatial frequency and orientation in visual search performance. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 20, 2197–2215. [CrossRef]
Shimozaki S. S. Eckstein M. P. Abbey C. K. (2003b). Comparison of two weighted integration models for the cueing task: Linear and likelihood. Journal of Vision, 3(3):3, 209–229, http://www.journalofvision.org/content/3/3/3, doi:10.1167/3.3.3. [PubMed] [Article] [CrossRef]
Shulman G. L. Ollinger J. M. Akbudak E. Conturo T. E. Snyder A. Z. Petersen S. E. et al. (1999). Areas involved in encoding and applying directional expectations to moving objects. Journal of Neuroscience, 19, 9480–9496. [PubMed]
Slagter H. A. Giesbrecht B. Kok A. Weissman D. H. Kenemans J. L. Woldorff M. G. et al. (2007). fMRI evidence for both generalized and specialized components of attentional control. Brain Research, 1177, 90–102. [CrossRef] [PubMed]
Smith P. L. Ratcliff R. (2009). An integrated theory of attention and decision making in visual signal detection. Psychological Review, 116, 283–317. [CrossRef] [PubMed]
Snodgrass J. G. Townsend J. T. (1980). Comparing parallel and serial models: Theory and implementation. Journal of Experimental Psychology: Human Perception and Performance, 6, 330–354. [CrossRef]
Snyder L. H. Batista A. P. Andersen R. A. (2000). Intention-related activity in the posterior parietal cortex: A review. Vision Research, 40, 1433–1441. [CrossRef] [PubMed]
Sohn J.-W. Lee D. (2006). Effects of reward expectancy on sequential eye movements in monkeys. Neural Networks, 19, 1181–1191. [CrossRef] [PubMed]
Solomon J. A. (2002). Noise reveals visual mechanisms of detection and discrimination. Journal of Vision, 2(1):7, 105–120, http://www.journalofvision.org/content/2/1/7, doi:10.1167/2.1.7. [PubMed] [Article] [CrossRef]
Sperling G. Dosher B. A. (1986). Strategy optimization in human information processing. In Boff K. R. Kaufman L. Thomas J. P. (Eds.), Handbook of perception and human performance: Volume 1. Sensory processes and perception. New York: John Wiley and Sons.
Srinivasan M. V. Laughlin S. B. Dubs A. (1982). Predictive coding: A fresh view of inhibition in the retina. Proceedings of the Royal Society of London B: Biological Sciences, 216, 427–459.
Stanley R. M. II (1988). To fool a glass eye: Camouflage versus photoreconnaissance in World War II. Washington, D.C.: Smithsonian Institution Press.
Stone L. S. Krauzlis R. J. (2003). Shared motion signals for human perceptual decisions and oculomotor actions. Journal of Vision, 3(11):7, 725–736, http://www.journalofvision.org/content/3/11/7, doi:10.1167/3.11.7. [PubMed] [Article] [CrossRef]
Stritzke M. Trommershäuser J. Gegenfurtner K. R. (2009). Effects of salience and reward information during saccadic decisions under risk. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 26, B1–B13. [CrossRef]
Sugrue L. P. Corrado G. S. Newsome W. T. (2004). Matching behavior and the representation of value in the parietal cortex. Science, 304, 1782–1787. [CrossRef] [PubMed]
Swensson R. G. Judy P. F. (1981). Detection of noisy visual targets: Models for the effects of spatial uncertainty and signal-to-noise ratio. Perception & Psychophysics, 29, 521–534. [CrossRef] [PubMed]
Tanner W. P., Jr. Swets J. A. Green D. M. (1956). Some general properties of the hearing mechanism (University of Michigan: Electronic Defense Group Technical Report).
Tatler B. W. (2007). The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision, 7(14):4, 1–17, http://www.journalofvision.org/content/7/14/4, doi:10.1167/7.14.4. [PubMed] [Article] [CrossRef] [PubMed]
Tatler B. W. Hayhoe M. M. Land M. F. Ballard D. H. (2011). Eye guidance in natural vision: Reinterpreting salience. Journal of Vision, 11(5):5, 1–23, http://www.journalofvision.org/content/11/5/5, doi:10.1167/11.5.5. [PubMed] [Article] [CrossRef] [PubMed]
Tavassoli A. van der Linde I. Bovik A. C. Cormack L. K. (2009). Eye movements selective for spatial frequency and orientation during active visual search. Vision Research, 49, 173–181. [CrossRef] [PubMed]
Theeuwes J. (1991). Exogenous and endogenous control of attention: The effect of visual onsets and offsets. Perception & Psychophysics, 49, 83–90. [CrossRef] [PubMed]
Theeuwes J. Burger R. (1998). Attentional control during visual search: The effect of irrelevant singletons. Journal of Experimental Psychology: Human Perception and Performance, 24, 1342–1353. [CrossRef] [PubMed]
Thompson K. G. Bichot N. P. (2005). A visual salience map in the primate frontal eye field. Progress in Brain Research, 147, 251–262. [PubMed]
Thompson K. G. Bichot N. P. Schall J. D. (1997). Dissociation of visual discrimination from saccade programming in macaque frontal eye field. Journal of Neurophysiology, 77, 1046–1050. [PubMed]
Tinbergen L. (1960). The natural control of insects in pine woods, Vol. I: Factors influencing the intensity of predation by songbirds. Archives Neerlandaises de Zoologie, 13, 265–343. [CrossRef]
Tolhurst D. J. Movshon J. A. Dean A. F. (1983). The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Research, 23, 775–785. [CrossRef] [PubMed]
Torralba A. Oliva A. Castelhano M. S. Henderson J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 766–786. [CrossRef] [PubMed]
Toth L. J. Assad J. A. (2002). Dynamic coding of behaviourally relevant stimuli in parietal cortex. Nature, 415, 165–168. [CrossRef] [PubMed]
Townsend J. T. (1972). Some results concerning the identifiability of parallel and serial processes. British Journal of Mathematical and Statistical Psychology, 25, 168–199. [CrossRef]
Treisman A. (1991). Search, similarity, and integration of features between and within dimensions. Journal of Experimental Psychology: Human Perception and Performance, 17, 652–676. [CrossRef] [PubMed]
Treisman A. Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136. [CrossRef] [PubMed]
Treisman A. Gormican S. (1988). Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95, 15–47. [CrossRef] [PubMed]
Treisman A. Sato S. (1990). Conjunction search revisited. Journal of Experimental Psychology: Human Perception and Performance, 16, 459–478. [CrossRef] [PubMed]
Trommershäuser J. Glimcher P. W. Gegenfurtner K. R. (2009). Visual processing, learning and feedback in the primate eye movement system. Trends in Neurosciences, 32, 583–590. [CrossRef] [PubMed]
Turk-Browne N. B. Scholl B. J. Johnson M. K. Chun M. M. (2010). Implicit perceptual anticipation triggered by statistical learning. Journal of Neuroscience, 30, 11177–11187. [CrossRef] [PubMed]
Ungerleider L. G. Mishkin M. (1982). Two cortical visual systems. Cambridge, MA: MIT Press.
Van Wert M. J. Horowitz T. S. Wolfe J. M. (2009). Even in correctable search, some types of rare targets are frequently missed. Attention, Perception & Psychophysics, 71, 541–553. [CrossRef] [PubMed]
Verghese P. (2001). Visual search and attention: A signal detection theory approach. Neuron, 31, 523–535. [CrossRef] [PubMed]
Verghese P. (2009). Contours in noise: A role for self-cuing? Journal of Vision, 9(13):2, 1–16, http://www.journalofvision.org/content/9/13/2, doi:10.1167/9.13.2. [PubMed] [Article] [CrossRef] [PubMed]
Verghese P. (2010). Search: Eye movements and mechanisms: Active search for multiple targets is inefficient [Abstract]. Journal of Vision, 10(7):1296, 1296a, http://www.journalofvision.org/content/10/7/1296, doi:10.1167/10.7.1296. [CrossRef]
Verghese P. McKee S. P. (2002). Predicting future motion. Journal of Vision, 2(5):5, 413–423, http://www.journalofvision.org/content/2/5/5, doi:10.1167/2.5.5. [PubMed] [Article] [CrossRef]
Vickery T. J. King L.-W. Jiang Y. (2005). Setting up the target template in visual search. Journal of Vision, 5(1):8, 81–92, http://www.journalofvision.org/content/5/1/8, doi:10.1167/5.1.8. [PubMed] [Article] [CrossRef]
Vincent B. T. (2011a). Search asymmetries: Parallel processing of uncertain sensory information. Vision Research, 51, 1741–1750. [CrossRef]
Vincent B. T. (2011b). Covert visual search: Prior beliefs are optimally combined with sensory evidence. Journal of Vision, 11(13):25, 1–15, http://www.journalofvision.org/content/11/13/25, doi:10.1167/11.13.25. [PubMed] [Article] [CrossRef]
Vincent B. T. Baddeley R. J. Troscianko T. Gilchrist I. D. (2009). Optimal feature integration in visual search. Journal of Vision, 9(5):15, 1–11, http://www.journalofvision.org/content/9/5/15, doi:10.1167/9.5.15. [PubMed] [Article] [CrossRef]
Vlaskamp B. N. S. Hooge I. T. C. (2006). Crowding degrades saccadic search performance. Vision Research, 46, 417–425. [CrossRef] [PubMed]
Vlaskamp B. N. S. Over E. A. B. Hooge I. T. C. (2005). Saccadic search performance: The effect of element spacing. Experimental Brain Research, 167, 246–259. [CrossRef]
Van Voorhis S. Hillyard S. A. (1977). Visual evoked potentials and selective attention to points in space. Perception & Psychophysics, 22, 54–62. [CrossRef]
Walthew C. Gilchrist I. D. (2006). Target location probability effects in visual search: An effect of sequential dependencies. Journal of Experimental Psychology: Human Perception and Performance, 32, 1294–1301. [CrossRef] [PubMed]
Wardak C. Olivier E. Duhamel J.-R. (2004). A deficit in covert attention after parietal cortex inactivation in the monkey. Neuron, 42, 501–508. [CrossRef] [PubMed]
Wertheim T. (1894). Über die indirekte Sehschärfe. Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 7, 172–187.
Williams L. G. (1966). Target conspicuity and visual search. Human Factors, 8, 80–92. [PubMed]
Williams L. G. (1967). The effects of target specification on objects fixated during visual search. Acta Psychologica, 27, 355–360. [CrossRef] [PubMed]
Wolfe J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238. [CrossRef] [PubMed]
Wolfe J. M. (1998). Visual search. In Pashler H. (Ed.), Attention (pp. 13–73). London, UK: University College London Press.
Wolfe J. M. (2001). Asymmetries in visual search: An introduction. Perception & Psychophysics, 63, 381–389. [CrossRef] [PubMed]
Wolfe J. M. (2007). Guided Search 4.0: Current progress with a model of visual search. In Gray W. D. (Ed.), Integrated models of cognitive systems (pp. 99–119). New York, NY: Oxford University Press.
Wolfe J. M. Cave K. R. Franzel S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433. [CrossRef] [PubMed]
Wolfe J. M. Horowitz T. S. Kenner N. M. (2005). Cognitive psychology: Rare items often missed in visual searches. Nature, 435, 439–440. [CrossRef] [PubMed]
Wolfe J. M. Horowitz T. S. Kenner N. Hyle M. Vasan N. (2004). How fast can you change your mind? The speed of top-down guidance in visual search. Vision Research, 44, 1411–1426. [CrossRef] [PubMed]
Wolfe J. M. Van Wert M. J. (2010). Varying target prevalence reveals two dissociable decision criteria in visual search. Current Biology, 20, 121–124. [CrossRef] [PubMed]
Yang G. Huang T. S. (1994). Human face detection in a complex background. Pattern Recognition, 27, 53–63. [CrossRef]
Yantis S. Jones E. (1991). Mechanisms of attentional selection: Temporally modulated priority tags. Perception & Psychophysics, 50, 166–178. [CrossRef] [PubMed]
Yantis S. Jonides J. (1990). Abrupt visual onsets and selective attention: Voluntary versus automatic allocation. Journal of Experimental Psychology: Human Perception and Performance, 16, 121–134. [CrossRef] [PubMed]
Yantis S. Schwarzbach J. Serences J. T. Carlson R. L. Steinmetz M. A. Pekar J. J. et al. (2002). Transient neural activity in human parietal cortex during spatial attention shifts. Nature Neuroscience, 5, 995–1002. [CrossRef] [PubMed]
Zelinsky G. J. (2008). A theory of eye movements during target acquisition. Psychological Review, 115, 787–835. [CrossRef] [PubMed]
Zelinsky G. J. Sheinberg D. L. (1997). Eye movements during parallel–serial visual search. Journal of Experimental Psychology: Human Perception and Performance, 23, 244–262. [CrossRef] [PubMed]
Zhang L. Tong M. H. Marks T. K. Shan H. Cottrell G. W. (2008). SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision, 8(7):32, 1–20, http://www.journalofvision.org/content/8/7/32, doi:10.1167/8.7.32. [PubMed] [Article] [CrossRef]
Zhang S. Abbey C. K. Eckstein M. P. (2009). Virtual evolution for visual search in natural images results in behavioral receptive fields with inhibitory surrounds. Visual Neuroscience, 26, 93–108. [CrossRef] [PubMed]
Zhang S. Eckstein M. P. (2010). Evolution and optimality of similar neural mechanisms for perception and action during search. PLoS Computational Biology, 6, e1000930.
Zhang Y. Abbey C. K. Eckstein M. P. (2006). Adaptive detection mechanisms in globally statistically nonstationary-oriented noise. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 23, 1549–1558. [CrossRef]
Zhang Y. Pham B. Eckstein M. P. (2004). Evaluation of JPEG 2000 encoder options: Human and model observer detection of variable signals in X-ray coronary angiograms. IEEE Transactions on Medical Imaging, 23, 613–632. [CrossRef] [PubMed]
Figure 1
 
(a) Actions are anticipated by relatively easy searches. Reproduced from Hayhoe and Ballard (2005). Fixations made by an observer while making a peanut butter and jelly sandwich. Fixations are shown as yellow circles, with diameter proportional to fixation duration. Red lines indicate the saccades. Note that almost all fixations fall on task-relevant objects. (b) Photo interpreter during the Second World War scrutinizing an image taken by photoreconnaissance aircraft (reproduced from “To Fool a Glass Eye” by Stanley, 1988). (c) Finding prey in the savanna, a task vital for survival for many animals. (d) Image of a portion of NASA's Stardust collector plate scanned with an optical microscope.
Figure 2
 
Effect of retinal eccentricity on (top) search accuracy, (middle) reaction time, and (bottom) number of saccades for feature (orientation, contrast) and conjunction displays (TP = target present; TA = target absent). Reproduced from Scialfa and Joffe (1998). Conjunction displays are those for which the target can be distinguished from the distractors only by the joint presence of two features.
Figure 3
 
(a) Effect of target knowledge on signal detection in white noise. Index of detectability as a function of signal-to-noise ratio (increasing signal contrast) for two human observers (AB and RA) for a task in which the signal shape is known (SKE—signal known exactly) and a condition in which the signal is one out of 10 possible signals (M = 10) and not known to observers. Predictions from an ideal observer—continuous lines—are shown for comparison: M = 1 (SKE), M = 10, and M = 100 (from Burgess, 1985). (b) Effects of type of target cue (identical to target, transformed image of target, an image of another fish from the same species) on reaction time (cue advantage in ms) when finding a target fish in images (Bravo & Farid, 2009). (c) Estimated templates (classification images) for saccades and perception during search for a Gaussian target in noise. Top image: Raw classification images. Bottom image: Radial profiles of estimated templates fit with difference-of-Gaussians functions and overlaid with the Gaussian target's radial profile (Eckstein et al., 2007). (d) Spatial frequency and orientation content of estimated receptive fields in V4 during overt search for different targets (T1 and T2). Rightmost box shows difference between two receptive fields in terms of high-spatial frequency content (from Mazer & Gallant, 2003). (e) Posterior parietal cortex of human (left) and macaque monkey (right). (Left) The human posterior parietal cortex (PPC) is divided by the intraparietal sulcus (IPS) into the superior parietal lobe (SPL) and the inferior parietal lobe (IPL). (Right) The lunate and intraparietal sulci are opened up to show the locations of several extrastriate areas in addition to the visually responsive areas within the intraparietal sulcus. These include the parieto-occipital area (PO), the posterior intraparietal area (PIP), the medial intraparietal area (MIP), the lateral intraparietal area (LIP), the ventral intraparietal area (VIP), and the anterior intraparietal area (AIP). Figure from Bisley and Goldberg (2010). Adapted from Colby, Gattass, Olson, and Gross (1988) and Husain and Nachev (2007).
Figure 4
 
(a) Contextual cuing experiment where observers search for a T among Ls (reproduced from Chun, 2000). (b) Reaction time to find the target as a function of practice (epoch) for novel display configurations of target and distractors (green) and repeated configurations of target and distractors (Chun, 2000). (c) Scene context constrains the position of objects. Search for jeeps and helicopters (Neider & Zelinsky, 2006). (d) Object co-occurrence (chimney and house) can also guide search. If the search object appears at an unexpected location, it can have a detrimental effect on search performance, and eye movements (points) are often still directed to the contextually expected location (Eckstein, Dresher, & Shimozaki, 2006). (e) Search for people in real scenes. Bottom left image: Human fixations compared to a pure saliency model and a full model that includes contextual information about possible target locations (Torralba et al., 2006).
Figure 5
 
Relative frequency of saccade landings for an Ideal Searcher (a), a Maximum a posteriori probability model (b), and (c,d) two individual observers (JN and WFG) and their (e) combined data (from Najemnik & Geisler, 2005). (f) Virtual evolution of perception and saccade templates built as linear combinations of V1 cells. Figure shows three scenarios with three different targets (1st row), the initial templates of a random individual at the initial stage of the virtual evolution (2nd row), and the evolved templates (3rd row; from Zhang & Eckstein, 2010).
Figure 6
 
(a) Percentage of targets missed as a function of number of objects in a simulated X-ray baggage search task with varying prevalence (black: low prevalence, 1%; gray: medium prevalence, 10%; white: high prevalence, 50%; Wolfe et al., 2005). (b) Effect of target prior probabilities (top) and reward (bottom) on the saccade bias of a 2-alternative forced-choice saccade-to-target task (from Liston & Stone, 2008). (c) Top: ERP components N2pc and SPCN as a function of reward. Bottom: Topographic plots of activity for two temporal intervals (180–230 ms and 360–500 ms) for low and high rewards (reproduced from Kiss et al., 2009).
Figure 7
 
(a) A feature search display for which the target can be distinguished from all distractors along one physical attribute (feature). (b) Conjunction display for which the target shares attributes with each distractor and for which the target can only be distinguished from all distractors by the joint presence of two features (color and shape). (c) Reaction time vs. set-size search slopes for a variety of displays (from Wolfe, 1998). (d) Proportion correct as a function of set size for briefly presented search displays for a target known search and an oddity search for three different observers. Continuous lines are fits of an ideal single fixation observer (from Schoonveld et al., 2007).
Figure 8
 
Models of eye movements. Columns 1 and 2 are for an 8 AFC target localization task in white noise with different visibility maps (column 1, steep visibility map; column 2, broader visibility map). Column 3 corresponds to a 4 AFC configuration with uncued locations (black circles) having zero probability of containing the target. Rows 2, 3, and 4 correspond to predictions of three models: Saccadic targeting (maximum a posteriori probability model, MAP; Beutter et al., 2003), Ideal Searcher (IS; Najemnik & Geisler, 2005), and Entropy Limit Minimization (ELM; Najemnik & Geisler, 2009). Location of fixations for 1st (blue) and 2nd saccades (red) for three models (MAP, IS, and ELM). The MAP model simulations include small random saccade endpoint errors to facilitate visualization of the different fixations. Central cross indicates initial fixation point for all models (reproduced from Zhang & Eckstein, 2010).
Figure 9
 
(a) Eye movements after 3 and 14 s for a radiologist scrutinizing a chest X-ray (reproduced from Kundel & Wright, 1969). (b) Expert fishermen and watchmen from the town of Pampatar on Margarita Island, Venezuela, with experience ranging from 12 to 50 years. An interview with Ramon Moncho Labori (bottom left picture) is reported in the current paper. (c) Photograph of the ocean on a clear day while fishermen capture a school of sardines. Red areas enclosed by the fishing boats signal the presence of the school. Darker areas at the bottom of the image are due to ocean topography, algae, rocks, etc. (d) On May 11, 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov.
Supplementary PDF
Supplementary Figures