Article | May 2013
Looking back at the stare-in-the-crowd effect: Staring eyes do not capture attention in visual search
Author Affiliations
  • Robbie M. Cooper
    School of Life, Sport and Social Sciences, Edinburgh Napier University, Edinburgh, UK
    R.Cooper@napier.ac.uk
  • Anna S. Law
    School of Natural Sciences and Psychology, Liverpool John Moores University, Liverpool, UK
    A.Law@ljmu.ac.uk
  • Stephen R. H. Langton
    School of Natural Sciences, University of Stirling, Stirling, UK
    srhl1@stir.ac.uk
Journal of Vision May 2013, Vol.13, 10. doi:https://doi.org/10.1167/13.6.10
Abstract
The stare-in-the-crowd effect refers to the finding that a visual search for a target of staring eyes among averted-eyes distracters is more efficient than the search for an averted-eyes target among staring distracters. This finding could indicate that staring eyes are prioritized in the processing of the search array so that attention is more likely to be directed to their location than to any other. However, visual search is a complex process, which depends not only upon the properties of the target, but also upon the similarity between the target of the search and the distracter items and between the distracter items themselves. Across five experiments, we show that the search asymmetry diagnostic of the stare-in-the-crowd effect is more likely to be the result of a failure to control for the similarity among distracting items between the two critical search conditions than of any special attention-grabbing property of staring gazes. Our results suggest that, contrary to results reported in the literature, staring gazes are not prioritized by attention in visual search.

Introduction
In recent years, the visual search paradigm has been used by researchers as a tool for investigating the ability of the human visual system to detect biologically and socially important signals in the environment. The theoretical idea motivating such studies is that millennia of encounters between our primate ancestors and threatening stimuli, such as snakes, spiders, and angry faces, have endowed us with specialized mechanisms to detect these stimuli. Accordingly, research has suggested that participants are better able to find a snake or a spider among displays containing nonthreatening stimuli, such as flowers or mushrooms, than they are to find, say, a flower among spiders and snakes (Öhman, Flykt, & Esteves, 2001), and search for an angry face is conducted more efficiently in a crowd of happy faces than vice versa (Fox et al., 2000; Horstmann, Scharlau, & Ansorge, 2006). Similarly, among social species, evolutionary pressure will favor those organisms that are able to efficiently locate conspecifics in the environment as these will afford opportunities for mating, social exchange, and so on. Here, too, research has established that faces are found in crowds of nonface stimuli, such as cars or houses, more efficiently than car or house targets can be located among crowds of face distracters (Hershler & Hochstein, 2005). 
Eye-gaze, in particular, a staring or direct gaze, is another social stimulus, which, it has been argued, may be processed by some specialized detecting mechanism (Senju & Johnson, 2009). For example, a direct gaze clearly signals the gazer's interest in the viewer, which could be indicative of a potential threat or a more positive signal of attraction or affiliation from a conspecific. From an evolutionary-theoretic view, it would therefore seem to be of great adaptive value to have evolved some kind of mechanism to prioritize the processing of direct or staring gazes and deploy processing resources to their locations. Indeed, studies that have demonstrated a so-called “stare-in-the-crowd effect”—more efficient search for a staring gaze target among averted distracters than vice versa—seem to support such a position (Conty, Tijus, Hugueville, Coelho, & George, 2006; Doi & Ueda, 2007; Doi, Ueda, & Shinohara, 2009; Palanica & Itier, 2011; Senju, Hasegawa, & Tojo, 2005). Moreover, the fact that a staring gaze in a crowd can be located relatively efficiently fits with the intuitive, but anecdotal observation that a sense that one is being watched prompts a shift in one's gaze, bringing into view someone who is indeed looking in one's direction. It is tempting to argue that this kind of search efficiency results from some kind of preattentive encoding of the staring face, which then acts to pull attention to its location. However, as Wolfe and others have argued, it is very unlikely that complex visual stimuli, such as faces and facial expressions of emotion, or stimuli carrying the higher-level categorization of “threat” can be analyzed preattentively and used to guide the deployment of attention in visual search (e.g., Cave & Batty, 2006; Wolfe & Horowitz, 2004). 
In this paper, we suggest that staring gazes are similarly unable to guide visual attention in this way, for results from experiments demonstrating an efficiency advantage in visual search for a direct over averted gaze are more likely to result from the perceptual properties of the displays rather than any mechanism that guides attention to the location of a direct gaze. 
In what follows, we first introduce the visual search task and briefly discuss the main classes of model that have sought to explain the experimental data. We then use this broad theoretical context in reviewing the studies that have demonstrated the stare-in-the-crowd effect. We conclude that models of visual search would have to be revised in order to accommodate a high-level effect, such as an attentional priority for staring gazes; however, we argue that such a revision is not required because the stare-in-the-crowd effect is at least as readily explained by a low-level stimulus effect. The experiments we report in the remainder of the paper go on to investigate this claim. 
The visual search task
The visual search task is at the core of demonstrations of the stare-in-the-crowd effect. In this task, observers are asked to search for a target item in the presence of one or more distracting items (for a recent review of visual search see Eckstein, 2011). As has often been pointed out, this ability is nontrivial as the visual system cannot fully process all of its input at the same time. The system attempts to solve this problem by first restricting the input, directing the sensitive part of the retina toward stimuli of interest—so-called overt visual attention—but also by invoking a set of mechanisms that serve to select a subset of stimuli in the visual field for further processing. This latter set of mechanisms is collectively known as covert visual attention, and it seems to achieve its end in a number of different ways: through enhancing the signal produced by the selected stimulus, restricting processing to selected regions of space, to perceptual groups or objects, or by tuning the visual system to certain attributes possessed by the searched-for object, such as color or orientation (for reviews see Carrasco, 2011; Scholl, 2001). In visual search, the general idea is that visual attention, however it is conceived, must operate in such a way as to result in the detection and identification of the searched-for target (e.g., Treisman & Gelade, 1980; Wolfe, 1994; but see Rosenholtz, Huang, & Ehinger, 2012, for a theory that eliminates the role of covert attention in visual search). In this paper, we are concerned with how attention is deployed in order to achieve this and, in particular, whether or not staring gazes enjoy a privileged status in this regard. The precise mechanism through which selection is achieved is not our primary concern. Indeed, for the present purposes, whether selection is achieved through overt or covert attention is also of secondary importance. 
The basic manipulation in the search task is to measure the speed with which a target element is detected while the number of items in the display is varied from trial to trial. A function relating reaction time (RT) to display size can then be computed, which partitions RT into slope and intercept. The intercept is the theoretical RT that would be observed if there were no items in the display (see Dosher, 1998) and therefore represents an estimate of the time taken in performing operations that occur either before the search is initiated (e.g., parallel processing of visual features across the visual field) or after the target has been located (e.g., response selection). Any factors that influence early visual processing or later decision processes would therefore be expected to influence the intercept. 
The search process itself is gauged by the slope of the function relating RT to display size. This measures the cost of adding additional items to the display (or the time taken to evaluate each item in the display) and is usually interpreted as “search efficiency” with steeper slopes indicating a slower, less efficient search than shallower slopes. Many factors influence this search efficiency. The rate at which attention can be shifted between items or groups of items (or how long attention dwells on each item or group) will influence the slope: The longer the dwell time, the less efficient the search. However, other factors are also important. These include visual crowding (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012); knowledge about the physical characteristics of the target (Wolfe, Horowitz, Kenner, Hyle, & Vasan, 2004); and, critically in the present case, the ease with which target items can be discriminated from distracters and the similarity among distracter elements in search displays (Duncan & Humphreys, 1989). 
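As an illustration of how these two parameters are extracted, the slope and intercept are obtained by regressing RT on display size. The sketch below uses invented RT values and two set sizes (as in the experiments reported later); with only two set sizes the fit is exact, but the same regression applies to any number of set sizes.

```python
import numpy as np

# Assumed mean correct RTs (ms) at display sizes 4 and 8; the values
# are invented for illustration, not taken from any experiment.
display_sizes = np.array([4.0, 8.0])
rts_direct_target = np.array([650.0, 700.0])   # staring-gaze target
rts_averted_target = np.array([700.0, 900.0])  # averted-gaze target

def slope_and_intercept(sizes, rts):
    """Least-squares fit of RT = slope * display_size + intercept."""
    slope, intercept = np.polyfit(sizes, rts, deg=1)
    return slope, intercept

# A shallower slope (ms per item) indicates a more efficient search.
s1, i1 = slope_and_intercept(display_sizes, rts_direct_target)   # 12.5 ms/item
s2, i2 = slope_and_intercept(display_sizes, rts_averted_target)  # 50.0 ms/item
print(f"direct-gaze target:  {s1:.1f} ms/item, intercept {i1:.0f} ms")
print(f"averted-gaze target: {s2:.1f} ms/item, intercept {i2:.0f} ms")
```

With these assumed values, the search asymmetry shows up as the difference in slope (12.5 vs. 50.0 ms/item), while any pre- or post-search effect would show up in the intercepts.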
Models of covert visual attention must take into account the main empirical findings concerning visual search, such as those mentioned above. These models seek to explain how covert attention is deployed in visual search and in other experimental tasks. In the following section, we introduce some of these models in order to provide some context for the subsequent discussion of the stare-in-the-crowd effect. 
Models of visual search
The main models of visual search differ with respect to the question about how attention is deployed in order to detect a search target. Some see attention as being deployed in a serial manner, selecting one item at a time for the resource-demanding process of categorizing each item as belonging to or not belonging to the target category. Others suggest that all, or many, items in a display can be processed simultaneously. 
Probably the most influential contemporary serial model of visual search is Wolfe's Guided Search (e.g., Wolfe, 1994, 2007), which evolved from Treisman's Feature Integration Theory (FIT) (Treisman & Gelade, 1980). The general idea in both FIT and Guided Search is that attention is controlled by a kind of salience map, which is essentially a representation of locations in the visual field that are likely to contain a target. In Guided Search, both bottom-up and top-down information contribute to this guiding representation, which is called the “activation map.” Bottom-up information consists of how much an item differs from its neighbors along a limited set of dimensions encoded in parallel across the visual field (e.g., color, orientation, and motion; for a review, see Wolfe & Horowitz, 2004). Top-down information corresponds to a weighting placed on the outputs of broadly tuned feature channels (e.g., for color, these might be channels for “red,” “yellow,” and “green”; for orientation, “steep,” “shallow,” “left,” and “right”), depending on the nature of the search task. Top-down and bottom-up activation is combined for each item in the display, and attention is allocated to the item in the activation map with the highest activation. At least in earlier versions of Guided Search (Wolfe, 1994), if this is not the target, attention is then deployed to the item with the next highest activation, and so on until the target is found or the search is abandoned. 
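A toy version of this selection scheme can be sketched as follows. Every value here is an illustrative assumption rather than a parameter from Guided Search itself: the bottom-up term stands in for local feature contrast, and the top-down term for the weighted match between each item and the target's feature channels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy activation map in the spirit of Guided Search; all values assumed.
n_items = 8
target_index = 3                                  # arbitrary target position
bottom_up = rng.uniform(0.0, 1.0, n_items)        # local feature contrast
top_down = rng.uniform(0.0, 1.0, n_items)         # match to target template
activation = bottom_up + top_down + rng.normal(0.0, 0.1, n_items)  # noisy sum

# Attention visits items in order of decreasing activation until the
# target is reached (the serial deployment of earlier Guided Search).
visit_order = np.argsort(-activation)
n_visits = int(np.where(visit_order == target_index)[0][0]) + 1
print(f"target found after {n_visits} attentional deployment(s)")
```

The better the top-down weighting discriminates targets from distracters, the earlier the target appears in the visit order, and the flatter the resulting search slope.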
Guided Search and FIT belong to a class of models that view search as essentially requiring the serial deployment of processing resources to display items. However, it has been recognized for some time that results from experiments that seem to demand a serial search explanation can also be explained by a number of parallel models in which all, or many, items in a display are processed at once (e.g., Snodgrass & Townsend, 1980; Townsend, 1990). In general, these parallel models can be thought of as having either limited or unlimited capacity. Limited-capacity parallel models assume that all items are processed simultaneously at a rate determined by the strength of sensory evidence that each item matches a category representation; however, there is also a fixed amount of “resource” available for this processing, which also determines the rate at which items are classified as belonging to the target or nontarget categories: the more items in the display, the less resource is available for each comparison, and the less efficient the search. An important point is that the allocation of the resource is partly under the participant's control. Bundesen's Theory of Visual Attention (TVA) (Bundesen, 1990) is an example of this kind of model. In TVA, the rate of classification of individual items is partly determined by an attentional weighting (or pertinence) placed on relevant perceptual features (e.g., “red”). The effect of this is to speed up the rate of processing of individual items in the display that possess those features at the expense of other items that do not. However, this kind of feature-based selection will accelerate all possible categorizations of the selected elements (shape, orientation, location, etc.). 
A second selection mechanism, perceptual bias, therefore operates to bias the particular categorization of display elements in favor of whatever category is most relevant for the current task (e.g., bias the letter A shape categorization to find a red letter A among red Xs). Visual search is more or less successful, depending upon the speed at which the element in the display designated as the target reaches some threshold for categorization. 
A second class of parallel models has unlimited processing capacity and came about through the application of Signal Detection Theory to visual search (e.g., Palmer, Ames, & Lindsey, 1993; Palmer, Verghese, & Pavel, 2000; Verghese, 2001). Once again, processing of all display items begins simultaneously, but unlike in the limited-capacity models, the rate of processing of individual items is unaffected by the size of the display. Instead, performance is limited by internally generated noise. The idea is that display elements each give rise to some internal response from a notional detector matched to the properties of the target category. However, because the visual system is inherently noisy, the same element will give rise to slightly different response values on repeated presentations. In this way, target and nontarget items are each associated with a distribution of response values centered on different mean values, but these distributions may overlap to a certain extent. In visual search, it is assumed that there is a processor available for each display element and that each simultaneously generates a noisy response. In order to decide if the target is present, an observer could simply monitor the largest response across all elements and, if this exceeds some threshold, respond positively; if the threshold value is not exceeded, respond negatively. Increasing the number of distracters will increase the probability that any one of them will generate an internal response that is large enough to be mistaken as arising from the distribution of target responses. Performance therefore degrades with increasing display size without any corresponding degradation in the processing of individual display elements. 
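The core prediction of this max rule, that errors grow with display size even though each item is processed just as well, can be illustrated with a small simulation. The unit-variance noise and the threshold of 2.0 are arbitrary choices made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_false_alarm(n_items, threshold, n_trials=20000):
    """Target-absent trials under the max rule: every distracter yields
    a noisy internal response ~ N(0, 1), and the observer reports
    'present' whenever the largest response exceeds the threshold."""
    responses = rng.normal(0.0, 1.0, size=(n_trials, n_items))
    return float(np.mean(responses.max(axis=1) > threshold))

# More distracters give the noise more chances to exceed the threshold,
# so accuracy degrades with display size despite unlimited capacity.
for n in (1, 4, 8, 16):
    print(f"{n:2d} items: false-alarm rate {p_false_alarm(n, threshold=2.0):.3f}")
```

No individual detector gets worse as items are added; the decline in performance falls entirely out of taking the maximum over more noisy samples.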
Although originally based on measures of performance accuracy in briefly presented search displays, this kind of theoretical approach has also been extended to incorporate methods using response time measures (e.g., Dosher, Han, & Lu, 2004; McElree & Carrasco, 1999). 
The stare-in-the-crowd effect
The stare-in-the-crowd effect actually seems to be based on two separate empirical findings, which are sometimes conflated. The first is a search efficiency effect: Slopes are flatter when searching for direct-gaze targets among averted-gaze distracters than in searches for averted-gaze targets among staring-gaze distracters. The second type of finding is an overall latency effect: Mean RTs to displays containing a staring-gaze target tend to be faster than to arrays containing an averted-gaze target, regardless of display size. Von Grünau and Anston (1995) were the first to document these effects. In their experiments, participants were asked to search for one of two schematic targets: staring eyes (iris and pupil having a central position in the socket) or averted eyes (iris and pupil positioned to the left or right side of the socket). Search for the staring-eyes target was conducted through both types of averted-gaze distracter (left and right). In contrast, search for the averted-eyes target was conducted through staring-eyes distracters and averted eyes gazing in the opposite direction to the target (i.e., a left-looking distracter if the target was looking rightward; see Figure 1). Von Grünau and Anston found a search asymmetry such that the inclusion of additional distracter items had a smaller effect on search latency for the direct-gaze target than it did for the averted-gaze target. In other words, search for the direct-gaze target was more efficient than search for the averted-gaze target. Additionally, they found that staring gazes were detected faster, on average, than averted gazes. This pattern of results was replicated more recently by Senju et al. (2005) who replaced von Grünau and Anston's schematic eye stimuli with photographs of full faces in three-quarters view. This seemed to confirm that the effect is based on the detection of staring gazes per se, rather than an advantage for gazes in which the iris and pupil are centered in their sockets. 
Figure 1
 
Example stimuli from von Grünau and Anston (1995) with (left panel) a staring-eyes target among leftward and rightward averted-eyes distracters and (right panel) a rightward averted-eyes target among staring and leftward averted-eyes distracters (figure reproduced with permission from Pion Ltd., London, www.pion.co.uk).
Of the other papers that are often cited as demonstrating a stare-in-the-crowd effect, none have actually replicated this search efficiency advantage. Palanica and Itier (2011) and Doi and colleagues (Doi & Ueda, 2007; Doi et al., 2009) report the overall RT effect in their experiments, but as none of the experiments reported in these papers varied the number of items in the search displays, it is unclear whether the search for direct-gaze targets was performed more efficiently than the search for averted gazes. As mentioned in the section on visual search, an overall RT effect could equally well arise from processes that occur either before the search has been initiated or after the search has terminated, in other words, processes that are independent from attentional deployment. Conty et al. (2006) also found the overall RT effect, but despite manipulating display size, they failed to find an interaction of this variable with the type of target (staring vs. averted gaze). The implication here is that search for averted gazes was just as efficient as search for staring gazes; attention is not preferentially deployed to staring gazes. Again, the RT effect could well be the result of faster response selection after attention has been deployed to a staring versus an averted-gaze target. 
The actual search efficiency advantage has therefore only been clearly demonstrated in two published papers. Nevertheless, these two studies do suggest that search for a staring-gaze target among averted-gaze distracters is more efficient than vice versa. The implication favored by the authors of these studies is that staring gazes are somehow special with regard to their ability to attract processing resources. However, the models of visual search discussed in the previous section can accommodate this conclusion only with some additional assumptions. In a serial model, such as Guided Search, gaze direction would have to form an attribute that can be abstracted from early vision, which could then be used to guide visual attention. As already noted, however, Wolfe and Horowitz (2004) place faces on a list of attributes that are unlikely to be capable of guiding attention for reasons that include the fact that facial attributes rarely seem to generate efficient searches. Another possibility is that a lower-level visual correlate of gaze direction could serve as a guiding attribute. One candidate is the relative contrast in luminance between scleral regions on either side of the iris as this information can be made available in early vision (Ando, 2004; Langton, Watt, & Bruce, 2000). 
A relatively efficient search for staring gazes could be achieved in a limited-capacity parallel theory, such as Bundesen's (1990) TVA, if staring gazes tend to generate a stronger level of match to some direct-gaze target template than the level of match of averted gazes to an averted-gaze template. Alternatively, or additionally, it may be that there is a permanently higher pertinence value (i.e., attentional weighting) for the configuration of eye features corresponding to a direct gaze. Both assumptions would increase the rate of processing of staring gazes relative to averted gazes, which would yield more efficient search for the former relative to the latter. Indeed, the general finding of an overall RT effect for responses to direct gazes (Conty et al., 2006; Doi & Ueda, 2007; Doi et al., 2009; Palanica & Itier, 2011) does lend support to the idea that the rate of processing of staring gazes is superior to that of averted gazes. 
Finally, in unlimited-capacity theories based on Signal Detection Theory, search asymmetries arise when the degrees of uncertainty (i.e., noise) associated with two stimuli are different (Palmer et al., 2000; Verghese, 2001). So, if there is more uncertainty associated with staring gazes than with averted gazes, there will be a greater overlap of target-present and target-absent internal response distributions when staring gazes are distracters than when averted gazes are distracters. The result of this is that averted-gaze targets will be less discriminable from staring-gaze distracters than vice versa. However, a problem with this solution is that it is far from clear that staring gazes will be associated with more internal noise than averted gazes. Indeed, on the basis of the classical studies on gaze perception, the opposite seems more likely (Anstis, Mayhew, & Morley, 1969; Cline, 1967). 
In summary, then, the empirical finding of a search advantage for staring gazes can really only be explained by existing models of visual search with the introduction of some additional assumptions, not all of which are warranted by existing data. An alternative possibility, which is much more easily accommodated by these models, is that the search asymmetries reported in the literature are actually the result of the particular choices of targets and distracters used in these experiments rather than any attention-grabbing property of staring gazes. 
In order to make this point clear, consider a potential problem with the experiments reported by von Grünau and Anston (1995), Senju et al. (2005), and Conty et al. (2006): The main comparisons between experimental conditions in which participants search for, respectively, staring and averted gazes are confounded with manipulations of the distracting items in these search conditions. For example, when told that a face with a staring gaze was the target, Senju et al.'s participants were asked to search through a set of distracters, some of which were faces with downward gazes and some with laterally averted gazes. When searching for a laterally averted–gaze target, some of the distracting faces had staring gazes, and some had downward gazes. The difference in search efficiency between the two conditions could therefore be due to the fact that the identity of the targets differed in the two conditions, but it could also be caused by the fact that the composition of the distracter sets was different between search conditions. 
The first of these possibilities is clearly the one favored by the authors, who would like to attribute the effect to a prioritizing of staring-gaze targets by attention. However, the literature on visual search suggests that the alternatives are equally plausible. First, as noted in a previous section, visual search depends, in part, upon the rate at which attention can move through the distracters, and research has suggested that attention is less readily disengaged from staring gazes than from averted gazes (Senju & Hasegawa, 2005). One implication of this result is that an array of stimuli containing staring eyes as distracters would be more difficult to search through than a similar array in which staring eyes are replaced by averted eyes; if attention dwells for a fraction of a second longer on each distracter item with a staring gaze than on each distracter with an averted gaze, then search slopes will be steeper for arrays containing staring-gaze distracters than for arrays in which the staring gazes are replaced by averted gazes. This is exactly the problem in the experiments conducted by von Grünau and Anston (1995) and Senju et al. (2005): Search for averted-gaze targets might be rendered less efficient than search for staring-gaze targets because the former, but not the latter, required participants to search through and reject distracter faces with staring gazes. 
The second alternative to the attention-grabbing explanation for the stare-in-the-crowd effect observed by Senju et al. (2005) and von Grünau and Anston (1995) is that the asymmetry arises because the visual similarity among the distracting items was not controlled between the two search conditions (staring-gaze targets vs. averted-gaze targets). In the section on visual search, we described how, generally speaking, search efficiency is a function of the similarity between targets and distracters and the similarity between the distracting items (Duncan & Humphreys, 1989); search becomes more efficient when targets are visually dissimilar to distracters and/or when distracters are more similar to one another. Moreover, these results are readily explained by the models of visual attention introduced in the previous section. In Senju et al.'s experiments, flatter search slopes for staring-gaze targets than for averted-gaze targets could be the result of the distracters in the former (laterally averted and downward gazes) being more similar to one another than the distracters in the latter (staring and downward gazes). 
In summary, the available evidence on the stare-in-the-crowd effect does not allow us to distinguish between three interpretations of the search asymmetry: (a) that it is due to relatively efficient search for the staring-eyes target because these stimuli are prioritized by attention; (b) that it is due to relatively inefficient search through arrays including staring-eyes distracters because, once attended, staring gazes retain attention for longer than averted gazes; or (c) that it arises because distracters in a staring-gaze search were more homogeneous than those in averted-gaze searches. The present set of experiments was designed to resolve this issue. Experiment 1 replicated Senju et al.'s (2005) experiment in order to confirm the existence of the search asymmetry. Experiments 2 and 3 tested whether this asymmetry is still obtained when distracter sets are held constant across searches for staring and averted gazes. Finally, Experiments 4 and 5 tested the hypothesis that search efficiency is compromised when staring gazes are present among the distracting items. 
Experiment 1: Replication of Senju, Hasegawa, and Tojo (2005)
Method
Participants
The participants were eight undergraduates and postgraduates (six female) from the University of Stirling. The undergraduates received course credit, and the postgraduates were unpaid volunteers. 
Materials and apparatus
The experiment was conducted on a Viglen Contender P3 1 GHz personal computer with a 16-inch monitor. E-Prime experiment generator software was used to present the stimuli and measure the reaction time and accuracy of the participants' responses, which were entered by means of a button box. Viewing distance was 70 cm, and a chin rest was used throughout. 
The stimuli were all produced from the same three-quarter greyscale image of a male face, which was cropped into an oval shape. The eye regions from images of the same person looking in different directions were then superimposed in order to create three images that were exactly the same except for the eyes (downward, averted left, or staring). These images were then mirror-reversed to produce additional stimuli in which the entire face was oriented in the opposite direction. The stimuli were prepared in Adobe Photoshop 7.0. 
In the visual search arrays, either four or eight faces oriented in the same direction were presented in a circular arrangement (see Figure 2). The diameter of the circle subtended 12.9° of visual angle, and the individual faces subtended 3.7° × 2.5°, display sizes that were very close to those used by Senju et al. (2005). In the target-present arrays, the target appeared equally often at every possible location around the circle. For the other places in the circle, E-Prime selected a distracter at random from the two possible distracter types. Therefore, if the target was a staring gaze with the face oriented to the left, the distracters were also left-oriented faces with a downward or averted gaze in a random arrangement. In arrays of size four, targets were also equally likely to appear at each of the eight locations; however, items in these arrays were always located with one space between them (i.e., moving clockwise from the topmost position in Figure 2 at locations 1, 3, 5, and 7 or locations 2, 4, 6, and 8). 
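For readers wishing to reconstruct such displays, the item centers can be computed by converting visual angle to screen units and spacing items evenly around the circle. The pixels-per-centimeter value below is an assumed monitor property, not one reported here; the 70 cm viewing distance and 12.9° circle diameter match the description above.

```python
import math

def deg_to_px(deg, viewing_distance_cm=70.0, px_per_cm=38.0):
    """Convert a visual angle to pixels; px_per_cm is an assumed
    monitor property, 70 cm matches the viewing distance used here."""
    size_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(deg) / 2.0)
    return size_cm * px_per_cm

def circle_positions(n_locations=8, circle_diameter_deg=12.9):
    """(x, y) pixel offsets from screen center for n equally spaced
    items, location 1 at the top and proceeding clockwise; y grows
    downward, as in typical screen coordinates."""
    radius = deg_to_px(circle_diameter_deg) / 2.0
    positions = []
    for k in range(n_locations):
        angle = math.pi / 2.0 - 2.0 * math.pi * k / n_locations
        positions.append((radius * math.cos(angle), -radius * math.sin(angle)))
    return positions

all_eight = circle_positions()
# Size-four arrays use alternating locations: (1, 3, 5, 7) or (2, 4, 6, 8).
odd_locations = all_eight[0::2]
even_locations = all_eight[1::2]
```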
Figure 2
 
Example stimulus array from Experiment 1 with (left panel) a staring-eyes target and (right panel) an averted-eyes target.
Design and procedure
The experimental design consisted of three within-subjects factors: target eyes direction (staring or averted), target status (present or absent), and array size (four or eight items). In half of the trials, the target and distracters in the array were left-oriented faces, and in the other half, they were right-oriented faces. There were therefore four possible types of target: staring eyes, left-oriented face; staring eyes, right-oriented face; averted eyes, left-oriented face; and averted eyes, right-oriented face. There was a separate block of trials for each of these, and the blocks were presented in a randomly determined order for each participant. Each block was preceded by eight practice trials, followed by 32 test trials presented in a random order. Half of these were target present, and half were target absent, and there were also equal numbers of each array size. 
Each trial began with a fixation cross, which appeared on screen for 500 ms, followed by the visual search array that remained on screen until the participant's response. Participants were instructed to respond as quickly and accurately as possible using their preferred hand by pressing one button on the button box if the target was present and another button if it was absent. They were shown an example of the target for each block before being given the practice trials. As in Senju et al.'s (2005) experiment, feedback was provided (“Good!” for a correct response and “–” for an incorrect one). Feedback remained on screen for 500 ms and was followed by an interval of 1000 ms before the next trial began. Participants' eye movements were not monitored in this or in any of the other experiments reported in this paper, and they were free to move their eyes during trials. 
Results and discussion
The data from two participants had to be excluded from the analysis because they performed at or below chance (50%) in at least one condition of the experiment. For each of the remaining participants, the median correct reaction times were obtained for each condition, and the interparticipant means of these medians are shown in Figure 3 along with the proportion of errors. 
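The aggregation just described (per-participant condition medians, then means of those medians across participants) can be sketched as follows; the flat input format is an assumption for illustration:

```python
from collections import defaultdict
from statistics import mean, median

def mean_of_medians(trials):
    """trials: iterable of (participant, condition, rt_ms) tuples for
    correct responses only. Returns the interparticipant mean of each
    participant's median RT, per condition."""
    per_participant = defaultdict(lambda: defaultdict(list))
    for subj, cond, rt in trials:
        per_participant[subj][cond].append(rt)
    condition_medians = defaultdict(list)
    for subj, conds in per_participant.items():
        for cond, rts in conds.items():
            condition_medians[cond].append(median(rts))
    return {cond: mean(ms) for cond, ms in condition_medians.items()}
```

Using medians within participants limits the influence of occasional very slow responses, while the mean across participants gives the group-level estimate plotted in Figure 3.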
Figure 3. Mean correct reaction times, search slopes, and error rates for each condition of Experiment 1. Solid symbols and lines represent target-present displays; open symbols and dashed lines represent target-absent displays. Circle symbols indicate trials in which staring gazes were the targets and square symbols when averted gazes were targets. Error bars represent the standard error of the mean.
For trials in which the target was present, search times for both averted and staring-eyes targets were slower in the larger, compared with the smaller, arrays as is usual in visual search studies. However, increasing the array size had a larger effect on search latency for averted eyes than for staring-eyes targets. Furthermore, overall RTs were faster for staring-gaze searches than for averted-gaze searches. A similar pattern of results was evident for searches in which the target was absent although increasing the array size produced a larger slowing of search times than it did in target-present arrays. Again, this is a usual finding in visual search studies and suggests a more exhaustive search of the items when the target is not present in the array. 
These observations were supported by the results of a 2 (target status) × 2 (target gaze direction) × 2 (array size) within-subjects analysis of variance (ANOVA) conducted on the median RTs. This analysis yielded a significant main effect of target status, F(1, 5) = 61.17, MSE = 40,985.40, p < 0.01, with faster responses on target-present trials (M = 1743 ms) than on target-absent trials (M = 2200 ms); a significant main effect of target gaze direction, F(1, 5) = 71.83, MSE = 160,703.33, p < 0.01, with faster responses when searching for staring eyes (M = 1481 ms) than averted eyes (M = 2462 ms); and a significant main effect of array size, F(1, 5) = 78.51, MSE = 96,238.56, p < 0.01, confirming that search was conducted faster through smaller (M = 1575 ms) than through larger arrays (M = 2369 ms). The analysis also yielded significant interactions between target status and array size, F(1, 5) = 10.65, MSE = 71,241.53, p < 0.05, and, critically, between target gaze direction and array size, F(1, 5) = 11.63, MSE = 20,830.03, p < 0.05. Simple main effects analysis of the latter interaction confirmed that increasing the size of the arrays from four to eight items produced a significant slowing of RTs for averted eyes and for staring eyes, although the increase in RT was larger for averted eyes (1007 ms) than for staring eyes (706 ms). In other words, search for staring-gaze targets was more efficient than search for averted gazes. This analysis also confirmed that search time was significantly faster when staring eyes were the targets than when averted eyes were the targets, whether these targets were present in the arrays or not. 
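The critical interaction between target gaze direction and array size corresponds to a difference in search slopes, conventionally expressed in milliseconds per item. With only two array sizes, the slope reduces to the RT increase divided by the change in set size; the helper below is illustrative, applied to the increases reported above:

```python
def search_slope(rt_increase_ms, small_size=4, large_size=8):
    """Search slope in ms per item from the RT increase between two
    array sizes: slope = delta RT / delta set size."""
    return rt_increase_ms / (large_size - small_size)

# RT increases going from four to eight items, as reported above:
averted_slope = search_slope(1007)  # averted-eyes targets
staring_slope = search_slope(706)   # staring-eyes targets
```

The shallower slope for staring-eyes targets is the signature of the more efficient search that defines the stare-in-the-crowd effect.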
An examination of the error data in Figure 3 shows that, in general, participants made more errors with averted-eyes targets (13.28%) than with staring-eyes targets (2.34%) and when the arrays were larger (9.12%) rather than smaller (6.51%). The error scores therefore behave in a similar way to reaction times, providing no evidence of a speed-accuracy trade-off. The task of finding staring-eyes targets can be completed more quickly and with fewer errors than the task of finding averted-gaze targets. There was, however, a greater tendency for participants to miss a target (11.46%) than to make a false positive response (4.17%). This was also the case in Senju et al.'s (2005) data and is a common finding in visual search. 
In summary, these data match very closely those obtained by both von Grünau and Anston (1995) and Senju et al. (2005). This suggests that the search asymmetry in favor of the staring-eyes target is robust. However, the key question is what is driving this effect. One view is that attention somehow prioritizes staring gazes, perhaps because they are associated with an imminent threat or are a sign of affiliation or attraction. On the other hand, search for a staring gaze may be more efficient because it is easier to search through the averted-gaze distracters that accompany staring-gaze targets than it is to search through a distracter set containing staring gazes when looking for targets with an averted gaze. This might be because attention dwells on staring eyes for longer than it does on averted eyes (e.g., because it is harder to disengage from the former than the latter). A third possibility is that there is greater similarity among distracters in arrays with staring-eyes targets than in arrays containing averted-gaze targets. A consequence of this would be that staring gazes will be more salient in their arrays than averted gaze targets in their arrays (Duncan & Humphreys, 1989). In this case, search for staring eyes will be more efficient, not because of their significant meaning to the observer, but because attention tends to be preferentially deployed toward visually salient items in visual search. 
The remaining experiments reported in this paper were designed to tease apart these possible explanations. Experiment 2 tests whether the effect is likely to be caused by an attentional bias to staring eyes as opposed to a dwell-time effect or some other property of the distracting items. This was achieved by removing staring eyes from the distracter sets and by ensuring that the elements in the distracting arrays were the same across conditions when participants searched for staring-gaze and averted-gaze targets. 
Experiment 2: Equated heterogeneous distracters
As in Experiment 1, participants were asked to search for staring or averted-eyes targets; however, in this experiment, the distracters through which the search had to be conducted were composed of the same items in each of the search conditions: faces gazing upward and downward. If the search asymmetry obtained in Experiment 1 was due to attention prioritizing the staring-eyes target, a similar pattern of data ought to be obtained in this experiment. However, if the asymmetry in Experiment 1 was caused by a relatively greater difficulty in searching through distracter sets that contained faces with staring gazes than sets entirely composed of faces with averted gazes (e.g., because attention dwells for longer on staring-gaze distracters), then the asymmetry should be eliminated when the distracter sets are matched. Moreover, because distracter-distracter similarity is identical across the two search conditions, any difference in search efficiency cannot be attributed to this factor. 
Method
Participants
The participants were 12 undergraduates (11 female) at the University of Stirling who participated for course credit. 
Materials, design, and procedure
The computer hardware and software were exactly the same as in Experiment 1. The stimuli were also the same as those used in Experiment 1 with the addition of an upward-eyes distracter. This was created using the same basic face image with an upward-gaze eye region superimposed. The gaze directions were therefore direct, averted, downward, or upward with the face in both left and right orientations. 
Visual search arrays were constructed in the same manner and were the same size as in Experiment 1. The only difference was that downward- and upward-gazing faces were always the two distracter types in the arrays, which could have either a staring-gaze or an averted-gaze target. Examples of these arrays are shown in Figure 4. The design and procedure were identical to those of Experiment 1
Figure 4. Example stimulus array from Experiment 2 with (left panel) a staring-eyes target and (right panel) an averted-eyes target, both with upward-gaze and downward-gaze distracters.
Results
The data from one participant were excluded from all analyses because she performed at or below chance level (50%) in three conditions of the experiment. The mean error rate in each condition of the experiment (across the other 11 participants) is shown in Figure 5 along with mean RTs calculated using the median performance of individual participants. It is clear that, at least in the target-present trials, search is now equally efficient for staring and averted-eyes targets. For target-absent trials, on the other hand, search is more efficient for staring as opposed to averted-gaze targets. 
Figure 5. Mean correct reaction times, search slopes, and error rates for each condition of Experiment 2. Solid symbols and lines represent target-present displays, and open symbols and dashed lines represent target-absent displays. Circle symbols indicate trials in which staring gazes were the targets and square symbols when averted gazes were targets. Error bars indicate the standard error of the mean.
A 2 (target status) × 2 (target gaze direction) × 2 (array size) within-subjects ANOVA conducted on the median RT data supported these observations. This analysis revealed significant main effects of target gaze direction, F(1, 10) = 31.66, MSE = 83,334.093, p < .001; target status, F(1, 10) = 124.21, MSE = 35,455.10, p < .001; and array size, F(1, 10) = 194.98, MSE = 58,549.99, p < .001. There were also significant interactions between target gaze direction and target status, F(1, 10) = 7.41, MSE = 17,029.74, p < .05; target gaze direction and array size, F(1, 10) = 10.75, MSE = 18,191.23, p < .01; and between target status and array size, F(1, 10) = 69.85, MSE = 22,579.54, p < .001. However, these main effects and interactions were qualified by a significant three-way interaction, F(1, 10) = 12.70, MSE = 16,561.01, p < .01. In order to investigate this interaction, separate ANOVAs were conducted on target-present and target-absent RT data, each with target gaze direction and array size as within-subjects factors. 
For target-present data, the analysis yielded a main effect of target gaze direction with faster overall RTs for staring gaze targets (M = 1422 ms) than for averted gaze targets (M = 1692 ms), F(1, 10) = 10.16, MSE = 79,253.66, p < .05, and a main effect of array size with faster search times for the smaller (M = 1331 ms) than for the larger arrays (M = 1783 ms), F(1, 10) = 69.81, MSE = 32,280.26, p < .001. The interaction between target gaze direction and array size did not approach significance, F(1, 10) = 0.01, MSE = 13,707.44, p = 0.92. 
For target-absent trials, there was a significant main effect of target gaze direction, F(1, 10) = 92.81, MSE = 21,110.17, p < .001, as participants were faster to respond when searching for staring eyes (M = 1793 ms) than averted eyes (M = 2215 ms), and a significant main effect of array size, F(1, 10) = 219.86, MSE = 48,849.28, p < .001, with faster searches through smaller (M = 1510 ms) compared with larger arrays (M = 2498 ms). However, there was also a significant interaction between these factors, F(1, 10) = 19.28, MSE = 21,044.79, p < .01. This result supports the observation made previously that a search for staring eyes in arrays where no targets were present is more efficient (i.e., less affected by increasing the array size) than a search for averted eyes. Simple main effects analyses confirmed that the effects of array size for both staring- and averted-gaze searches were significant although the magnitude of the effect was smaller when participants were searching for staring eyes (796 ms) as opposed to averted-eyes targets (1180 ms). 
Participants made incorrect responses on 6.90% of trials. The error data were inspected for any sign of a speed-accuracy trade-off, but again the error rates tended to behave in a similar way to reaction times with participants making more errors in conditions in which they also performed more slowly (i.e., with averted-eyes targets and larger arrays). As in Experiment 1 (and Senju et al., 2005), there was a tendency toward more misses (10.71%) than false positives (4.40%). 
Discussion
The key finding in this experiment is that when the composition of the distracter sets was controlled between searches for staring and averted gazes, the search asymmetry favoring the efficient detection of the staring-eyes target that had been observed in Experiment 1 was no longer present. This suggests that the stare-in-the-crowd effect is a function of the distracting items used in the two kinds of searches rather than any special attention-grabbing property of staring-gaze targets. 
However, while the search was equally efficient for staring and averted gazes when targets were present in the arrays, an asymmetry persisted in target-absent arrays when the search for averted gazes was slowed much more by increasing the array size than was the search for staring gazes. This might seem to be a puzzling finding because in target-absent conditions the arrays were comprised of identical elements for both kinds of searches. Why should participants take longer to search through essentially identical arrays when looking for an averted-gaze target than they do when looking for a staring-gaze target? And why does this asymmetry only appear in target-absent searches? 
Horstmann et al. (2006) argue that the relative efficiency of searches in target-absent trials should be used as a diagnostic of the similarity between targets and distracters in searches for different targets through the same sets of distracters. For example, when participants are unable to locate a target speedily, as they would do on target-present trials, attention may be more likely to revisit the locations of items that seem to resemble the target of the search than items that are more dissimilar to the target. According to this view, then, the asymmetry in the efficiency of target-absent searches found in Experiment 2 is the result of a greater degree of similarity between averted-eyes targets and distracters than between the staring-eyes targets and distracters. The key question, however, is whether this undermines the finding that in target-present conditions search was equally efficient for staring-eyes targets as for averted-eyes targets. It seems that a higher target-distracter similarity in searches for averted-eyes targets than for staring-eyes targets ought to work in favor of more efficient search for the latter than the former in target-present as well as target-absent conditions, and yet no such difference was found. Nevertheless, in order to rule out the possibility that a mismatch in target-distracter similarity across searches for staring and averted-eyes targets somehow masks what would otherwise be a more efficient search for staring eyes (i.e., the stare-in-the-crowd effect), we conducted an additional experiment in which we attempted to control the degree of visual similarity between both kinds of targets and the distracters. 
Experiment 3: Equated homogeneous distracters
In Experiment 3, participants once again searched for either staring or averted-eyes targets in separate blocks of trials; however, rather than searching through an identical yet heterogeneous set of distracters in the two target conditions, participants searched for direct- or averted-gaze targets through sets of closed-eye distracters. In addition, the greyscale images used in Experiments 1 and 2 were replaced by thresholded images, which allowed us to manipulate the gaze direction of the averted-eyes targets so that they were slightly more extreme than in previous experiments. These manipulations were made in order to better equate the visual target-distracter similarity for staring-eyes and averted-eyes targets. If staring eyes are afforded some privileged status by attention because of the significance of this stimulus to the observer, then search for this type of gaze ought to be more efficient (i.e., less affected by increasing the array size) than search for averted eyes. Furthermore, having controlled target-distracter similarity and distracter-distracter similarity across the two target conditions, we would be confident that any such asymmetry is the result of the “meaning” of a direct gaze rather than any lower-level visual factor. 
Method
Participants
Twenty-nine undergraduate students from the University of Stirling served as participants in this experiment. 
Materials, design, and procedure
The greyscale staring eyes images that were used in Experiments 1 and 2 were subjected to a threshold transformation using Adobe Photoshop. The averted-eyes targets were created in Photoshop by cutting and pasting the pupils in the eyes of the staring-eyes images and moving them laterally within the eye in the same direction in which the head was rotated. In this way, we created images in which the gazer was either looking straight toward the viewer or had his gaze averted to either side, both with respect to the viewer and with respect to a frame of reference provided by the gazer's head angle. This is in contrast to the averted-eyes images used in Experiment 1 in which gaze was averted with respect to the viewer but was actually straight ahead from the point of view of the gazer. The distracting items comprised images of the same individual with closed eyes. The original greyscale versions of these images were created at the same time and under the same lighting conditions as the staring and averted-eyes images used as stimuli in Experiments 1 and 2. These closed-eye images were subjected to the same threshold transformation as was used with the target images. As with the target images, two versions of the closed-eye images were created by mirror-reversal: one with the face oriented to the left and one with the face oriented to the right. The thresholded images comprised the visual search arrays, which were identical in size and constructed in the same way as those in Experiments 1 and 2 with the exception that the closed-eye faces were used as the distracting items (see Figure 6). In all other respects, the design and procedure were identical to those of Experiments 1 and 2
Figure 6. Example stimulus arrays from Experiment 3 with (left panel) a staring-gaze target and (right panel) an averted-gaze target.
Results
Median correct RTs in each condition of the experiment were calculated for each participant, and the means of these medians are presented in Figure 7 along with the proportion of errors made in each condition. Inspection of this figure reveals that for both target-present and target-absent trials, searches for staring-eyes and averted-eyes targets were equally efficient. 
Figure 7. Mean correct reaction times, search slopes, and error rates for each condition of Experiment 3. Solid symbols and lines represent target-present displays, and open symbols and dashed lines represent target-absent displays. Circle symbols indicate trials in which staring gazes were the targets and square symbols when averted gazes were targets. Error bars indicate the standard error of the mean.
The median RT data were entered into a within-subjects ANOVA with target status, target gaze direction, and array size as factors. The results of this analysis provide support for the above observations. There was a significant main effect of target gaze direction, F(1, 28) = 56.48, MSE = 17,289.12, p < .001, with faster RTs in blocks of trials in which staring gazes were targets (M = 693 ms) than those in which averted gazes were targets (M = 725 ms). The analysis also yielded a significant effect of array size, F(1, 28) = 52.89, MSE = 9,352.00, p < .001, with slower RTs for search on larger (M = 755 ms) versus smaller arrays (M = 663 ms) and a significant main effect of target status, F(1, 28) = 56.48, MSE = 17,289.12, p < .001, as participants were faster to respond on target-present trials (M = 644 ms) than on target-absent trials (M = 774 ms). The effects of target status and array size were qualified by an interaction between these factors, F(1, 28) = 20.97, MSE = 5,285.90, p < .001. As is clear from an inspection of Figure 7, the source of this interaction is the larger effect of array size on target-absent trials than on target-present trials, which is consistent with a serial self-terminating search. Critically, the interaction between target gaze direction and array size did not approach statistical significance (p = 0.92), and neither did any of the other interactions (ps > 0.73). 
Participants made errors on just 4.75% of trials. As in Experiments 1 and 2, errors were slightly more likely to be misses on target-present trials (6.25%) than false alarms on target-absent trials (3.25%). Because error rates were low and there was no obvious evidence of a speed-accuracy trade-off, no further analyses were conducted on these data. 
Discussion
The results of this experiment provide no evidence in support of the suggestion that staring eyes somehow capture visual attention and therefore that faces with this type of gaze are located in a crowd more efficiently than are faces with an averted gaze. In Experiment 3 searches for faces with staring eyes and averted eyes were conducted through a homogeneous set of distracting faces, each of which had its eyes closed. Under these conditions, visual search performance was found to be equally efficient for the two types of searches; that is, search speed was slowed by increasing the size of the search arrays by an equivalent magnitude whether participants were searching for faces with staring or averted eyes. Moreover, this symmetry in search efficiency was observed for target-absent as well as target-present conditions, suggesting that the experiment was successful in equating target-distracter similarity across the two types of searches. It is therefore unlikely that some low-level perceptual property that differed systematically between the two types of searches could be masking a genuine search advantage for faces with staring eyes, which was a possible interpretation of the results of Experiment 2. 
It is also worth noting that searches for both kinds of targets were performed much more efficiently in this experiment compared with performance in the other visual search tasks of Experiments 1 and 2. In Experiment 2, for example, when participants searched for faces with staring and averted-gaze targets among heterogeneous distracter arrays, the mean search slope was 114 ms/item for staring-eyes targets and 112 ms/item for faces with averted eyes; the equivalent search slopes in Experiment 3 were, respectively, 12 ms/item and 13 ms/item. Presumably, this improvement in efficiency was mainly due to the homogeneity of the distracters but also because of a greater dissimilarity between targets and distracters in Experiment 3 compared with Experiment 2. However, despite the relatively flat search functions, this kind of search performance can, at best, only be described as “quite efficient” (Wolfe, 1998). In other words, the changes made to the stimuli were not sufficient for the targets to “pop out” of the displays. It is unlikely, therefore, that an absence of an efficiency advantage for staring eyes over averted eyes is due to ceiling performance in searches for the latter. 
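The contrast between the slopes of roughly 112-114 ms/item in Experiment 2 and 12-13 ms/item in Experiment 3 can be made concrete with a rough classifier. Note that the cut-offs below are conventional approximations of our own, not values taken from Wolfe (1998), who treats search efficiency as a continuum rather than discrete categories:

```python
def describe_efficiency(slope_ms_per_item):
    """Rough verbal label for a search slope (illustrative thresholds;
    Wolfe, 1998, describes a continuum, not fixed categories)."""
    if slope_ms_per_item < 10:
        return "efficient (near pop-out)"
    if slope_ms_per_item < 30:
        return "quite efficient"
    return "inefficient"
```

On this rough scheme, the Experiment 3 slopes of 12-13 ms/item fall in the "quite efficient" band, consistent with the description above, whereas the Experiment 2 slopes of over 100 ms/item are clearly inefficient; neither reaches the near-zero slopes characteristic of genuine pop-out.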
The findings of Experiments 2 and 3 clearly do not support the hypothesis that faces with staring eyes enjoy some kind of attentional bias that allows them to be processed more efficiently than averted gazes in crowded displays. Instead, it seems likely that the observed advantage for locating staring eyes versus averted-gaze targets in Experiment 1 can be attributed to different rates of scanning through the distracter sets used in the respective search conditions. This may be because the distracting items used in staring-gaze searches are more similar to one another than are the distracters used in averted-gaze searches. Alternatively, it may be that participants find it more difficult to search for a target through arrays containing faces with staring eyes than through arrays that only contain averted gazes, all else being equal (i.e., distracter-distracter similarity, target-distracter similarity), because attention is less readily disengaged from staring as opposed to averted gazes (Senju & Hasegawa, 2005). Experiments 4 and 5 examine the extent to which this disengagement effect is likely to have contributed to the stare-in-the-crowd effect found in Experiment 1. 
Experiment 4: Equated targets
In Experiment 4, participants searched for the same targets (upward-gazing eyes) through the distracter sets used in Experiment 1. In one condition, the distracters comprised averted and staring eyes, and in the other, they comprised averted and downward-gazing eyes. If the search asymmetry observed in Experiment 1 was caused by the presence of staring eyes in one distracter set (i.e., when faces with averted gaze were the targets) but not the other (when faces with staring eyes were the targets), an asymmetry should be observed in the present experiment even though search is always for the same target. Furthermore, this asymmetry ought to be observed in target-absent as well as target-present displays; in target-absent searches, observers must also search through a set containing staring gazes in one condition but not the other. 
Method
Participants
The participants in Experiment 4 were 12 students and visitors (five female) recruited from the University of Stirling campus. 
Materials and apparatus
The computer hardware and software were the same as in previous experiments. The face images were also the same but combined in a different manner to produce the visual search arrays. The target image always had upward-looking eyes either with a left or right face orientation. The distracters were either downward- and staring-eyes faces or downward- and averted-eyes faces of the same orientation as the target. An example array is shown in Figure 8
Figure 8. Example stimulus arrays from Experiment 4 with (left panel) an upward-gaze target and staring eyes and downward-gaze distracters and (right panel) an upward-gaze target and averted and downward-gaze distracters.
Design and procedure
This experiment had three factors: distracter gaze direction (staring or averted), target status (present or absent), and array size (four or eight items). The experiment had the same structure as Experiments 1, 2, and 3, i.e., four blocks of 32 trials preceded by eight practice trials. The four blocks were staring distracters/left orientation faces, staring distracters/right orientation faces, averted distracters/left orientation faces, and averted distracters/right orientation faces. The target of the search was always a face with upward-gazing eyes. Half of the trials in each block were target present, and half were target absent. The ordering of blocks and trials within blocks was completely randomized. The timing of the presentation of stimuli and feedback was exactly the same as in the previous experiments. Again, participants responded using two fingers from their preferred hand according to whether the target was present or absent. They were shown a picture of their target before each block of trials and were asked to respond as quickly and accurately as possible. 
Results
The data from one participant were excluded from all analyses because he performed at or below chance level (50%) in two conditions of the experiment. The interparticipant means of the median reaction times from the remaining participants are shown in Figure 9 along with the proportion of errors in each condition. For both target-present and target-absent conditions, there are clear search asymmetries: The effect of increasing array size is greater when participants searched for upward-gaze targets amongst distracter arrays containing faces with a staring gaze than when they searched for the same targets through distracter arrays containing only averted-gaze faces. 
Figure 9. Mean correct reaction times and error rates for each condition of Experiment 4. Participants always searched for upward gazes. Solid symbols represent target-present displays and open symbols target-absent displays. Circle symbols indicate trials in which staring gazes appeared in the distracter arrays and square symbols when the distracter arrays contained only averted gazes. Error bars indicate the standard error of the mean.
The results of a 2 (distracter gaze direction) × 2 (target status) × 2 (array size) ANOVA conducted on the median RT data supported these observations. Participants were faster to locate targets in arrays containing averted-gaze distracters (M = 1319 ms) than in arrays containing staring gazes as distracters (M = 1660 ms), F(1, 15) = 42.83, MSE = 86,730.13, p < 0.001. Overall search latency was shorter in displays containing a target (M = 1255 ms) than in those that did not (M = 1725 ms), F(1, 15) = 85.51, MSE = 82,360.82, p < 0.001. The main effect of array size was also significant, F(1, 15) = 189.58, MSE = 38,761.08, p < 0.001, with faster searches through the small (M = 1250 ms) compared with the large arrays (M = 1730 ms). However, these main effects were qualified by significant interactions between distracter gaze direction and target status, F(1, 15) = 6.17, MSE = 15,295.81, p < 0.05; target status and array size, F(1, 15) = 33.52, MSE = 23,652.74, p < 0.001; and most importantly, distracter gaze direction and array size, F(1, 15) = 22.43, MSE = 30,068.15, p < 0.001. Follow-up analysis of the latter interaction confirmed that increasing array size slowed performance for searches through arrays containing averted as well as direct gazes; however, increasing the size of the arrays from four to eight items slowed search performance by some 624 ms for arrays containing staring-gaze distracters but by only 334 ms for arrays containing only averted-gaze distracters. 
The error data for Experiment 4 are shown in Figure 9. Again, participants were much more likely to miss the presence of a target (7.8% of all target-present trials) than to make a false positive response (1% of all target-absent trials). Indeed, the participants did not make any errors at all in the target-absent trials with the larger array size. This may indicate that they were trading off speed in order to maintain accuracy with these arrays. However, this seems to have been the case for both types of distracter, so it cannot account for the more efficient search slopes obtained when participants searched through the averted-gaze distracter arrays. 
Discussion
Experiment 4 required participants to search for one type of target (upward eyes) while distracter types varied between blocks of trials. As in Experiment 1, the distracter sets consisted of either two different types of averted eyes in one condition or a combination of averted eyes and staring eyes in the other. Under these conditions, the search asymmetry observed in Experiment 1 was replicated: Search was more efficient with arrays containing only averted-gaze distracters than when the arrays contained faces with staring gazes. Given that the target of search was identical in each condition, it is tempting to conclude that the asymmetry was caused by relatively inefficient search through the distracter set that contained staring eyes. This evidence is consistent with the suggestion that, once attended, staring gazes retain attention for longer than averted gazes. 
However, it is worth considering two other explanations for the results of Experiment 4 that do not rely on staring gazes having any special status with regards to retaining visual attention. First, an asymmetry in favor of search through arrays containing averted-gaze distracters would also be obtained if these distracting items (laterally averted and downward gazes) were visually more similar to one another than were the distracters in the other search condition (staring and downward gazes). Where distracters are more similar to one another, targets may be found relatively swiftly because of an increase in the signal-to-noise ratio for the targets or perhaps because under these conditions nontargets can be easily rejected en masse. 
A second explanation for the pattern of results of Experiment 4 is that search was less efficient in arrays containing staring-gaze distracters than in arrays containing laterally averted gazes because of a mismatch in target-distracter similarity between these conditions. More specifically, it may be that the upward-gaze targets were visually more similar to the staring-gaze distracters than they were to the laterally averted–gaze distracters. Upward-gaze targets would therefore not stand out as well in arrays containing staring-gaze distracters as they would in arrays containing only averted-gaze distracters. All models of visual search discussed earlier would predict a reduced search efficiency under these circumstances. 
Clearly, mismatches in target-distracter similarity or in distracter-distracter similarity could have produced the search asymmetry in Experiment 4 independent of any special attention-retaining property of staring gazes in virtue of their biological significance. Experiment 5 was therefore designed to test whether the search through staring-eyes distracters is relatively less efficient than search through averted-gaze distracters under conditions in which target-distracter similarity and distracter-distracter similarity are better controlled across conditions. 
Experiment 5
In this experiment, participants searched for a cross-eyed target among a set of homogeneous direct-gaze distracters in one block of trials and among homogeneous averted-gaze distracters in a second block, thus controlling distracter-distracter similarity. The stimuli in this experiment were also created with a view to producing displays in which the target-distracter similarity was matched across both search conditions. An additional array size of two items was also included in order to obtain more precise estimates of the slopes of the functions relating reaction time to array size for each search condition in the experiment. 
Should search slopes for cross-eyed targets be flatter (i.e., less affected by array size) for averted-gaze distracters than for staring-gaze distracters, this would provide further evidence that the stare-in-the-crowd effect is actually a function of the distracting items rather than any prioritizing of staring gazes by visual attention. 
Method
Participants
These were 18 undergraduate students at the University of Stirling (12 females). All had normal or corrected-to-normal vision. 
Materials and apparatus
Digital photographs were taken of a female model with her head oriented to the right, gazing back toward her left (i.e., the staring-gaze item), to her right (the averted-gaze item), and while crossing her eyes. The eye regions were cut from each of these faces and pasted onto a standard head template using Adobe Photoshop. All faces were therefore identical save for the eye regions (see Figure 10 for examples of these images). Arrays of sizes two, four, and eight items were created using these faces. The arrays of sizes four and eight items were identical in size and spatial arrangement to those in Experiments 1, 2, 3, and 4. The items in the size two arrays always appeared in locations directly opposite one another; thus, there were four different types of arrays of this size. Two sets of target-absent arrays were created for each array size: one containing only staring-gaze items and another containing only averted-gaze items. In target-present arrays, a cross-eyed target replaced one of the distracter items such that, for each array size, the target appeared equally often in each possible location. 
Figure 10
 
Example stimulus arrays from Experiment 5 with (left panel) a cross-eyed target and staring-gaze distracters and (right panel) a cross-eyed target and averted-gaze distracters. (We were unable to obtain permission to publish images of the face used in the original arrays; however, the published figure represents an accurate rendering of the equivalent arrays from the experiment using the face of a different female individual from whom consent was obtained.)
Design and procedure
The materials were tested in a within-subjects design with target (present versus absent), distracter (averted eyes versus staring eyes), and array size (two, four, or eight items) as factors. Participants completed 192 trials, comprising 16 trials in each of the 12 cells of the design. Half of the participants completed two identical blocks of 48 trials with averted-gaze distracters, followed by two blocks of 48 trials with staring-gaze distracters. The other half of the participants completed the staring-gaze distracter blocks first. Trial order within each block was randomized. Participants completed 24 practice trials before each pair of experimental blocks. Other aspects of the procedure were identical to those of Experiments 1, 2, 3, and 4. 
Results
As in previous experiments, we first computed the median RT for each participant in each condition of the experiment. The interparticipant means of these median RTs are illustrated in Figure 11. We also obtained estimates of search slopes for the conditions of interest. Separate linear regressions were performed for each participant in each of the four experimental conditions obtained by crossing the target factor (present vs. absent) with the distracter factor (staring vs. averted gaze). Each regression had array size as the independent variable and RT as the dependent variable. Slopes were obtained from the regression equations, and the mean slope in each condition was then computed across participants. These slopes are also reported in Figure 11. It is clear from this figure that the search slope for cross-eyed gazes among staring-gaze distracters (116 ms/item for target-present trials) is quite similar to that for the same target among averted-gaze distracters (120 ms/item). Indeed, contrary to the experimental hypothesis, search is slightly more efficient through staring-gaze distracters for both arrays containing a target and those in which the target was absent. 
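The per-participant slope-estimation procedure described above can be sketched as follows; the RT values here are invented for illustration, not the experimental data:

```python
import numpy as np

array_sizes = np.array([2, 4, 8])

# Hypothetical median RTs (ms): one row per participant, one value per array size.
median_rts = np.array([
    [900, 1150, 1620],
    [870, 1100, 1540],
])

# Fit a separate linear regression (RT on array size) for each participant;
# np.polyfit with degree 1 returns [slope, intercept].
slopes = [np.polyfit(array_sizes, rts, 1)[0] for rts in median_rts]

# Mean search slope (ms/item) across participants for this condition.
mean_slope = float(np.mean(slopes))
```

Averaging per-participant slopes, rather than fitting one regression to the pooled data, preserves the within-subjects structure of the design.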
Figure 11
 
Mean correct reaction times and error rates for each condition of Experiment 5. Participants always searched for cross-eyed gazes. Solid symbols and lines represent target-present displays; open symbols and dashed lines represent target-absent displays. Circle symbols indicate trials in which staring gazes appeared in the distracter arrays and square symbols when the distracter arrays contained only averted gazes. The RT plot also displays linear trends obtained from linear regression analysis. Error bars indicate the standard error of the mean.
The median RT data were entered into a repeated-measures ANOVA with target (present vs. absent), distracter (averted vs. staring eyes), and array size (two, four, and eight items) as factors, the results of which supported the above observations. There were main effects of target, F(1, 23) = 138.83, MSE = 52,107.75, p < 0.001; distracter, F(1, 23) = 15.39, MSE = 50,412.73, p < 0.001; and array size, F(2, 46) = 358.43, MSE = 61,918.59, p < 0.001. These main effects were qualified by significant interactions between target and array size, F(2, 46) = 87.52, MSE = 20,547.45, p < 0.001, and between distracter and array size, F(2, 46) = 4.22, MSE = 15,697.56, p < 0.05, the latter interaction arising because of a greater effect of array size for displays with averted- versus direct-gaze distracters (i.e., contrary to the experimental hypothesis). The three-way interaction did not approach significance (p = 0.82). 
The error data for Experiment 5 are also shown in Figure 11. Once again, the overall error rate was low (6%), and participants were more likely to make miss errors (9.4% of all target-present trials) than false positive responses (3.2% of all target-absent trials). As is clear from Figure 11, there was no indication in these data that participants were trading speed for accuracy. 
Discussion
The results of this experiment failed to support the hypothesis that search efficiency would be poorer through arrays containing staring-gaze distracters than through averted-gaze distracters. In fact, the data suggest the opposite: Search was actually marginally more efficient through staring-gaze distracters than through averted-gaze distracters, although the difference was small. Moreover, this result was obtained in an experiment in which the distracters were homogeneous and the similarity between the target and distracters was matched across the search conditions. There is no evidence here that staring-gaze distracters somehow retain attention for longer than averted-gaze distracters, at least in visual search. Once again, as in Experiments 2 and 3, controlling the visual similarity between targets and distracters, on the one hand, and between distracting items on the other seems to eliminate any search asymmetry. 
General discussion
The present study was designed to investigate whether staring eyes draw attention to their location. Previous research (Senju et al., 2005; von Grünau & Anston, 1995) had reported a search asymmetry: Search for staring eyes among averted-gaze distracters was more efficient than search for averted eyes among staring gaze distracters. Experiment 1 in the present study replicated this finding. However, it is not clear whether this stare-in-the-crowd effect is due to attention somehow prioritizing a staring-gaze target during visual search or whether it arises because of a relatively inefficient search through distracting items that include staring gazes. Experiments 2 and 3 equated the distracters through which search for, respectively, staring-gaze and averted-gaze targets had to be conducted. Under these conditions, the apparent search advantage for staring eyes disappeared, suggesting that the stare-in-the-crowd effect observed in Experiment 1 is more likely due to the nature of the distracting items than some attentional priority for staring gazes. This conclusion was bolstered by the findings of Experiment 4, which compared search for an upward-gaze target through the two different distracting arrays used in Experiment 1: one containing averted gazes and one containing staring gazes. Under these conditions, the search asymmetry that had been observed in Experiment 1 returned. However, the suggestion that this asymmetry was the result of staring gazes retaining attention in comparison to averted gazes found no support from the results of Experiment 5. Here, search through staring-gaze distracters was no less efficient than search through averted-gaze distracters when the similarity between cross-eyed targets and, respectively, staring and averted distracters was matched. 
Taken together, Experiments 2, 3, 4, and 5 provide strong evidence that the search asymmetry observed in Experiment 1 and in previous studies (Senju et al., 2005; von Grünau & Anston, 1995) is a function of the relationship between the visual properties of targets and distracters and not due to any special attention-grabbing or attention-retaining properties of staring gazes. There was no evidence that staring eyes are prioritized, somehow drawing attention to their location: Search for the staring-eyes target was no more efficient than for the averted-eyes target when the distracters were equated in Experiments 2 and 3. Neither was there strong evidence that staring eyes somehow hold attention: Although searches were less efficient through staring-gaze distracters than through averted-gaze distracters in Experiment 4, this asymmetry disappeared and almost reversed when the target-distracter and distracter-distracter similarity was controlled between search conditions in Experiment 5. The ease with which a staring gaze can be found therefore depends upon the same set of variables that influence any kind of visual search: the ease with which targets can be discriminated from distracters and the degree of similarity between distracters. Such a conclusion is consistent with all of the models of visual search described in the Introduction. In Guided Search, these factors influence the bottom-up activation in the saliency map. In TVA, the effect of making targets and distracters more similar and distracters more heterogeneous will be to place more demands on both the feature-based and category-based selection mechanisms, which results in a reduction in search efficiency. Finally, in unlimited-capacity models, the effect of these manipulations will be to increase the degree of internal uncertainty that a target element in a display is a target and that a distracter is a distracter: There will be greater overlap between the internal response distributions for target-present and target-absent trials and a consequent deterioration in performance. 
A corollary of our findings is that, perhaps contrary to our intuitions about sensing when we are being watched, staring gazes are not processed outside of the focus of visual attention. Indeed, there is now converging evidence for the more general conclusion that gaze direction per se cannot be processed preattentively. In a series of experiments, Burton, Bindemann, Langton, Schweinberger, and Jenkins (2009) showed that while speeded directional judgments to pointing hands or left-right gazing faces were slowed by to-be-ignored incongruently directed pointing hands, no such interference was produced by to-be-ignored gazing faces or gazing eyes alone. Ricciardelli and Turatto (2011) asked participants to judge whether an easy-to-perceive or hard-to-perceive pair of eyes were looking to the left or right. Just prior to the eyes appearing, participants' focal attention was cued to either the location on the screen where the eyes were to appear or to an invalid screen location. They found that cue validity and target difficulty produced additive effects on RT, suggesting that the perceptual encoding and classification of left/right gazes requires input attention. Given these results and the present findings, it seems that, at least regarding their ability to capture attention, there is nothing particularly special about the eyes or the direction in which they are pointing. 
We also found no evidence that direct gazes were better able to retain attention than averted gazes. In Experiment 5, search for a cross-eyed target among staring-gaze faces was found to be equivalent in efficiency to search for the same target through averted-gaze faces. This seems to contradict findings from Senju and Hasegawa (2005), who showed that speeded key-press responses contingent on the identity of peripheral targets were slower if the stimulus at fixation was a face with a staring gaze as opposed to one with an averted gaze. Why should participants have relative difficulty disengaging attention from direct-gaze faces in Senju and Hasegawa's experiments but have no more difficulty engaging attention to and disengaging attention from staring-gaze distracters than averted-gaze distracters in visual search? One possibility is that the rise time of the disengagement effect may be quite long. In Senju and Hasegawa's studies, for example, participants fixated the central staring or averted-gaze faces for 500 ms before the to-be-responded-to peripheral target appeared. The search slopes in Experiment 5 were around 120 ms/item for target-present searches and 200 ms/item for target-absent searches. Thus, it may be that attention does not dwell for long enough on the direct-gaze faces in the visual search paradigm for the disengagement effect to play a role. 
Although our data led to the conclusion that there is nothing particularly special about staring gazes in terms of their influence on visual search, eye gaze is, arguably, a special stimulus in certain other respects: Having first been fixated, another's averted gaze has been found to trigger a subsequent shift of attention toward the gazed-at location (Driver et al., 1999; Friesen & Kingstone, 1998; Langton & Bruce, 1999), and staring gazes have been found to modulate a number of other face-related tasks. For example, direct gaze has been found to facilitate gender discrimination (Macrae, Hood, Milne, Rowe, & Mason, 2002), the categorization of certain facial expressions (Adams & Kleck, 2003; Bindemann, Burton, & Langton, 2008), and the encoding of faces into memory (Mason, Hood, & Macrae, 2004; Nakashima, Langton, & Yoshikawa, 2012). In their review of this body of work, Senju and Johnson (2009) argue that underpinning this set of effects is a subcortical pathway, including the superior colliculus, the pulvinar, and the amygdala, which rapidly detects eye contact and then influences subsequent activity through projections to other structures comprising “the social brain” (e.g., the superior temporal sulcus, the fusiform gyrus, and the medial prefrontal cortex). This they term the “fast track modulator model.” 
It is important to emphasize that our data do not speak against the existence of this kind of mechanism; our claim is rather that it does not operate to influence visual search. Instead, it may be that this mechanism operates to produce the overall RT advantage found for staring gazes in Experiments 1 and 2 and in other papers reporting the stare-in-the-crowd effect (Conty et al., 2006; Doi et al., 2009; Doi & Ueda, 2007; Palanica & Itier, 2011; Senju et al., 2005). If, as we have argued, attention is not guided preferentially to the location of a staring-gaze face, this subcortical route would have to operate to influence a stage or stages of processing after attention has selected the target item, somehow priming those mechanisms that operate to select and execute the appropriate key-press response. 
One problem with this explanation for the overall RT advantage for staring-gaze searches is that it is not obvious why the priming of mechanisms designed to extract socially relevant information from a face would be implicated in the selection and execution of a target-present key-press response in visual search. An alternative explanation for the overall RT effect is one that was rejected by Senju and Johnson (2009). This is that, once selected and categorized, a staring-gaze face activates arousal mechanisms in the brain, which then influence subsequent cognitive processing, including the speeded execution of a key-press response. However, this model faces at least two problems. First, the effect is absent in Experiments 3, 4, and 5; one would have thought that the mere presence of staring gazes in search displays, whether appearing as targets or distracters, would have produced some general heightening of arousal. Second, the overall RT advantage also appears for target-absent displays in Experiment 1. In other words, participants were quicker to respond “no” to displays in which no staring gazes appeared (target-absent displays in staring-gaze searches) than to displays in which several staring-gaze distracters will have appeared (target-absent displays in averted-gaze searches). The arousal model could perhaps be rescued if the overall RT advantage for searching for staring gazes was not actually produced by the appearance of direct gazes in the displays; participants may be generally faster to initiate the search process or may be more motivated to make a speeded response having completed the search when the target they have in mind (i.e., in working memory) is a face with a staring gaze than when it is a face with an averted gaze. 
A rather more prosaic explanation for the overall RT advantage for staring-gaze searches is that once the likely respective targets have been selected by attention (i.e., after termination of the search), there will be relatively greater certainty about the status of a staring-gaze stimulus as a target in search for a staring-gaze face than there is of an averted-gaze stimulus being the target in search for an averted-gaze face; one might wonder, for example, whether a potential averted-gaze target is the right kind of averted gaze, whereas there is no such ambiguity with a staring gaze. Likewise, when one has terminated a search for an averted gaze in a target-absent display, this uncertainty might prompt one to revisit certain items that might have been flagged as potential targets before a negative response is issued. Again, there may be less reason to engage in this checking process when searching in vain for a staring-gaze face. This model therefore explains the overall RT effects in target-present and target-absent displays of Experiments 1 and 2. The absence of this kind of effect in Experiments 3, 4, and 5 can be explained because there is little ambiguity about the identity of the targets in these studies. In Experiments 3 and 5, for example, the target is simply the odd one out; one does not have to deliberate about whether it is the right kind of odd one out. In Experiment 4, the target (upward gaze) remains the same across the entire experiment and may therefore be unambiguous for this reason. 
It is worth stressing that our main conclusion—that the stare-in-the-crowd effect arises from the particular choices of target and distracter stimuli rather than any special status of direct gazes—is also consistent with a theory that essentially eliminates the role of covert attention in visual search. Rosenholtz and colleagues (e.g., Rosenholtz, Huang, & Ehinger, 2012) see visual search phenomena as arising from an output of early vision, which, they argue, contains significant information loss. So, rather than postulating a selective mechanism governing access to a limited-capacity channel, their solution is to have early vision compress the signal, which is then sent through the limited-capacity channel. What goes into the channel is determined, not by covert selective attention, but by where one fixates one's eyes. In their view, on each fixation in a search task, vision computes a set of summary statistics over local pooling regions or “patches” (which may contain multiple items), the sizes of which grow with increasing distance from fixation, resulting in greater information loss in the periphery of vision. If the summary statistics in a peripheral region are sufficient to discriminate a target from a nontarget, then the observer's eyes will be guided toward the target; if this distinction cannot be made, then search continues without guidance. In this view, the stare-in-the-crowd effect observed in Experiment 1 presumably arises because the peripheral pooling regions in the search for staring gazes are more likely to yield statistics that distinguish target regions from nontarget regions than do the pooling regions in displays where averted gazes are the targets. When the summary statistics yielded by peripheral regions are better controlled across search conditions, as in Experiments 2 and 3, then search for staring and averted gazes becomes equivalently difficult. 
Again, it is the visual properties of the targets and distracters that are important rather than their statuses as self-directed versus other-directed gazes. 
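The eccentricity-dependent pooling described above can be illustrated with a toy sketch. The linear growth of region size with eccentricity and the specific scale factor are our own illustrative assumptions (loosely motivated by crowding results), not parameters taken from Rosenholtz et al.:

```python
def pooling_radius(eccentricity_deg, scale=0.5, minimum=0.25):
    """Toy pooling-region radius (deg): grows linearly with distance from
    fixation. The scale factor is an illustrative assumption."""
    return max(minimum, scale * eccentricity_deg)

def summary_stats(feature_values):
    """Mean and variance of a local feature (e.g., a gaze-direction signal)
    pooled over the items in one region. Once several items are pooled,
    individual values are no longer recoverable from these statistics."""
    n = len(feature_values)
    mean = sum(feature_values) / n
    var = sum((v - mean) ** 2 for v in feature_values) / n
    return mean, var

# Near fixation a region covers at most one face; in the periphery it covers
# several, so target and distracter signals are summarized together.
near = pooling_radius(1.0)   # small region near fixation
far = pooling_radius(10.0)   # much larger region in the periphery
```

On this scheme, search is easy precisely when the pooled statistics of target-containing regions differ reliably from those of distracter-only regions, which again depends on target-distracter and distracter-distracter similarity.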
A strength of this kind of theory is that it can account for empirical results that more traditional early-selection models struggle to explain, such as the extremely rapid categorization of the gist of a scene (Rousselet, Joubert, & Fabre-Thorpe, 2005) and rapid detection of objects, such as animals or vehicles within scenes (Kirchner & Thorpe, 2006; VanRullen & Thorpe, 2001). Indeed, in the real world, staring gazes will usually appear in such scenes where “distracters” will be far from the uniform stimuli used in our experiments. Is it possible that faces with staring gazes will capture (overt or covert) attention in more natural situations? There is ample evidence that observers tend to direct initial fixations to the eyes and heads of people depicted in these scenes (Birmingham, Bischof, & Kingstone, 2008; Cerf, Harel, Einhäuser, & Koch, 2008) and that visual saliency does not predict this behavior (Birmingham, Bischof, & Kingstone, 2009), suggesting instead that people fixate these regions because they are such a rich source of social information. However, it is unclear whether a face with a staring gaze will be more likely to attract a first fixation than a face with an averted gaze. Indeed, given the evidence that gaze direction cannot be processed in the absence of focal attention, we suspect not. Instead, fixations may first be directed toward the face/eyes regardless of where the face is looking. Subsequent fixations will then be oriented in the direction toward which the depicted person is gazing (Fletcher-Watson, Findlay, Leekam, & Benson, 2008). 
To conclude, the results of the present study suggest that, at least in visual search, staring gazes are not afforded any special status by visual attention; staring gazes are not prioritized by attention during visual search, and neither are these stimuli better able to retain attention than averted gazes. Instead, reports of the stare-in-the-crowd effect in the literature are more likely to be the result of failures to control distracter-distracter similarity across search conditions than any attention-grabbing property of staring gazes. 
Acknowledgments
This research was partly supported by an ESRC research grant (ES/1034803/1) awarded to SRHL. The authors thank Derrick Watson and an anonymous reviewer for their helpful comments on an earlier version of this manuscript and Alex McIntyre, who collected the data for Experiment 5. 
Commercial relationships: none. 
Corresponding author: Stephen R. H. Langton. 
Email: srhl1@stir.ac.uk. 
Address: School of Natural Sciences, University of Stirling, Stirling, UK. 
References
Adams R. B. Kleck R. E. (2003). Perceived gaze direction and the processing of facial displays of emotion. Psychological Science, 14 (6), 644.
Ando S. (2004). Perception of gaze direction based on luminance ratio. Perception, 33 (10), 1173–1184.
Anstis S. M. Mayhew J. W. Morley T. (1969). The perception of where a face or television “portrait” is looking. American Journal of Psychology, 82, 474–489.
Bindemann M. Burton A. M. Langton S. R. H. (2008). How do eye gaze and facial expression interact? Visual Cognition, 16 (6), 708–733.
Birmingham E. Bischof W. F. Kingstone A. (2008). Gaze selection in complex social scenes. Visual Cognition, 16 (2–3), 341–355.
Birmingham E. Bischof W. F. Kingstone A. (2009). Saliency does not account for fixations to eyes within social scenes. Vision Research, 49 (24), 2992–3000.
Bundesen C. (1990). A theory of visual attention. Psychological Review, 97 (4), 523.
Burton A. M. Bindemann M. Langton S. R. H. Schweinberger S. R. Jenkins R. (2009). Gaze perception requires focused attention: Evidence from an interference task. Journal of Experimental Psychology: Human Perception and Performance, 35 (1), 108–118.
Carrasco M. (2011). Visual attention: The past 25 years. Vision Research, 51 (13), 1484–1525.
Cave K. R. Batty M. J. (2006). From searching for features to searching for threat: Drawing the boundary between preattentive and attentive vision. Visual Cognition, 14 (4–8), 629–646.
Cerf M. Harel J. Einhäuser W. Koch C. (2008). Predicting human gaze using low-level saliency combined with face detection. In Platt J. Koller D. Singer Y. Roweis S. (Eds.), Advances in neural information processing systems (Vol. 20, pp. 241–248). Cambridge, MA: MIT Press.
Cline M. G. (1967). The perception of where a person is looking. American Journal of Psychology, 80, 41–50.
Conty L. Tijus C. Hugueville L. Coelho E. George N. (2006). Searching for asymmetries in the detection of gaze contact versus averted gaze under different head views: A behavioural study. Spatial Vision, 19 (6), 529–545.
Doi H. Ueda K. (2007). Searching for a perceived stare in the crowd. Perception, 36 (5), 773–780.
Doi H. Ueda K. Shinohara K. (2009). Neural correlates of the stare-in-the-crowd effect. Neuropsychologia, 47 (4), 1053–1060.
Dosher B. A. (1998). Models of visual search: Finding a face in the crowd. In Osherson D. N. Sternberg S. (Eds.), An invitation to cognitive science (Vol. 4, pp. 455–521). Cambridge, MA: MIT Press.
Dosher B. A. Han S. Lu Z. L. (2004). Parallel processing in visual search asymmetry. Journal of Experimental Psychology: Human Perception and Performance, 30 (1), 3–27.
Driver J. Davis G. Ricciardelli P. Kidd P. Maxwell E. Baron-Cohen S. (1999). Gaze perception triggers reflexive visuospatial orienting. Visual Cognition, 6 (5), 509–540.
Duncan J. Humphreys G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96 (3), 433.
Eckstein M. P. (2011). Visual search: A retrospective. Journal of Vision, 11 (5): 14, 1–36, http://www.journalofvision.org/content/11/5/14, doi:10.1167/11.5.14.
Fletcher-Watson S. Findlay J. M. Leekam S. R. Benson V. (2008). Rapid detection of person information in a naturalistic scene. Perception, 37 (4), 571–583.
Fox E. Lester V. Russo R. Bowles R. J. Pichler A. Dutton K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition & Emotion, 14 (1), 61–92.
Friesen C. K. Kingstone A. (1998). The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychonomic Bulletin & Review, 5 (3), 490–495.
Hershler O. Hochstein S. (2005). At first sight: A high-level pop out effect for faces. Vision Research, 45 (13), 1707–1724.
Horstmann G. Scharlau I. Ansorge U. (2006). More efficient rejection of happy than of angry face distractors in visual search. Psychonomic Bulletin & Review, 13 (6), 1067–1073.
Kirchner H. Thorpe S. J. (2006). Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46 (11), 1762–1776.
Langton S. R. H. Bruce V. (1999). Reflexive visual orienting in response to the social attention of others. Visual Cognition, 6 (5), 541–567.
Langton S. R. H. Watt R. J. Bruce V. (2000). Do the eyes have it? Cues to the direction of social attention. Trends in Cognitive Sciences, 4 (2), 50–59.
Macrae C. N. Hood B. M. Milne A. B. Rowe A. C. Mason M. F. (2002). Are you looking at me? Eye gaze and person perception. Psychological Science, 13 (5), 460.
Mason M. Hood B. Macrae C. N. (2004). Look into my eyes: Gaze direction and person memory. Memory, 12 (5), 637–643.
McElree B. Carrasco M. (1999). The temporal dynamics of visual search: Evidence for parallel processing in feature and conjunction searches. Journal of Experimental Psychology: Human Perception and Performance, 25, 1517–1539. [CrossRef] [PubMed]
Nakashima S. F. Langton S. R. H. Yoshikawa S. (2012). The effect of facial expression and gaze direction on memory for unfamiliar faces. Cognition & Emotion, 26, 1316–1325. [CrossRef] [PubMed]
Öhman A. Flykt A. Esteves F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130 (3), 466. [CrossRef] [PubMed]
Palanica A. Itier R. J. (2011). Searching for a perceived gaze direction using eye tracking. Journal of Vision, 11 (2): 19, 1–13, http://www.journalofvision.org/content/11/2/19, doi:10.1167/11.2.19. [PubMed] [Article]. [CrossRef] [PubMed]
Palmer J. Ames C. T. Lindsey D. T. (1993). Measuring the effect of attention on simple visual search. Journal of Experimental Psychology: Human Perception and Performance, 19 (1), 108. [CrossRef] [PubMed]
Palmer J. Verghese P. Pavel M. (2000). The psychophysics of visual search. Vision Research, 40 (10), 1227–1268. [CrossRef] [PubMed]
Ricciardelli P. Turatto M. (2011). Is attention necessary for perceiving gaze direction? It depends on how you look at it: Evidence from the locus-of-slack method. Visual Cognition, 19 (2), 154–170. [CrossRef]
Rosenholtz R. Huang J. Ehinger K. A. (2012). Rethinking the role of top-down attention in vision: Effects attributable to a lossy representation in peripheral vision. Frontiers in Psychology, 3, 1–15. [CrossRef] [PubMed]
Rosenholtz R. Huang J. Raj A. Balas B. J. Ilie L. (2012). A summary statistic representation in peripheral vision explains visual search. Journal of Vision, 12 (4): 14, 1–17, http://www.journalofvision.org/content/12/4/14, doi:10.1167/12.4.14. [PubMed] [Article] [CrossRef] [PubMed]
Rousselet G. Joubert O. Fabre-Thorpe M. (2005). How long to get to the “gist” of real-world natural scenes? Visual Cognition, 12 (6), 852–877. [CrossRef]
Scholl B. J. (2001). Objects and attention: The state of the art. Cognition, 80 (1–2), 1–46. [CrossRef] [PubMed]
Senju A. Hasegawa T. (2005). Direct gaze captures visuospatial attention. Visual Cognition, 12 (1), 127–144. [CrossRef]
Senju A. Hasegawa T. Tojo Y. (2005). Does perceived direct gaze boost detection in adults and children with and without autism? The stare-in-the-crowd effect revisited. Visual Cognition, 12 (8), 1474–1496. [CrossRef]
Senju A. Johnson M. H. (2009). The eye contact effect: Mechanisms and development. Trends in Cognitive Sciences, 13 (3), 127–134. [CrossRef] [PubMed]
Snodgrass J. G. Townsend J. T. (1980). Comparing parallel and serial models: Theory and implementation. Journal of Experimental Psychology: Human Perception and Performance, 6 (2), 330. [CrossRef]
Townsend J. T. (1990). Serial vs. parallel processing: Sometimes they look like Tweedledum and Tweedledee but they can (and should) be distinguished. Psychological Science, 1 (1), 46–54. [CrossRef]
Treisman A. M. Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12 (1), 97–136. [CrossRef] [PubMed]
VanRullen R. Thorpe S. J. (2001). Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects. Perception, 30 (6), 655–668. [CrossRef] [PubMed]
Verghese P. (2001). Visual search and attention: A signal detection theory approach. Neuron, 31 (4), 523–535. [CrossRef] [PubMed]
von Grünau M. Anston C. (1995). The detection of gaze direction: A stare-in-the-crowd effect. Perception, 24 (11), 1297–1313. [CrossRef] [PubMed]
Wolfe J. M. (1994). Guided search 2.0 a revised model of visual search. Psychonomic Bulletin & Review, 1 (2), 202–238. [CrossRef] [PubMed]
Wolfe J. (1998). Visual search. In Pashler H. (Ed.), Attention (pp. 13–73). Hove, UK: Psychology Press.
Wolfe J. M. (2007). Guided search 4.0: Current progress with a model of visual search. In Gray W. D. (Ed.), Integrated models of cognitive systems (pp. 99–120). Oxford: Oxford University Press.
Wolfe J. Horowitz T. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5 (6), 495–501. [CrossRef] [PubMed]
Wolfe J. M. Horowitz T. S. Kenner N. Hyle M. Vasan N. (2004). How fast can you change your mind? The speed of top-down guidance in visual search. Vision Research, 44 (12), 1411–1426. [CrossRef] [PubMed]
Figure 1. Example stimuli from von Grünau and Anston (1995) with (left panel) a staring-eyes target among leftward and rightward averted-eyes distracters and (right panel) a rightward averted-eyes target among staring and leftward averted-eyes distracters (figure reproduced with permission from Pion Ltd., London, www.pion.co.uk).
Figure 2. Example stimulus array from Experiment 1 with (left panel) a staring-eyes target and (right panel) an averted-eyes target.
Figure 3. Mean correct reaction times, search slopes, and error rates for each condition of Experiment 1. Solid symbols and lines represent target-present displays; open symbols and dashed lines represent target-absent displays. Circle symbols indicate trials in which staring gazes were the targets, and square symbols trials in which averted gazes were the targets. Error bars represent the standard error of the mean.
Figure 4. Example stimulus array from Experiment 2 with (left panel) a staring-eyes target and (right panel) an averted-eyes target, both with upward-gaze and downward-gaze distracters.
Figure 5. Mean correct reaction times, search slopes, and error rates for each condition of Experiment 2. Solid symbols and lines represent target-present displays; open symbols and dashed lines represent target-absent displays. Circle symbols indicate trials in which staring gazes were the targets, and square symbols trials in which averted gazes were the targets. Error bars indicate the standard error of the mean.
Figure 6. Example stimulus arrays from Experiment 3 with (left panel) a staring-gaze target and (right panel) an averted-gaze target.
Figure 7. Mean correct reaction times, search slopes, and error rates for each condition of Experiment 3. Solid symbols and lines represent target-present displays; open symbols and dashed lines represent target-absent displays. Circle symbols indicate trials in which staring gazes were the targets, and square symbols trials in which averted gazes were the targets. Error bars indicate the standard error of the mean.
Figure 8. Example stimulus arrays from Experiment 4 with (left panel) an upward-gaze target among staring-eyes and downward-gaze distracters and (right panel) an upward-gaze target among averted-gaze and downward-gaze distracters.
Figure 9. Mean correct reaction times and error rates for each condition of Experiment 4. Participants always searched for upward gazes. Solid symbols represent target-present displays and open symbols target-absent displays. Circle symbols indicate trials in which staring gazes appeared in the distracter arrays, and square symbols trials in which the distracter arrays contained only averted gazes. Error bars indicate the standard error of the mean.
Figure 10. Example stimulus arrays from Experiment 5 with (left panel) a cross-eyed target and staring-gaze distracters and (right panel) a cross-eyed target and averted-gaze distracters. (We were unable to obtain permission to publish images of the face used in the original arrays; however, the published figure represents an accurate rendering of the equivalent arrays from the experiment using the face of a different female individual from whom consent was obtained.)
Figure 11. Mean correct reaction times and error rates for each condition of Experiment 5. Participants always searched for cross-eyed gazes. Solid symbols and lines represent target-present displays; open symbols and dashed lines represent target-absent displays. Circle symbols indicate trials in which staring gazes appeared in the distracter arrays, and square symbols trials in which the distracter arrays contained only averted gazes. The RT plot also displays linear trends obtained from linear regression analysis. Error bars indicate the standard error of the mean.