Research Article  |   January 2010
The face-in-the-crowd effect: When angry faces are just cross(es)
Journal of Vision January 2010, Vol.10, 7. doi:10.1167/10.1.7
Citation: Carlos M. Coelho, Steven Cloete, Guy Wallis; The face-in-the-crowd effect: When angry faces are just cross(es). Journal of Vision 2010;10(1):7. doi:10.1167/10.1.7.

Abstract

A common theme running through much of the visual recognition literature is that faces are special. Many studies now describe evidence for the idea that faces are processed in a dedicated center in cortex. Studies have also argued for the presence of evolutionarily expedient pathways dedicated to the processing of certain facial expressions. Evidence for this proposal comes largely from visual search tasks which have established that threatening expressions are more rapidly detected than other expressions: the ‘face-in-the-crowd effect’. One open criticism of this effect is that it may be due to low-level visual artifacts, rather than biological preparedness. One attempt at controlling low-level differences has been to use schematic line-drawing versions of faces. This study aimed to discover whether there might be alternative issues with schematic stimuli. The first study replicated the face-in-the-crowd threat advantage for schematic faces, but also measured a comparable effect using stimuli composed of obliquely oriented lines. Similar results were achieved with these stimuli rotated, which had the effect of removing any residual resemblance to a face. The results suggest that low-level features probably underlie the face-in-the-crowd effect described for schematic face images, thereby undermining evidence for a search advantage for specific facial expressions.

Introduction
Certain visual stimuli provide cues which are crucial to an organism's survival. Many of these cues inform the organism of imminent danger, and are capable of provoking rapid responses such as recoil or blinking. Given the benefit of evoking a rapid as well as accurate response to such cues, it seems likely that evolution has fostered the development of systems capable of detecting danger as rapidly as possible. Specialized pathways have indeed been described for extremely rapid (20–40 ms) unidirectional responses such as the startle reflex (Yeomans, Li, Scott, & Frankland, 2002). The presence of very fast, specialized pathways for detecting imminent danger raises the possibility of other, more specific systems. Some researchers have suggested that visual stimuli which provoke a strong emotional response may be processed via rapid, dedicated systems capable of circumventing normal processing channels (e.g., LeDoux, 1996). In particular, some authors have argued that threatening facial expressions should be afforded this type of specialist processing (e.g., Esteves, Dimberg, & Öhman, 1994; Hansen & Hansen, 1988; LeDoux, 2003; Öhman, 2005). 
Based on the two-stage model of perception (Treisman, 1986) these researchers have argued that biologically relevant features might be scanned via a parallel, preattentive pathway. In particular, they suggest that negatively valenced or threatening facial stimuli might be processed in this way, in contrast to non-threatening facial stimuli which would be subject to resource-limited, serial processing, requiring focused attention (Hansen & Hansen, 1988; Niedenthal, 1990). They base their arguments, in part, on reports that non-threatening faces require longer to be found and identified than threatening ones (e.g., Fox et al., 2000; Tipples, Atkinson, & Young, 2002) and hence that humans process threatening stimuli in an accelerated or ‘quick-and-dirty way,’ as LeDoux (2003) describes it. 
Beyond the specific question of the accelerated processing of facial emotion, the processing advantage observed for threatening facial expressions (henceforth referred to as the ‘face-in-the-crowd’ effect) is taken by many as evidence for specialist processing of evolutionarily relevant stimuli, largely insulated from cognitive factors (Öhman & Mineka, 2001). Although all of the authors in this field are aware that a wide range of stimuli can come to rapidly trigger a fearful response, they would argue that faces and other biologically relevant stimuli are afforded special processing, enabling them to be detected quickly from birth. As Fox et al. (2000) state, ‘…humans are biologically prepared or “hard-wired” for expression recognition, especially for the recognition of anger or threat.’ It is sentiments of this type that have helped the face-in-the-crowd effect support the current, ‘special’ (Schubö, Gendolla, Meinecke, & Abele, 2006) status of faces, evidence for which has also been drawn from neuropsychological disorders such as prosopagnosia (Farah, 1990), imaging studies (Kanwisher, McDermott, & Chun, 1997), and single cell recording studies (Tsao, Freiwald, Tootell, & Livingstone, 2006). Hence the face-in-the-crowd effect acts as a cornerstone for two important ideas: that threatening stimuli receive privileged processing via dedicated detection systems, and that faces are processed and recognized by specialist systems in cortex. 
By investigating and ultimately challenging evidence for the schematic face-in-the-crowd effect, our work adds to studies based on real face stimuli. Taken together, this work challenges the idea that faces are afforded separate, or in some other way privileged, processing of their emotional content. 
The face-in-the-crowd effect
To better understand the advantage which angry faces enjoy in search tasks it is helpful to reflect on the discovery and subsequent testing of the effect. The first report of an advantage for angry faces appeared in a paper by Hansen and Hansen (1988). Their study used a standard visual search task in which the participants were instructed to detect the presence or absence of a specified target (e.g., an angry face) among irrelevant distractors (e.g., happy faces). The authors found 60 ms per item slopes for happy targets among angry distractors and 2 ms per item slopes for angry targets among happy distractors. They concluded that the results represented a preattentive, parallel search for information which specifies a threat. However, Purcell, Stewart, and Skov (1996) noted that the angry faces in the Hansen & Hansen study contained superfluous dark areas which were introduced when they transformed the Ekman and Friesen (1976) photographs of angry and happy faces into black-and-white sketches. When Purcell and colleagues removed the artifacts, no threat advantage was found. As Purcell et al.'s work shows, systematic changes in illumination, brightness, contrast etc. between stimulus categories (e.g. happy vs. angry) need to be carefully controlled in these types of task. 
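The slope figures quoted above (60 vs. 2 ms per item) are simply the gradient of the line relating mean reaction time to the number of items in the display: a steep slope is taken as evidence of serial search, a near-flat slope as evidence of parallel, preattentive search. A minimal sketch of how such a slope is estimated by least squares; the function name and the data values are ours, purely for illustration:

```python
# Estimate a visual-search slope (ms per additional item) by ordinary
# least squares over (set size, mean RT) pairs. Illustrative values only.

def search_slope(set_sizes, mean_rts):
    """Return (slope in ms/item, intercept in ms) of the RT-vs-set-size line."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical data resembling a serial search (~60 ms/item):
slope, intercept = search_slope([4, 9, 16], [700, 1000, 1420])
```

On these made-up numbers the fitted slope is 60 ms/item; a flat, parallel-search pattern would instead yield a slope near zero.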
Despite the early setbacks, studies have continued using real face stimuli based on grayscale or thresholded facial images. One attempt to counteract low-level explanations for the effect has been to invert (planar rotate) the faces. Previous studies with real faces have demonstrated that when faces are inverted, emotional expressions can no longer be rapidly identified (Thompson, 1980). The logic of inversion is that the low-level features remain the same, and so if the search advantage disappears, the low-level explanation cannot be valid. Although some early reports suggested that inversion removed the effect, more recent studies have produced equivocal results, with some continuing to report that the effect disappears after inversion (e.g. Williams & Mattingley, 2006), and others that it is slowed but still present after inversion (Calvo & Nummenmaa, 2008). The real face literature is further confused by the fact that although many studies report an advantage for angry faces (e.g. Williams & Mattingley, 2006), some report search advantages for happy faces (Calvo & Nummenmaa, 2008). This has led some authors to propose that low-level effects continue to undermine the apparently emotion-based effect. In particular, Calvo and Nummenmaa (2008) used a well-established model of bottom-up image saliency by Itti and Koch (2000), and found that they were able to predict search advantages for gray-scale images of faces using the model. 
An alternative approach to combating sources of low-level stimulus artifact in detection tasks has been to reduce the stimuli to their basic, schematic outlines, exemplified by the pioneering work on object recognition and scene analysis by Biederman and colleagues (Biederman, 1988; Biederman, Mezzanotte, & Rabinowitz, 1982). In a similar vein, and as a response to criticism of studies using real face stimuli, Öhman, Lundqvist, and Esteves (2001) looked for a threat advantage using schematic face stimuli. The authors compared angry, sad, happy and scheming faces and observed faster and more accurate detection of threatening faces. Unfortunately, in accordance with earlier studies of schematic stimuli (Nothdurft, 1993; White, 1995), Öhman et al. reported that the effect was found for both upright and inverted faces. Nonetheless, at least two other studies have reported that the threat advantage does disappear after inversion (Eastwood, Smilek, & Merikle, 2001; Fox et al., 2000). Investigations continue, many focusing on the use of schematic faces (e.g., Ashwin, Wheelwright, & Baron-Cohen, 2006; Mather & Knight, 2006; Reynolds, Eastwood, Partanan, Frischen, & Smilek, 2008), and in a recent, comprehensive re-examination of previous work, Horstmann (2007) found a consistent advantage for angry face targets among happy face distractors using a range of schematic faces described in earlier studies by Fox et al. (2000), Öhman et al. (2001), and White (1995). 
The studies presented in this paper test the hypothesis that it is the emotional content of faces that underlies the face-in-the-crowd effect for schematic faces. If it is the emotional content that matters, one would expect that similar-looking stimuli, in which the emotional content has been removed, would no longer produce wide variations in search times. To test this prediction we conducted three experiments. 
General methods
Participants were seated in a quiet room in front of a laptop computer at a viewing distance of approximately 60 cm. The stimuli were composed of nine individual schematic faces, with each image slightly offset from a regular grid pattern by up to 1 degree of visual angle in any direction (see Figure 1). Although by no means typical of schematic face studies, we included this random spatial offset for the same reason as Horstmann (2007), namely, to hinder any broad-level textural processing and to reduce possible flanker artifacts. In target-absent trials the search array consisted of nine identical stimuli; in target-present trials, the search target had a discrepant ‘emotional expression’ and appeared at each one of the nine locations in the array with equal probability. The stimuli were rendered in black against a white background. Each ‘face’ subtended approximately three degrees at the eye. Each experiment consisted of 10 blocks of 36 trials (360 trials in total). The images were drawn with anti-aliased lines, four pixels in width. Participants were tested individually and were instructed to use their dominant hand to press one key (“n”) if no discrepant stimulus was present, and another key (“b”) if a discrepant stimulus was present. Detailed written instructions were presented on the computer screen beforehand and made no reference to emotional expressions. Participants were allowed to interrupt the task and take a short break if they wished. All participants were encouraged to respond as quickly and as accurately as possible. Each trial began with a fixation cross (+) presented at the center of the computer screen for 500 ms. This was immediately followed by the presentation of the search array, which was displayed until the participant responded, or until 5000 ms had elapsed (in which case an error was recorded). The project was approved by the University of Queensland's Behavioral and Social Sciences Ethical Review Committee. 
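The staggered array can be described procedurally. The sketch below generates the nine jittered item positions; the grid spacing value, the function name, and the polar sampling of the offset are our assumptions for illustration, since the paper specifies only the maximum 1-degree displacement:

```python
import math
import random

def array_positions(grid_step_deg=4.0, jitter_deg=1.0, seed=None):
    """Nine (x, y) positions in degrees of visual angle: a regular 3 x 3
    grid with each item displaced by up to jitter_deg in any direction.
    grid_step_deg is an assumed spacing, not a value from the paper."""
    rng = random.Random(seed)
    positions = []
    for row in range(3):
        for col in range(3):
            r = rng.uniform(0.0, jitter_deg)         # offset magnitude
            theta = rng.uniform(0.0, 2.0 * math.pi)  # offset direction
            positions.append((col * grid_step_deg + r * math.cos(theta),
                              row * grid_step_deg + r * math.sin(theta)))
    return positions
```

Seeding the generator makes a given array layout reproducible across participants or replications.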
Figure 1
 
Example of the staggered 3 × 3 stimulus matrix used in all experiments. In this case a target-present trial from Experiment 1 is shown.
Experiment 1
Background
The first study was designed to recreate the standard face-in-the-crowd effect for traditional fear-relevant face stimuli and to contrast it with that obtained with similar, non-face stimuli. 
Participants
The research participants were 20 students (7 females, 13 males) recruited at the University of Queensland. Participants were aged between 21 and 42 years (mean = 24.6 years) and had normal or corrected to normal eyesight. They were paid AUD$10 for their participation and agreed to participate in the experiment on an informed consent basis. 
Stimuli
In the first condition, targets consisted of schematic facial expressions, henceforth referred to as the ‘expression’ condition. The stimuli were similar to the angry and happy schematic faces used by Öhman et al. (2001). In the control or ‘abstract’ condition we used stimuli in which the lines were arranged either concentrically or radially (see Figure 2). These lines corresponded precisely to the eyebrows of the facial stimuli reflected along the horizontal axis of the circular outline. Testing of the two types of stimuli was blocked and counterbalanced across participants. 
Figure 2
 
Abstract versions of the happy (left) and angry (right) faces.
Results
Outliers in the reaction time data were defined as points lying more than ±3 SD from each individual participant's overall mean. Data from timed-out trials (≥5000 ms before a response was recorded), trials with outliers, and trials with incorrect responses did not contribute to the reaction time analyses. Approximately 11% of the data were rejected under these criteria. 
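Concretely, the exclusion rule can be sketched as follows. The trial representation and function name are ours, and we assume, as one plausible reading, that the ±3 SD band is computed from all of a participant's recorded reaction times before exclusion:

```python
from statistics import mean, stdev

def usable_rts(trials, timeout_ms=5000):
    """Return reaction times from correct, non-timed-out trials lying
    within +/-3 SD of the participant's overall mean RT."""
    all_rts = [t["rt"] for t in trials]
    m, sd = mean(all_rts), stdev(all_rts)
    return [t["rt"] for t in trials
            if t["correct"]                     # drop incorrect responses
            and t["rt"] < timeout_ms            # drop timed-out trials
            and abs(t["rt"] - m) <= 3 * sd]     # drop +/-3 SD outliers
```

Applied per participant, the surviving reaction times then feed the repeated-measures analyses reported below.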
A three-way ANOVA was conducted on the remaining reaction time data, with target presence, stimulus type, and emotional valence serving as repeated-measures factors. Robust main effects were found for all three factors. Overall, target-present trials elicited faster responses (M = 1570.12, SEM = 93.73) than target-absent trials (M = 1820.02, SEM = 98.23), F(1,19) = 27.43, p < .001, ηp² = .59; responses to abstract stimuli (M = 1612.04, SEM = 94.95) were faster than responses to schematic faces (M = 1778.10, SEM = 99.16), F(1,19) = 8.89, p < .01, ηp² = .32; and responses to ‘angry’ search targets (M = 1586.92, SEM = 94.18) were faster than those to ‘happy’ targets (M = 1803.22, SEM = 93.45), F(1,19) = 75.84, p < .001, ηp² = .80. The non-significant two-way interactions of target presence × stimulus type (F(1,19) = .455, p > .05, ηp² = .02) and target presence × ‘emotional valence’ (F(1,19) = 1.97, p > .05, ηp² = .09), and the non-significant three-way interaction (F(1,19) = 2.69, p > .05, ηp² = .12), indicated very little variation in the overall pattern of results. 
The only qualification of lower-order effects was the two-way interaction of stimulus type and ‘emotional valence’ (F(1,19) = 6.85, p < .05, ηp² = .26), which showed that, averaged over the two conditions of target presence, the ‘threat advantage’ for abstract stimuli (angry: M = 1469.30, SEM = 101.02; happy: M = 1704.55, SEM = 104.12) was approximately twice as great as that for schematic faces (angry: M = 1754.79, SEM = 103.13; happy: M = 1851.65, SEM = 103.50). Complete results, including pairwise comparisons between the two levels of emotional valence, are displayed in Figure 3. The greater effects for abstract stimuli were somewhat surprising, but converge with the observations of Horstmann and Bauland (2006) and Horstmann, Borgstedt, and Heumann (2006), who noted that stronger threat advantages tended to occur with less complex stimuli. These results strongly suggest that the search advantage has little to do with emotional valence, because an effect of equal, or indeed greater, amplitude can be elicited by abstract stimuli sharing the same basic structural features as schematic faces. 
Figure 3
 
Reaction time results from Experiment 1, revealing a clear advantage for angry faces over happy faces.
We note that the discrepant angry target face showed a mean latency of 1601 ms, whereas the discrepant happy target face showed a mean latency of 1766 ms. Öhman et al. (2001) reported latencies of 1,172 ms and 1,300 ms, respectively. In general, reaction times were of the order of 450 ms slower than in Öhman et al.'s studies. This is most likely due to the use of irregular grids, as well as to the use of ‘happy’ distractors instead of neutral ones. 
Search accuracy was high overall. A 2 × 2 × 2 repeated measures ANOVA showed that the pattern of results generally mirrored that for reaction time, and there was no evidence of speed-accuracy tradeoffs. The results are summarized in Table 1. Accuracy was better for ‘angry’ stimuli overall, and this trend was observed for all combinations of target presence and stimulus type. A significant three-way interaction indicated that the ‘threat advantage’ was particularly pronounced for abstract stimuli with the search target present (see Table 1). The ‘emotional valence’ pairwise difference was not significant for schematic stimuli, although the trend was consistent with a threat advantage. 
Table 1
 
Results of three-way factorial ANOVA conducted on error rates, and mean error rates for the full factorial design.
Source SS df MS F p ηp²
Stimulus Type 0.001 1 0.001 0.531 .476 0.029
Error 0.036 18 0.002
Emotional Valence 0.052 1 0.052 13.388 .002 0.427
Error 0.069 18 0.004
Target Presence 0.153 1 0.153 34.356 1E-05 0.656
Error 0.080 18 0.004
Type * Valence 0.008 1 0.008 2.978 .102 0.142
Error 0.048 18 0.003
Type * Presence 4E-05 1 4E-05 0.014 .906 0.001
Error 0.052 18 0.003
Valence * Presence 0.003 1 0.003 4.493 .048 0.200
Error 0.014 18 0.001
Type * Valence * Presence 0.011 1 0.011 4.457 .049 0.198
Error 0.045 18 0.002

                    Absent                          Present
           Abstract       Schematic         Abstract       Schematic
        Angry   Happy   Angry   Happy    Angry   Happy   Angry   Happy
Mean    0.910   0.885   0.906   0.876    0.872   0.794   0.836   0.822
SEM     0.051   0.051   0.051   0.052    0.051   0.050   0.049   0.050
Experiment 2
Background
A potential criticism of the first experiment is that the abstract versions of the faces are indeed abstract, but may still retain some level of basic similarity to their associated facial expression. In other words, the ‘X’ shape resembles an angry face due to the orientation and position of the lines. It is therefore conceivable that the cross shape is able to activate threat-specific mechanisms. This seems unlikely, because one would expect the overall ‘face-in-the-crowd’ effect to be smaller for the abstract faces, rather than larger, as it actually was. Nonetheless, this doubt prompted a second experiment. 
By rotating the internal components of the stimuli by 45 degrees it was possible to retain the general featural form of the original abstract stimuli, whilst destroying any correspondence to an upright facial configuration (see Figure 4—and appendix on valence testing of our stimuli). If the new stimuli still provided an advantage for the pattern with lines perpendicular to the circular edge over the pattern with lines parallel to the edge, this would undermine the criticism raised above. Note also that in the first experiment the within-subjects design meant that subjects were exposed to both faces and non-faces during testing. This may have encouraged them to form a face-like interpretation of the cross and diamond-shaped abstract stimuli. This problem was mitigated in the following experiment, which used only abstract stimuli. 
Figure 4
 
The stimuli used in Experiment 2, consisting of rotated versions of the happy (left) and angry (right) abstract stimuli used in Experiment 1.
Participants
Participants were 15 new subjects (9 females, 6 males) recruited from the same population and treated identically to those in Experiment 1. Participants were aged between 18 and 33 years (mean = 26.5 years) and had normal or corrected to normal eyesight. The procedure differed from the previous experiment in only two ways: 1) the schematic face condition was replaced by abstract stimuli rotated by 45 degrees; 2) trials were presented in a fully randomized order over ten experimental blocks, rather than counterbalanced by stimulus type. 
Results
Approximately 7% of the data were rejected under the data exclusion criteria described in Experiment 1. The remaining data were subjected to a three-way repeated-measures ANOVA. Again, first-order effects explained most of the variability in reaction time, with only minor qualification by the three-way interaction. Target-present trials were responded to more quickly than target-absent trials, F(1,14) = 61.86, p < .001, ηp² = .81; rotated stimuli resulted in faster RTs than upright stimuli, F(1,14) = 47.623, p < .001, ηp² = .77; and ‘angry’ configurations resulted in faster RTs than ‘happy’ ones, F(1,14) = 111.30, p < .001, ηp² = .89. Marginal means for the main effects are displayed in Table 2. 
Table 2
 
Marginal means for the three main effects.
Mean SEM
Target Presence Absent 1701.00 86.30
Present 1377.22 80.27
Stimulus Rotation Cardinal (rotated) 1444.61 72.66
Oblique (face-like) 1633.61 90.24
Emotional Valence Angry 1458.45 85.28
Happy 1619.77 76.73
The very small effect sizes for each of the two-way interactions again suggested that the pattern of results varied little across conditions (target presence × stimulus type, F(1,14) = .03, p > .05, ηp² = .01; target presence × emotional valence, F(1,14) = 1.13, p > .05, ηp² = .07; stimulus type × emotional valence, F(1,14) = .00, p > .05, ηp² = .01). The three-way interaction (F(1,14) = 4.48, p < .05, ηp² = .26) indicated minor variation in the magnitude of the threat advantage according to target presence; a larger effect was found for stimuli of cardinal rotation when the search target was absent, whilst oblique stimuli resulted in a larger effect when the target was present (see Figure 5). However, this result should be interpreted in the context of the much more powerful first-order effects, particularly that of stimulus type. Cardinally oriented stimuli elicited much faster responses overall, so a conclusion that a larger ‘threat’ advantage occurs for obliquely oriented stimuli because they are more inherently ‘face-like’ is likely to be incorrect. The results also speak against another possible confound in the first experiment, namely that downward-pointing “V” geometric configurations are related to threat (Aronoff, Barclay, & Stevenson, 1988; Larson, Aronoff, & Stearns, 2007), and that the results might therefore be due to latent threat signals in the abstract angry stimulus. See the appendix for a more detailed study of this issue and stimulus valence in general. 
Figure 5
 
Reaction time results from Experiment 2, in which obliquely and cardinally oriented abstract stimuli were used.
Accuracy was examined with a 2 × 2 × 2 repeated measures ANOVA. Again, the overall concordance of reaction time and error rate results indicated no speed-accuracy tradeoff. ANOVA results and cell means are presented in Table 3. Variance in accuracy was explained almost entirely by main effects; the significant two-way interaction between ‘emotional valence’ and target presence merely showed that the ‘anger’ superiority effect was greater for present search targets (p = .001) than for absent search targets (p = .027). 
Table 3
 
Results of three-way factorial ANOVA conducted on error rates, and cell means for the complete factorial design. Significant pairwise differences of emotional valence are shown in bold type.
Source SS df MS F p ηp²
Stimulus Type 0.007 1 0.007 8.684 .011 0.383
Error 0.011 14 0.001
Emotional Valence 0.028 1 0.028 21.689 3E-04 0.608
Error 0.018 14 0.001
Target Presence 0.082 1 0.082 24.390 2E-04 0.635
Error 0.047 14 0.003
Type * Valence 0.001 1 0.001 1.413 .254 0.092
Error 0.006 14 0.000
Type * Presence 0.001 1 0.001 0.638 .438 0.044
Error 0.012 14 0.001
Valence * Presence 0.008 1 0.008 5.985 .028 0.299
Error 0.019 14 0.001
Type * Valence * Presence 0.000 1 0.000 0.234 .636 0.016
Error 0.002 14 0.000

                    Absent                          Present
           Rotated        Facelike          Rotated        Facelike
        Angry   Happy   Angry   Happy    Angry   Happy   Angry   Happy
Mean    0.986   0.977   0.981   0.961    0.955   0.912   0.939   0.889
SEM     0.004   0.009   0.005   0.009    0.012   0.016   0.015   0.020
Experiment 3
Background
As a final control, the third experiment sought to reproduce the effects of the previous experiment using curved lines generated from the mouth part of the facial stimuli. This is an important further control because mouths (in the absence of eyebrows) have been shown to be sufficient to obtain the face-in-the-crowd effect (e.g. Eastwood et al., 2001; Fox et al., 2000). One reason for thinking that the mouth parts alone might produce an effect is that in an angry face the mouth and eyebrows share the property that their ends are arranged perpendicularly to the circular surround. In contrast, the eyebrows and mouth of happy faces are concentric with the circular surround. There is also precedent in the literature for obtaining search time differences using this type of stimulus (e.g. Horstmann, Borgstedt et al., 2006). The actual stimuli used appear in Figure 6. 
Figure 6
 
Stimuli using curved mouth-like elements in a standard orientation (top row) and rotated by 90 degrees (bottom row).
Participants
Participants were 15 new subjects recruited from the same population and treated identically to those in Experiments 1 and 2. Participants were aged between 17 and 24 years (mean 19.2 years) and had normal or corrected to normal vision. The equipment and the general procedures were the same as those used in Experiments 1 and 2
Results
Adopting the same criteria as in previous experiments, 8% of the data were excluded, and a three-way repeated-measures ANOVA was conducted. Robust main effects were found for target presence, F(1,14) = 48.54, p < .001, ηp² = .78, and the curvature of the internal features (‘valence’), F(1,14) = 83.01, p < .001, ηp² = .86, but not stimulus type, F(1,14) = 1.68, p > .05, ηp² = .11. Marginal means are displayed in Table 4. 
Table 4
 
Marginal means for the three main effects.
Mean SEM
Target Presence Absent 1701.00 86.30
Present 1377.22 80.27
Stimulus Rotation 90 deg (rotated) 1444.61 72.66
0 deg (face-like) 1633.61 90.24
Mouth part ‘valence’ Angry 1458.45 85.28
Happy 1619.77 76.73
As expected, target-present trials elicited faster responses than target-absent trials, and an advantage was found for ‘angry’ configurations. There was no reliable overall effect attributable to rotation, but the two-way interactions involving this factor showed that the effect was dependent on both ‘valence,’ F(1,14) = 5.41, p < .05, ηp² = .28, and target presence, F(1,14) = 7.50, p < .05, ηp² = .35. The two-way interaction of ‘emotional valence’ and target presence was also robust, F(1,14) = 17.51, p < .001, ηp² = .56, but the three-way interaction was not, F(1,14) = 1.77, p > .05, ηp² = .11. The two-way interactions involving ‘valence’ were both ordinal; reaction times were consistently lower for ‘angry’ configurations, but there was some variation in the magnitude of the effect, and any interpretation attached to the target presence × rotation interaction has no bearing on the ‘threat advantage’ phenomenon. The complete results appear in Figure 7, along with pairwise comparisons for ‘valence.’ The striking similarity of these results to those of the previous experiment strongly suggests that stimulus configuration is responsible for the lower reaction times associated with angry emotional expressions. 
Figure 7
 
Reaction time results from Experiment 3.
Again, an analysis of error rates (see Table 5) revealed no evidence of speed-accuracy tradeoffs. Results were consistent with the reaction time data, with the exception of present search targets with a face-like orientation, where no advantage was found. However, performance was near ceiling in both of these conditions. 
Table 5
 
Three-way ANOVA results for error rates, and cell means for the significant three-way interaction. Significant pairwise differences for emotional valence are shown in bold type.
Source SS df MS F p ηp²
Stimulus Type 0.012 1 0.012 4.369 .055 0.238
Error 0.038 14 0.003
Emotional Valence 0.022 1 0.022 9.711 .008 0.41
Error 0.032 14 0.002
Target Presence 0.097 1 0.097 20.84 4E-04 0.598
Error 0.065 14 0.005
Type * Valence 0.002 1 0.002 1.212 .290 0.08
Error 0.019 14 0.001
Type * Presence 0.018 1 0.018 12.941 .003 0.48
Error 0.02 14 0.001
Valence * Presence 8E-05 1 8E-05 0.088 .771 0.006
Error 0.013 14 0.001
Type * Valence * Presence 0.009 1 0.009 8.201 .013 0.369
Error 0.015 14 0.001

                    Absent                          Present
           Rotated        Facelike          Rotated        Facelike
        Angry   Happy   Angry   Happy    Angry   Happy   Angry   Happy
Mean    0.979   0.960   0.984   0.945    0.913   0.863   0.933   0.932
SEM     0.005   0.014   0.006   0.010    0.017   0.032   0.015   0.013
Discussion
This paper has considered the well-established speed advantage associated with searching for schematic images of angry faces compared with searching for neutral or happy ones, otherwise known as the face-in-the-crowd effect. In particular, the experiments were designed to test whether it is the emotional characteristics of schematic faces that are responsible for the search advantage. On the basis of three experiments we have established that stimuli derived from the faces, but which have no strong valence or emotional content, can produce effects equal to or greater than the effect obtained with the schematic faces. As a result we would argue that it is not the facial or emotional content of the faces that underlies the search advantage, but rather low-level features. Some authors claim to have discounted a low-level explanation for either real or schematic faces by using face inversion (planar rotation) as a control. However, as we describe in the Introduction, results from these types of experiments have proven controversial and unreliable. In our case, it is worth noting that the rotational symmetry of our abstract stimuli precludes the possibility of attenuating the search advantage through image inversion. For real face stimuli, inversion slightly attenuates the search advantage rather than abolishing it (Calvo & Nummenmaa, 2008). Whether non-symmetric, abstract stimuli would produce similar effects when inverted is a question worthy of future investigation. 
One issue which this study has not tackled directly is the stage in the task at which the search advantage manifests itself. One possibility is that ‘angry’ stimuli alter the efficiency of pre- or post-stimulus analysis. Alternatively, they may alter the efficiency of the search process itself—i.e. the speed with which candidate targets are selected. One way of answering this question would be to alter set size. We have not manipulated set size ourselves, but have no reason to think that results would differ from those obtained using schematic faces, where, despite some early reports of parallel search for angry faces (e.g. Hansen & Hansen, 1988), work has generally reported equivalent slopes for angry and sad faces (Fox et al., 2000; Horstmann & Bauland, 2006). Results of this type appear to support the hypothesis that the major component of the search advantage comes from altering the efficiency of analyzing the stimuli, not in speeding the search phase per se. On the other hand, our target-absent results revealed that participants were quicker to verify the absence of an ‘angry’ stimulus in an array of ‘happy’ stimuli, than to verify the absence of a ‘happy’ face among ‘angry’ ones. This difference seems to imply that the ‘angry’ stimuli have some capacity to grab attention, which might be taken to imply ‘pop-out’. That would appear to predict shallower search slopes. One way of reconciling these results with the data from set-size experiments would be to regard the attention-grabbing ability of ‘angry’ stimuli as lying in their resistance to disengagement of attention—rather than in grabbing attention in the first place (cf. Horstmann, Scharlau, & Ansorge, 2006). Ultimately, further testing will be required to answer this question fully. 
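The set-size logic above can be made concrete: search efficiency is conventionally estimated as the slope of reaction time against display size, with near-zero slopes taken as evidence of parallel, ‘pop-out’ search. A minimal sketch (with invented numbers, not data from this study):

```python
import numpy as np

def search_slope(set_sizes, mean_rts):
    """Estimate search efficiency as the least-squares slope of mean
    reaction time (ms) on display set size. A slope near 0 ms/item
    suggests parallel ('pop-out') search; substantially positive
    slopes indicate serial, item-by-item search."""
    slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
    return slope, intercept

# Hypothetical data: RT grows by ~25 ms for each added distractor.
sizes = [4, 9, 16]
rts = [620.0, 745.0, 920.0]
slope, intercept = search_slope(sizes, rts)  # slope = 25.0 ms/item
```

On the stimulus-analysis account sketched above, ‘angry’ and ‘happy’ targets would produce equivalent slopes, with only the intercepts differing.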
In general, our low-level explanation mirrors that of Purcell and Stewart (2002) who suggested that the orientation of the internal features might promote differences in detection speed. The crucial element of schematic face stimuli appears not to be their threat content, but rather the orientation of the internal features relative to the outside edge of the surround. It turns out that features oriented perpendicularly to the edge (and which therefore extend radially) are detected more quickly than those with features organized concentrically. The precise reason for the advantage of perpendicular over concentric edges is unclear. One possibility is that perpendicular lines are more ‘salient’. As we mentioned earlier, Calvo and Nummenmaa (2008) report that the Itti and Koch (2000) saliency model is capable of explaining expression detection in gray-level images—albeit happy rather than angry faces. We wondered if this result would carry over to the stimuli used in this paper, and ran the Itti & Koch simulations on our stimuli. In practice the model almost invariably found the concentric lines of the ‘happy’ stimuli more interesting than the radial lines of the ‘angry’ stimuli, which although consistent with Calvo & Nummenmaa's results, is the opposite of what one would have predicted from the behavioral studies reported here. In light of this we need a more sophisticated explanation for the search advantage seen for schematic stimuli. One possibility is that perpendicularly arranged edges form ‘T’ junctions with the surround. It is known that preattentive systems in humans are tuned to junctions of this type (Julesz, 1981) and that specialist detectors exist throughout visual cortex (e.g. Tanaka, Saito, Fukada, & Moriya, 1991). It is also the case that higher-order edge conjunctions (textons in Julesz's terminology) do not form part of the Itti & Koch saliency model. 
The suggestion that it is the ‘T’ junctions that matter is lent more credence by the fact that the face-in-the-crowd effect disappears if the face-like stimuli lack a facial surround (Purcell & Stewart, 2005; Schubö et al., 2006). 
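The radial-versus-concentric distinction at the heart of this account is purely geometric and easy to operationalize. The sketch below (our own illustration, not an analysis used in the paper) classifies a line feature by the angle between its orientation and the radial direction at its midpoint; features within 45 degrees of radial are the ones that meet a circular surround in ‘T’ junctions:

```python
import math

def feature_alignment(p1, p2, center):
    """Classify a straight-line feature as 'radial' (roughly perpendicular
    to a circular surround centered at `center`, hence T-junction-forming)
    or 'concentric' (roughly parallel to the surround)."""
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    feat_angle = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    radial_angle = math.atan2(my - center[1], mx - center[0])
    # Fold the angular difference into [0, 90] degrees (lines are unsigned).
    diff = abs(feat_angle - radial_angle) % math.pi
    diff = min(diff, math.pi - diff)
    return 'radial' if math.degrees(diff) < 45 else 'concentric'

# An 'angry'-brow-like line slanting toward the outline vs. a tangential,
# 'happy'-mouth-like chord (hypothetical coordinates, face centered at 0,0).
brow = feature_alignment((1.0, 0.2), (2.0, 0.1), (0.0, 0.0))
mouth = feature_alignment((-0.5, 1.0), (0.5, 1.0), (0.0, 0.0))
```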
In this study we have focussed on testing happy and angry facial expressions. Given our low-level explanation, it is instructive to consider what the effect of using intermediate versions of the stimuli might be—i.e. ‘angry’ eyes with a ‘happy’ mouth etc. It turns out that just this type of experiment has already been run. Öhman et al. (2001) and Tipples et al. (2002) included ‘scheming’ and ‘sad’ faces consisting of novel combinations of the eyebrows and mouths used in the traditional ‘angry’ and ‘happy’ configurations. In both cases the authors reported intermediate search times for the new emotions, and took this as evidence for a threat advantage spanning a range of emotions. In fact, the low-level mechanism described above can easily accommodate these findings. What both teams reported was that angry faces, with eyebrows and mouth tips perpendicular to the circular surround were detected most rapidly. Scheming faces, having the same eyebrows as the threatening ones, were second fastest. Sad faces, which have only the mouth as a perpendicularly aligned feature, came next, followed by happy faces. In other words, the hybrid forms produced intermediate level reaction times and their speed was related to the number and strength of the features aligned radially within the stimuli, just as the T-junction feature theory would predict. 
The fact that we are arguing against preattentive, unconscious and automatic processing of threatening faces should not be taken to mean that we discount the possibility that search is more efficient for some conjunctions or visual features. Indeed, our explanation for the rapid detection of T-junctions argues for some form of privileged processing or attention-grabbing capacity for these types of basic shapes. So whilst our work primarily set out to discount emotional meaning as the source of enhanced search (e.g., Reynolds et al., 2008), it also discounts an explanation based on simple saliency (e.g., luminance, color, orientation, etc.). There are several consequences of such a finding. First, one might argue that T-junctions and the like have been selected through evolution because they are often associated with angry faces. In practice this seems unlikely given that real faces do not exhibit such extreme changes in radial alignment with changes in emotional state. Ultimately, it is the configuration of the constituent lines that impacts on search times and not the emotional label attached to the overall stimulus (Cave & Batty, 2006). A second question which might arise from these findings is whether the search advantage found for T-junctions is due to their being processed via a separate, ‘privileged’ processing stream. We would argue that the current literature suggests not. A reading of the latest neurophysiological and anatomical literature points to the fact that neurons of the amygdala responsive to facial expression, for example, receive input from the temporal lobe, and that a separate pathway is unnecessary (Calder & Nummenmaa, 2007). Contrary to what one might think, restricting processing to the standard ventral cortical pathway need not imply that processing becomes unreasonably slow. 
Single cell recording from macaques and EEG studies in humans indicate that the ventral processing stream acts in a very rapid, feedforward manner and that it is capable of categorizing images in around 150 ms (Rolls, 1992; Thorpe, Fize, & Marlot, 1996), which is consistent with models of ventral stream dynamics (Wallis & Rolls, 1997). Beyond the temporal lobe, stimulus specific neurons should be able to categorize emotions with latencies of as little as 120–170 ms, as reported for cells sensitive to facial affect in prefrontal cortex (Kawasaki et al., 2001). Current thinking likewise suggests that beyond the most basic of reflex systems described in the Introduction, cortical areas are heavily involved in all directed avoidance movements such as blocking an approaching object or veering during locomotion (Cooke & Graziano, 2003; Schiff, 1965). 
Overall, the findings support the hypothesis that the anger superiority effect described for schematic faces is due to low-level stimulus features that comprise the face, and not emotional valence. Originally, schematic faces were introduced as a means of controlling for the saliency issues which plague real face stimuli. On the basis of the results described here we would argue that the new findings were unwittingly built on a separate source of artifact. In calling the schematic face literature into question, our results also have implications for results with real faces, since an important crutch, upon which that literature has leant in the past, has been removed. 
Appendix A
Emotional Ratings Data
Introduction
The figures that we have generated for use in these experiments were carefully designed to allow transforms from facial to non-facial stimuli with the minimum of change to the actual arrangement of the parts (eyebrows or mouth remain in the same location). Although this provides benefits in terms of cross stimulus detection performance it does raise the question of how our subjects perceived the stimuli, particularly in terms of their valence (perceived threat/mood). In earlier studies a great deal of work went into testing stimuli, not least in the work of Öhman and colleagues (e.g. Öhman et al., 2001). In this appendix we report details of two emotion rating studies aimed at assessing perceived affect in our stimuli. 
Methods
The first study focused on the schematic face stimuli used in Experiment 1 and compared them to those used by Öhman and colleagues in their studies. Thirteen participants who had not taken part in the original experiments were asked to rate four images depicting happy and angry emotions—two from Öhman's study and two from our own. They were asked to rate the faces using four separate 8-point Likert scales (Likert, 1932) (1 = Negative; 8 = Positive) which appeared below each facial image. In the second study twenty people, who once again had not taken part in the original experiments, were asked to rate the eight non-face stimuli used in this study. The ratings were conducted separately to avoid exposure to the faces carrying over to the non-face stimuli, which might otherwise have influenced participants' interpretation of the images—in line with our original motivation to run two separate experiments. 
Results
The results of the first rating task using the face-like stimuli are summarized in Table A1. A 2 × 2 repeated measures ANOVA was conducted. The results of this analysis appear in Table A2. Significant main effects were found for Stimulus Type and Emotion. Valence ratings fell very strongly in the expected direction, with a tendency to rate the Öhman faces as more positive overall. The interaction did not attain significance, indicating that the effect of Emotion was similar for both Stimulus Types. 
Table A1
 
A comparison of the perceived valence of the abstract stimuli used in these studies.
Table A2
 
A comparison of the perceived valence of the face-like stimuli used in these studies with those of Öhman et al. (2001).
Effect η p 2 F(1,12) p
Stimulus Type .426 8.907 .011
Emotion .897 104.161 2.8E-07
Stimulus Type * Emotion .126 1.728 .213
Data from the second study, looking at the abstract shapes, were analyzed using a three-way repeated-measures ANOVA with Stimulus Type, Rotation and Emotion as factors. For reasons of space and clarity, only the pair-wise comparisons of Emotion are presented. The only condition in which stimuli did not attract valence ratings indicating emotional content was with rotated lines. In other words the abstract stimuli did exhibit certain reliably different rating scores. Importantly though, the rotated line stimuli (crosses vs. squares) were rated the same, eliminating valence as a possible explanation for the results reported for those stimuli ( Tables A3 and A4). 
Table A3
 
A comparison of the mean perceived valence of the abstract stimuli used in these studies (standard deviations in parentheses). Valence was measured on an 8-point Likert scale (Likert, 1932).
Table A4
 
A comparison of the perceived valence of the abstract stimuli used in these studies.
Stimuli Mean Difference (Emotion) t(19) p
Lines Face-like 1.904 3.865 1.04E-03
Rotated 0.714 1.465 0.159
Curves Face-like 4.380 16.420 1.11E-12
Rotated 2.095 5.543 2.40E-05
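The pairwise comparisons in Table A4 are repeated-measures contrasts; for readers wishing to reproduce this kind of analysis, a paired t statistic can be computed from the per-rater difference scores. A minimal sketch (the ratings below are invented for illustration, not the study's data):

```python
import math
import statistics

def paired_t(xs, ys):
    """Paired (repeated-measures) t statistic and degrees of freedom:
    the mean within-rater difference divided by its standard error."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    return statistics.mean(diffs) / se, n - 1

# Hypothetical 8-point Likert valence ratings (1 = negative, 8 = positive)
# from eight raters judging 'happy' vs. 'angry' variants of one stimulus.
happy = [6, 5, 7, 6, 5, 6, 7, 6]
angry = [2, 3, 2, 4, 3, 2, 3, 2]
t_stat, dof = paired_t(happy, angry)
```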
Discussion
The first rating study revealed that our ‘happy’ faces had a slightly lower valence than the Öhman faces—i.e. that they were seen as slightly less happy and hence tended more closely towards neutrality. Despite this, they were perceived as having a much more positive valence than our ‘angry’ faces. This was in part due to the lower score (higher threat) values given to our angry face relative to the Öhman faces. It is interesting to note that Öhman et al. obtained the strongest results in their study not for happy versus angry faces, but rather for neutral versus angry faces. Given that our ‘happy’ face tended towards neutral, a valence account would predict a stronger effect with our happy stimuli than Öhman reported for his happy faces. This is important as it establishes a high baseline effect to which our more abstract stimuli are being compared. Results from the second study revealed an interesting carry-over of perceived threat into the abstract stimuli. Importantly, the ‘X-shaped’ stimulus scored significantly lower than the ‘diamond’. This would leave scope for the valence account of detection advantage to apply to our abstract ‘cross’ and ‘diamond’ stimuli. The perceived emotional content of these abstract images is possibly in part due to the residual face-like appearance of the cross shape. Of course we were aware of this possibility, and this was the motivation behind rotating the stimuli in Experiment 2. The ‘plus’ and ‘square’ stimuli showed no difference in valence, and so the angry-superiority/valence account of speeded detection cannot apply in this case. The pattern of results was similar for the curved-line stimuli, although the rotated versions attracted stronger valence ratings and the concave (angry) abstract stimuli were rated as more threatening than the convex (happy) stimuli. This result mirrors results obtained by Horstmann, Borgstedt et al. (2006). 
The fact that rotating the curved stimuli through 90 degrees reduced, but did not remove, the valence difference makes it important to interpret the difference in search times in Experiment 3 with caution, as both a valence and a feature-based (though not a facial expression) explanation could be adopted in this case. That said, the differences in valence for the rotated stimuli were half the size of those for the unrotated stimuli, and yet the sizes of the search advantage for the rotated and unrotated stimuli were indistinguishable. 
Acknowledgments
We thank P. Albuquerque; D.G. Purcell and O. Lipp for discussion and advice. This work was supported by grants from the Portuguese Foundation for Science and Technology (ref. SFRH/BPD/26922/2006) and the Australian Research Council (DP0343522). 
Commercial relationships: none. 
Corresponding author: Dr. Guy Wallis. 
Email: gwallis@hms.uq.edu.au. 
Address: Perception and Motor Control Lab, School of Human Movement Studies, University of Queensland, QLD 4072, Australia. 
References
Aronoff J. Barclay A. M. Stevenson L. A. (1988). The recognition of threatening facial stimuli. Journal of Personality and Social Psychology, 54, 647–655.
Ashwin C. Wheelwright S. Baron-Cohen S. (2006). Finding a face-in-the-crowd: Testing the anger superiority effect in Asperger Syndrome. Brain and Cognition, 61, 78–95.
Biederman I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115–147.
Biederman I. Mezzanotte R. J. Rabinowitz J. C. (1982). Scene perception: Detecting and judging objects undergoing relational violations. Cognitive Psychology, 14, 143–177.
Calder A. J. Nummenmaa L. (2007). Face cells: Separate processing of expression and gaze in the amygdala. Current Biology, 17, R371–R372.
Calvo M. G. Nummenmaa L. (2008). Detection of emotional faces: Salient physical features guide effective visual search. Journal of Experimental Psychology: General, 137, 471–494.
Cave K. R. Batty M. J. (2006). From searching for features to searching for threat: Drawing the boundary between preattentive and attentive vision. Visual Cognition, 14, 629–646.
Cooke D. F. Graziano M. S. A. (2003). Defensive movements evoked by air puff in monkeys. Journal of Neurophysiology, 90, 3317–3329.
Eastwood J. D. Smilek D. Merikle P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63, 1004–1013.
Ekman P. Friesen W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
Esteves F. Dimberg U. Öhman A. (1994). Automatically elicited fear: Conditioned skin conductance responses to masked facial expressions. Cognition and Emotion, 8, 393–413.
Farah M. J. (1990). Selective impairments of object representation in prosopagnosia. Journal of Clinical and Experimental Neuropsychology, 12, 32.
Fox E. Lester V. Russo R. Bowles R. J. Pichler A. Dutton K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition and Emotion, 14, 61–92.
Hansen C. Hansen R. (1988). Finding the face-in-the-crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917–924.
Horstmann G. (2007). Preattentive face processing: What do visual search experiments with schematic faces tell us? Visual Cognition, 15, 799–833.
Horstmann G. Bauland A. (2006). Search asymmetries with real faces: Testing the anger superiority effect. Emotion, 6, 193–207.
Horstmann G. Borgstedt K. Heumann M. (2006). Flanker effects with faces may depend on perceptual as well as emotional differences. Emotion, 6, 28–39.
Horstmann G. Scharlau I. Ansorge U. (2006). More efficient rejection of happy than of angry face distractors in visual search. Psychonomic Bulletin & Review, 13, 1067–1073.
Itti L. Koch C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506.
Julesz B. (1981). Textons, the elements of texture perception, and their interactions. Nature, 290, 91–97.
Kanwisher N. McDermott J. Chun M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Kawasaki H. Adolphs R. Kaufman O. Damasio H. Damasio A. R. Granner M. (2001). Single-neuron responses to emotional visual stimuli recorded in human ventral prefrontal cortex. Nature Neuroscience, 4, 15–16.
Larson C. L. Aronoff J. Stearns J. J. (2007). The shape of threat: Simple geometric forms evoke rapid and sustained capture of attention. Emotion, 7, 526–534.
LeDoux J. (1996). The emotional brain: The mysterious underpinnings of emotional life. New York: Simon & Schuster.
LeDoux J. (2003). The emotional brain, fear, and the amygdala. Cellular and Molecular Neurobiology, 23, 727–738.
Likert R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1–55.
Mather M. Knight M. R. (2006). Angry faces get noticed quickly: Threat detection is not impaired among older adults. Journals of Gerontology B: Psychological Sciences and Social Sciences, 61, 54–57.
Niedenthal P. M. (1990). Implicit perception of affective information. Journal of Experimental Social Psychology, 26, 505–527.
Nothdurft H. C. (1993). Faces and facial expressions do not pop out. Perception, 22, 1287–1298.
Öhman A. (2005). The role of the amygdala in human fear: Automatic detection of threat. Psychoneuroendocrinology, 30, 953–958.
Öhman A. Lundqvist D. Esteves F. (2001). The face-in-the-crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80, 381–396.
Öhman A. Mineka S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychological Review, 108, 483–522.
Purcell D. G. Stewart A. L. (2002). The face-in-the-crowd: Yet another confound.
Purcell D. G. Stewart A. L. (2005). Anger superiority: Effects of facial surround and similarity of targets and distractors.
Purcell D. G. Stewart A. L. Skov R. (1996). It takes a confounded face to pop out of a crowd. Perception, 25, 1091–1108.
Reynolds M. G. Eastwood J. D. Partanen M. Frischen A. Smilek D. (2008). Monitoring eye movements while searching for affective faces. Visual Cognition, 17, 318–333.
Rolls E. T. (1992). Neurophysiological mechanisms underlying face processing within and beyond the temporal cortical visual areas. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 335, 11–21.
Schiff W. (1965). Perception of impending collision: A study of visually directed avoidant behavior. Psychological Monographs: General and Applied, 79, 1–26.
Schubö A. Gendolla G. H. E. Meinecke C. Abele A. E. (2006). Detecting emotional faces and features in a visual search paradigm: Are faces special? Emotion, 6, 246–256.
Tanaka K. Saito H. Fukada Y. Moriya M. (1991). Coding visual images of objects in the inferotemporal cortex of the macaque monkey. Journal of Neurophysiology, 66, 170–189.
Thompson P. (1980). Margaret Thatcher: A new illusion. Perception, 9, 483–484.
Thorpe S. Fize D. Marlot C. (1996). Speed of processing in the human visual system. Nature, 381, 520–522.
Tipples J. Atkinson A. P. Young A. W. (2002). The eyebrow frown: A salient social signal. Emotion, 2, 288–296.
Treisman A. (1986). Features and objects in visual processing. Scientific American, 255, 114–125.
Tsao D. Y. Freiwald W. A. Tootell R. B. H. Livingstone M. S. (2006). A cortical region consisting entirely of face-selective cells. Science, 311, 670–674.
Wallis G. Rolls E. T. (1997). Invariant face and object recognition in the visual system. Progress in Neurobiology, 51, 167–194.
White M. J. (1995). Preattentive analysis of facial expressions of emotion. Cognition and Emotion, 9, 439–460.
Williams M. A. Mattingley J. B. (2006). Do angry men get noticed? Current Biology, 16, R402–R404.
Yeomans J. S. Li L. Scott B. W. Frankland P. W. (2002). Tactile, acoustic and vestibular systems sum to elicit the startle reflex. Neuroscience & Biobehavioral Reviews, 26, 1–11.
Figure 1
 
Example of the staggered 3 × 3 stimulus matrix used in all experiments. In this case a target-present trial from Experiment 1 is shown.
Figure 2
 
Abstract versions of the happy (left) and angry (right) faces.
Figure 3
 
Reaction time results from Experiment 1, revealing a clear advantage for angry faces over happy faces.
Figure 4
 
The stimuli used in Experiment 2, consisting of rotated versions of the happy (left) and angry (right) abstract stimuli used in Experiment 1.
Figure 5
 
Reaction time results from Experiment 2, in which obliquely and cardinally oriented abstract stimuli were used.
Figure 6
 
Stimuli using curved mouth-like elements in a standard orientation (top row) and rotated by 90 degrees (bottom row).
Figure 7
 
Reaction time results from Experiment 3.
Table 1
 
Results of three-way factorial ANOVA conducted on error rates, and mean error rates for the full factorial design.
Source SS df MS F p η p 2
Stimulus Type 0.001 1 0.001 0.531 .476 0.029
Error 0.036 18 0.002
Emotional Valence 0.052 1 0.052 13.388 .002 0.427
Error 0.069 18 0.004
Target Presence 0.153 1 0.153 34.356 1E-05 0.656
Error 0.080 18 0.004
Type * Valence 0.008 1 0.008 2.978 .102 0.142
Error 0.048 18 0.003
Type * Presence 4E-05 1 4E-05 0.014 .906 0.001
Error 0.052 18 0.003
Valence * Presence 0.003 1 0.003 4.493 .048 0.200
Error 0.014 18 0.001
Type * Valence * Presence 0.011 1 0.011 4.457 .049 0.198
Error 0.045 18 0.002

Absent Present
Abstract Schematic Abstract Schematic
Angry Happy Angry Happy Angry Happy Angry Happy
Mean 0.910 0.885 0.906 0.876 0.872 0.794 0.836 0.822
SEM 0.051 0.051 0.051 0.052 0.051 0.050 0.049 0.050
Table 2
 
Marginal means for the three main effects.
Mean SEM
Target Presence Absent 1701.00 86.30
Present 1377.22 80.27
Stimulus Rotation Cardinal (rotated) 1444.61 72.66
Oblique (face-like) 1633.61 90.24
Emotional Valence Angry 1458.45 85.28
Happy 1619.77 76.73
Table 3
 
Results of three-way factorial ANOVA conducted on error rates, and cell means for the complete factorial design. Significant pairwise differences of emotional valence are shown in bold type.
Source SS df MS F p η p 2
Stimulus Type 0.007 1 0.007 8.684 .011 0.383
Error 0.011 14 0.001
Emotional Valence 0.028 1 0.028 21.689 3E-04 0.608
Error 0.018 14 0.001
Target Presence 0.082 1 0.082 24.390 2E-04 0.635
Error 0.047 14 0.003
Type * Valence 0.001 1 0.001 1.413 .254 0.092
Error 0.006 14 0.000
Type * Presence 0.001 1 0.001 0.638 .438 0.044
Error 0.012 14 0.001
Valence * Presence 0.008 1 0.008 5.985 .028 0.299
Error 0.019 14 0.001
Type * Valence * Presence 0.000 1 0.000 0.234 .636 0.016
Error 0.002 14 0.000

Absent Present
Rotated Facelike Rotated Facelike
Angry Happy Angry Happy Angry Happy Angry Happy
Mean 0.986 0.977 0.981 0.961 0.955 0.912 0.939 0.889
SEM 0.004 0.009 0.005 0.009 0.012 0.016 0.015 0.020
Table 4
 
Marginal means for the three main effects.
Mean SEM
Target Presence Absent 1701.00 86.30
Present 1377.22 80.27
Stimulus Rotation 90 deg (rotated) 1444.61 72.66
0 deg (face-like) 1633.61 90.24
Mouth part ‘valance’ Angry 1458.45 85.28
Happy 1619.77 76.73
Table 5
 
Three-way ANOVA results for error rates, and cell means for the significant three-way interaction. Significant pairwise differences for emotional valence are shown in bold type.
Source SS df MS F p η p 2
Stimulus Type 0.012 1 0.012 4.369 .055 0.238
Error 0.038 14 0.003
Emotional Valence 0.022 1 0.022 9.711 .008 0.41
Error 0.032 14 0.002
Target Presence 0.097 1 0.097 20.84 4E-04 0.598
Error 0.065 14 0.005
Type * Valence 0.002 1 0.002 1.212 .290 0.08
Error 0.019 14 0.001
Type * Presence 0.018 1 0.018 12.941 .003 0.48
Error 0.02 14 0.001
Valence * Presence 8E-05 1 8E-05 0.088 .771 0.006
Error 0.013 14 0.001
Type * Valence * Presence 0.009 1 0.009 8.201 .013 0.369
Error 0.015 14 0.001

Absent Present
Rotated Facelike Rotated Facelike
Angry Happy Angry Happy Angry Happy Angry Happy
Mean 0.979 0.960 0.984 0.945 0.913 0.863 0.933 0.932
SEM 0.005 0.014 0.006 0.010 0.017 0.032 0.015 0.013