November 2011, Volume 11, Issue 13

Article | November 2011
The resolution of facial expressions of emotion
Shichuan Du, Aleix M. Martinez
Journal of Vision November 2011, Vol. 11, 24. doi:10.1167/11.13.24
      Shichuan Du, Aleix M. Martinez; The resolution of facial expressions of emotion. Journal of Vision 2011;11(13):24. doi: 10.1167/11.13.24.

Abstract

Much is known about how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known about how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., the number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused with a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.

Introduction
Emotions are fundamental in studies of cognitive science (Damasio, 1995), neuroscience (LeDoux, 2000), social psychology (Adolphs, 2003), sociology (Massey, 2002), economics (Connolly & Zeelenberg, 2002), human evolution (Schmidt & Cohn, 2001), and engineering and computer science (Pentland, 2000). Emotional states and emotional analysis are known to influence or mediate behavior and cognitive processing. Many of these emotional processes may be hidden to an outside observer, whereas others are visible through facial expressions of emotion. 
Facial expressions of emotion are a consequence of the movement of the muscles underneath the skin of our face (Duchenne, 1862/1990). The movement of these muscles causes the skin of the face to deform in ways that an external observer can use to interpret the emotion of that person. Each muscle employed to create these facial constructs is referred to as an Action Unit (AU). Ekman and Friesen (1978) identified those AUs responsible for generating the emotions most commonly seen in the majority of cultures—anger, sadness, fear, surprise, happiness, and disgust. For example, happiness generally involves an upper–backward movement of the mouth corners; while the mouth is upturned (to produce the smile), the cheeks lift and the upper corner of the eyes wrinkle. This is known as the Duchenne (1862/1990) smile. It requires the activation of two facial muscles: the zygomatic major (AU 12) to raise the corners of the mouth and the orbicularis oculi (AU 6) to uplift the cheeks and form the eye corner wrinkles. The muscles and mechanisms used to produce the abovementioned facial expressions of emotion are now quite well understood, and it has been shown that the AUs used in each expression are relatively consistent from person to person and among distinct cultures (Burrows & Cohn, 2009). 
Yet, as much as we understand the generative process of facial expressions of emotion, much still needs to be learned about their interpretation by our cognitive system. Thus, an important open problem is to define the computational (cognitive) space of facial expressions of emotion of the human visual system. In the present paper, we study the limits of this visual processing of facial expressions of emotion and what it tells us about how emotions are represented and recognized by our visual system. Note that the term “computational space” is used here to specify the combination of features (dimensions) used by the cognitive system to determine (i.e., analyze and classify) the appropriate label for each facial expression of emotion. 
To properly address the problem stated in the preceding paragraph, it is worth recalling that some facial expressions of emotion may have evolved to enhance or reduce our sensory inputs (Susskind et al., 2008). For example, fear is associated with a facial expression with open mouth, nostrils, and eyes and an inhalation of air, as if to enhance the perception of our environment, while the expression of disgust closes these channels (Chapman, Kim, Susskind, & Anderson, 2009). Other emotions, though, may have evolved for communication purposes (Schmidt & Cohn, 2001). Under this assumption, the evolution of this capacity to express emotions had to be accompanied by the ability to interpret them visually. These two processes (production and recognition) would have had to coevolve. That is, if the intention of some facial expressions of emotion were to convey this information to observers, they would have had to coevolve with the visual processes in order to maximize transmission through a noisy medium. By coevolve, we mean that they both changed over time—one influencing the other. 
The above arguments raise an important question. What is the resolution at which humans can successfully recognize facial expressions of emotion? Some evidence suggests that we are relatively good at recognition from various resolutions (Harmon & Julesz, 1973) and that different stimuli are better interpreted from various distances (Gold, Bennett, & Sekuler, 1999; Parish & Sperling, 1991), but little is known about how far we can go before our facial expressions can no longer be read. This question is fundamental to understanding how humans process facial expressions of emotion. First, the resolution of the stimuli can tell us which features are lost when recognition is impaired. Second, the confusion table (which specifies how labels are confused with one another) at different resolutions will determine whether the confusion patterns change with resolution and what this tells us about the cognitive space of facial expressions. Third, this information will help us determine whether facial expressions of emotion did indeed coevolve to communicate certain emotions and over what range of resolutions. 
Smith and Schyns (2009) provide a detailed study on the role of low frequencies for the recognition of distal expressions of emotion. Using a computational model and psychophysics, they show that happiness and surprise use several low-frequency bands and are, thus, the two expressions that are best recognized from a distance. They argue that these two expressions could have had an evolutionary advantage when recognized from a distance, while other emotions were mostly employed for proximal interactions. However, Laprevote, Oliva, Delerue, Thomas, and Boucart (2010) have recently reported results suggesting that both high and low frequencies are important for recognition of joy and anger, with a slight preference for the high frequencies. Thus, the questions listed above remain unanswered. 
In the present study, we do not manipulate the frequency spectrum of the image directly. Rather, we start with stimuli of 240 × 160 pixels and create four additional sets of images at different resolutions—each 1/2 the resolution of its preceding set. This simulates what happens when a person (i.e., sender) moves away from the observer. It also allows us to determine the minimum resolution needed for recognition and how identification and confusions change with the number of pixels. The images of the six emotions described above plus neutral are then resized back to the original resolution for visualization (Figure 1). The neutral expression is defined as having all facial muscles at rest (except for the eyelids, which can be open) and, hence, with the intention of not expressing any emotion. All images are shown as stimuli of 5.3 by 8 degrees of visual angle to avoid possible changes due to image size (Majaj, Pelli, Kurshan, & Palomares, 2002). 
Figure 1
 
Facial expressions, from left to right: happiness, sadness, fear, anger, surprise, disgust, and neutral. Resolutions from top to bottom: 1 (240 × 160 pixels), 1/2 (120 × 80 pixels), 1/4 (60 × 40 pixels), 1/8 (30 × 20 pixels), and 1/16 (15 × 10 pixels).
A seven-alternative forced-choice (7AFC) task shows that every expression is recognized within a wide range of image resolutions (Figure 2). The main difference is that some expressions are recognized more poorly at all resolutions, while others are consistently easier (Figure 3). For example, fear and disgust are poorly recognized at every resolution, while happiness and surprise (as well as neutral) are easily identified in that same resolution range. Recognition remains quite consistent until the image is reduced to 15 × 10 pixels, where almost no useful information is left for analysis. Sadness and anger are not as easily classified as happiness and surprise but are more successfully identified than fear and disgust. 
Figure 2
 
Stimulus timeline. A white fixation cross on a black background is shown for 500 ms. Then, a stimulus image is shown for 500 ms, followed by a random noise mask for 750 ms. A 7AFC task is used. After the subject's response, the screen goes blank for 500 ms and the process is repeated.
Figure 3
 
Recognition rates of the seven facial expressions as a function of image resolution. The horizontal axis defines the resolution and the vertical axis defines the recognition rate. For each emotion, solid lines connect the two points that are not statistically different and dashed lines connect points that are statistically different. The horizontal dash-dotted line indicates chance level, at ∼14%.
Our results suggest that the computational space used to classify each emotion is robust to a wide range of image resolutions. That is, the cognitive space is defined to achieve a constant recognition for a variety of image resolutions (distances). We also show that women are significantly better at recognizing every expression at all resolutions and that their confusion of one emotion for another is less marked than those seen in men. 
Importantly, the confusion tables, illustrating which emotions are mistaken for others, are shown to be asymmetric. For example, fear is typically confused for surprise but not vice versa. We show that this asymmetry cannot be explained if subjects were analyzing AUs, suggesting that the dimensions of the computational space are formed by features other than AUs or AU coding. We conclude with a discussion of how the reported results challenge existing computational models of face perception. 
Experiment
We hypothesize that facial expressions of emotion are correctly recognized at a variety of image resolutions. To test this hypothesis, we develop a set of images of the six emotions listed above plus neutral at various resolutions. 
Methods
Subjects
Thirty-four human subjects with normal or corrected-to-normal vision were drawn from the population of students and staff at The Ohio State University (mean age = 23, standard deviation = 3.84). They were seated in front of a personal computer with a 21″ CRT monitor. The distance between the eye and the monitor screen was approximately 50 cm. Distance was controlled and subjects were instructed not to move forward or backward during the experiment. The standard deviation from the mean distance (50 cm) was below 2 cm. 
Stimuli
One hundred and five grayscale face images were used, consisting of six facial expressions of emotion (happiness, surprise, anger, sadness, fear, and disgust) plus neutral from a total of 15 people. These images were selected from two facial expression databases: the Pictures of Facial Affect (PoFA) of Ekman and Friesen (1976) and the Cohn–Kanade database (CK; Kanade, Cohn, & Tian, 2000). The former provided 70 images and the latter provided 35 images. Images were normalized to the same overall intensity and contrast. 
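The intensity and contrast normalization step can be sketched as follows; this is a minimal illustration, and the target mean and standard deviation are assumptions, not values from the paper.

```python
import numpy as np

def normalize(img, target_mean=0.5, target_std=0.2):
    """Match an image's overall intensity (mean) and contrast (std)
    to common target values. The targets here are illustrative."""
    z = (img - img.mean()) / img.std()
    return z * target_std + target_mean

rng = np.random.default_rng(0)
face = rng.random((240, 160))        # stand-in for a grayscale face image
out = normalize(face)
print(round(out.mean(), 3), round(out.std(), 3))  # 0.5 0.2
```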
All images were cropped around the face and downsized to 240 × 160 pixels. The images at this resolution are referred to as resolution 1. Subsequent sets were constructed by downsizing the previous set by 1/2. This procedure yielded the following additional sets: 120 × 80 (resolution 1/2), 60 × 40 (resolution 1/4), 30 × 20 (resolution 1/8), and 15 × 10 pixels (resolution 1/16). All images were downsized using linear averaging over neighborhoods of 2 × 2 pixels. 
To provide a common visual angle of 5.3° horizontally and 8° vertically, all five sizes were scaled back to 240 × 160 pixels using bilinear interpolation, which preserves most of the spatial frequency components (Figure 1). Images from the same column in Figure 1 were not presented in the same trial to prevent subjects from judging one image based on having previously seen the same image at a larger resolution. Thus, each experiment was composed of 105 images consisting of 7 facial expressions of 15 identities. The 5 resolutions were evenly distributed, and the resolution–identity correspondence was randomly generated for each trial. 
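The construction of the five resolution sets can be sketched with NumPy; this is a minimal illustration of the 2 × 2 linear averaging, with a random array standing in for a real face image.

```python
import numpy as np

def downsample_2x2(img):
    """Halve each dimension by averaging non-overlapping 2 x 2 pixel blocks."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

base = np.random.rand(240, 160)   # stand-in for a cropped, normalized face
pyramid = [base]                  # resolutions 1, 1/2, 1/4, 1/8, 1/16
for _ in range(4):
    pyramid.append(downsample_2x2(pyramid[-1]))

print([p.shape for p in pyramid])
# [(240, 160), (120, 80), (60, 40), (30, 20), (15, 10)]
```

Each set would then be scaled back to 240 × 160 for display, e.g. with `scipy.ndimage.zoom(img, factor, order=1)` for bilinear interpolation.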
Design and procedure
The experiment began with a short introductory session where subjects were shown face images of the seven facial expressions and were told the emotion of each image. A short practice session followed, consisting of 14 trials. The images of the subjects used in this practice session were not used in the actual test. 
The test session followed. A white fixation cross on a black background was shown for 500 ms prior to the stimulus, whose display duration was also 500 ms, followed by a random noise mask shown for 750 ms. A 7AFC task was used, where subjects had to select one of the six emotion labels or neutral. After the subject's response, the screen went blank for 500 ms before starting the process again. Figure 2 illustrates a typical stimulus timeline. The entire experiment lasted about 10 min with no breaks. Trials with reaction times larger than two standard deviations from the average were discarded. This left approximately 95 to 100 trials per condition for analysis. 
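The reaction-time exclusion rule can be sketched as below; the paper does not specify the exact implementation, so this two-sided cutoff is an assumption.

```python
import numpy as np

def filter_trials(rts, k=2.0):
    """Keep trials whose reaction time deviates from the mean
    by at most k standard deviations (k = 2 in the paper)."""
    rts = np.asarray(rts, dtype=float)
    keep = np.abs(rts - rts.mean()) <= k * rts.std()
    return rts[keep]

rts = [0.9, 1.0, 1.1, 1.0, 0.95, 5.0]   # one obvious outlier (seconds)
print(filter_trials(rts))                # the 5.0 s trial is discarded
```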
Results
Table 1 shows the confusion matrices, with columns defining the true emotion shown and rows defining subjects' responses. Entries with an asterisk indicate results that are statistically different (p ≤ 0.05) from noise. The relationship between image resolution and perception was examined to address how recognition and error rates changed with image detail reduction. 
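Whether a confusion-table entry exceeds the 1/7 chance level can be checked with a one-sided binomial test; this is a sketch of one plausible test, since the paper does not state which test produced the asterisks.

```python
from math import comb

def binom_p_ge(k, n, p=1/7):
    """P(X >= k) for X ~ Binomial(n, p): probability of observing k or
    more responses of a given label out of n trials by chance alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g., a label chosen 40 times out of 100 trials at 1/7 chance
print(binom_p_ge(40, 100))   # far below 0.05: significantly above chance
```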
Table 1
 
Confusion matrices. The leftmost column is the response (perception) and the first row of each matrix specifies the emotion class of the stimulus. The diagonal elements are the recognition rates and the off-diagonal entries correspond to the error rates. Resolutions from top to bottom: 1, 1/2, 1/4, 1/8, and 1/16. The chance level is 14%. An asterisk highlights the entries that are statistically different from noise. A grayscale color palette of 10 scales was used to color code the percentages from 0 (light) to 1 (dark).
Recognition rates
It is observed from the confusion matrices that some resolution reductions affect recognition rates while others do not. To further study this, the test of equality of proportions was applied to the average recognition rates of each facial expression. Figure 3 shows the recognition rates and the statistical test results. The continuous lines indicate that there was no statistical difference between the results of the two resolutions connected by the lines, while the dashed lines indicate the opposite. 
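The test of equality of proportions is, presumably, the standard two-sample z-test; the sketch below makes that assumption, and the recognition rates and trial counts are made up.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for H0: two proportions are equal (pooled standard error)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g., recognition rates at two adjacent resolutions, ~100 trials each
z = two_proportion_z(0.80, 100, 0.70, 100)
print(round(z, 3))   # |z| > 1.96 would indicate a significant drop
```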
There was no significant recognition loss at resolutions 1 to 1/4 for all emotions but anger. Sadness, disgust, and neutral showed a decrease at resolution 1/8. Without exception, significant degradation of perception occurred at 1/16. In addition, perception of sadness, fear, anger, and disgust dropped to chance level. One concern was whether the decrease of perception was linear with respect to size reduction. This was tested using the log-linear model of Poisson regression r = β × resolution + γ, where r is the recognition rate, β is the coefficient, and γ is the intercept. The values used for resolution are 1, (1/2)², (1/4)², (1/8)², and (1/16)², because the ratios among these numbers are equal to the ratios among the actual numbers of pixels in the five resolutions. Thus, this model evaluates the linearity of recognition rates given the quantity of pixels. The null hypothesis β = 0 could not be rejected (p ≥ 0.73). Therefore, recognition rates did not decrease linearly with image resolution. 
Next, we tested a logarithmic fit, given by r = α × log(resolution) + γ, where r is the recognition rate, α is the coefficient, and γ is the intercept. In this case, the null hypothesis α = 0 is not rejected for happiness (p = 0.16), fear (p = 0.12), anger (p = 0.07), and surprise (p = 0.15). The null hypothesis is, however, rejected for sadness (p = 0.04) and disgust (p = 0.01). These results show that the recognition of emotions is not seriously impaired until after resolution 1/8 for four of the six emotions studied (happiness, fear, anger, surprise). 
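The two model comparisons can be illustrated with a quick least-squares sketch; the recognition rates below are made up, and ordinary least squares stands in for the paper's Poisson regression.

```python
import numpy as np

# Pixel-count ratios for the five sets: the linear size halves each time,
# so the number of pixels scales as 1, (1/2)^2, (1/4)^2, ...
resolution = np.array([1, 1/4, 1/16, 1/64, 1/256])
rate = np.array([0.85, 0.84, 0.83, 0.78, 0.30])   # hypothetical rates

# Linear model: r = beta * resolution + gamma
beta, gamma = np.polyfit(resolution, rate, 1)
sse_lin = np.sum((rate - (beta * resolution + gamma)) ** 2)

# Logarithmic model: r = alpha * log(resolution) + gamma
alpha, gamma2 = np.polyfit(np.log(resolution), rate, 1)
sse_log = np.sum((rate - (alpha * np.log(resolution) + gamma2)) ** 2)

print(sse_lin, sse_log)   # the logarithmic model fits these rates better
```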
Error rates: Confusions
The error rates are a measure of perceptual confusion between facial expressions. It is clear from these results that at resolution 1/16, recognition is no longer possible. For this reason, in this section, we study the confusion patterns seen in resolutions 1 to 1/8. 
The clearest misclassification is that of disgust for anger. At resolution 1, images of disgust are classified as angry 42% of the time by human subjects. This pattern remains clear at the other resolutions. In fact, at resolutions 1/4 and 1/8, disgust is classified as anger more often than as disgust. Most interestingly, anger is rarely confused for disgust. This asymmetry in the confusion table is not atypical. To give another example, fear is consistently confused for disgust and surprise but not vice versa. 
Not surprisingly, happiness and surprise are the only two expressions that are never (consistently) confused for other expressions, regardless of the resolution. These two expressions are commonly used in communication, and it is, thus, not surprising that they can be readily recognized at different resolutions. 
Sadness and anger are well recognized at close proximity, but they get confused with other expressions as the distance between the sender and the receiver increases. Sadness is most often confused for neutral (i.e., the absence of emotion), while anger is confused for sadness, disgust, and, to a lesser degree, neutral. 
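These asymmetries can be read off a confusion matrix mechanically. In the sketch below, the 42% disgust-as-anger entry is from the text; all other numbers are invented for illustration.

```python
import numpy as np

labels = ["anger", "disgust", "surprise"]
# Rows = response, columns = stimulus (same convention as Table 1).
C = np.array([
    [0.80, 0.42, 0.02],   # responded "anger"
    [0.05, 0.45, 0.01],   # responded "disgust"
    [0.03, 0.02, 0.90],   # responded "surprise"
])

def asymmetric_pairs(C, labels, margin=0.10):
    """Pairs where stimulus j is read as i far more often than i as j."""
    out = []
    for i in range(len(labels)):
        for j in range(len(labels)):
            if i != j and C[i, j] - C[j, i] > margin:
                out.append((labels[j], labels[i]))
    return out

print(asymmetric_pairs(C, labels))   # [('disgust', 'anger')]
```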
It may be possible to learn to distinguish some expressions better over time, or it could be that evolution equipped one of the genders with better recognition capabilities as suggested by some authors (Gitter, Black, & Mostofsky, 1972; Rotter & Rotter, 1988). To test this hypothesis, we plotted the confusion patterns for men and women in two separate tables (Tables 2 and 3). 
Table 2
 
Confusion matrices of 14 female subjects. Same notation as in Table 1.
Table 3
 
Confusion matrices of 19 male subjects.
The results showed that women are consistently better at recognizing every emotion and that the percentages of error are diminished in women, although these confusions follow the same patterns seen in men. This was so at every image resolution. The only exception was sadness: women were better at resolution 1, while men were more accurate and made fewer confusions at smaller resolutions. The female advantage in reading expressions of emotion was generally above 1.5 standard deviations from the male average. In comparison, the differences between the confusion tables of Caucasian (Table 4) and non-Caucasian subjects (Table 5) were very small and not statistically significant. 
Table 4
 
Confusion matrices of 16 Caucasian subjects.
Table 5
 
Confusion matrices of 15 non-Caucasian subjects.
Discussion
Understanding how humans analyze facial expressions of emotion is key in a large number of scientific disciplines—from cognition to evolution to computing. An important question in the journey to understanding the perception of emotions is to determine how these expressions are perceived at different image resolutions or distances. In the present work, we have addressed this question. 
The results reported above uncovered the recognition rates for six of the most commonly seen emotional expressions (i.e., happy, sad, angry, disgust, fear, surprise) and neutral as seen at five distinct resolutions. We have also studied the confusion tables, which indicate which emotions are mistaken for others and how often. We have seen that two of the emotions (happy and surprise) are easily recognized and rarely mistaken for others. Two other emotions (sadness and anger) are less well recognized and show strong asymmetric confusion with other emotions. Sadness is most often mistaken for neutral, anger for sadness and disgust. Yet, neutral is almost never confused for sadness, and sadness is extremely rarely mistaken for anger. The last two emotions (fear and disgust) were poorly recognized by our subjects. Nonetheless, their confusion patterns are consistent. Disgust is very often mistaken for anger. In fact, disgust is sometimes classified more often as anger than in its own category. Fear is commonly mistaken for surprise and, to a lesser degree, disgust, at short and mid-resolutions (i.e., 1 to 1/4). At small resolutions (i.e., 1/8), fear is also taken to be joy and sadness. 
The results summarized in the preceding paragraph suggest three groups of facial expressions of emotion. The first group (happy and surprise) is formed by expressions that are readily classified at any resolution. This could indicate that the production and perception systems of these facial expressions of emotion coevolved to maximize transmission of information (Fridlund, 1991; Schmidt & Cohn, 2001). The second group (angry and sad) is well recognized at high resolutions only. However, given their reduced recognition rates even at the highest resolution, the mechanisms of production and recognition of these expressions may not have coevolved. Rather, perception may have followed production, since recognition of these emotions at proximal distance could prove beneficial for survival to either the sender or receiver. The third group (fear and disgust) includes expressions that are poorly recognized at any distance. One hypothesis (Susskind et al., 2008) is that they are used as a sensory enhancement and blocking mechanism. Under this view, without the cooperation of a sender willing to modify her expression, the visual system has had the hardest task in trying to define a computational space that can recognize these expressions from a variety of distances. As in the first group, the emotions in this third group are recognized similarly at all distances—except when the percept is no longer distinguishable at resolution 1/16. 
An alternative explanation for the existence of these three groups could be given by the priors assigned to each emotion. For example, university students and staff feel generally safe and happy. As a consequence, expressions such as happiness could be expected, whereas fear may not be. 
Perhaps more intriguing are the asymmetric patterns in the confusion tables. Why should fear be consistently mistaken for surprise but not vice versa? One hypothesis comes from studies of letter recognition (Appelman & Mayzner, 1982; James & Ashby, 1982). Under this model, people may add unseen features to the percept but will only rarely delete those present in the image. For instance, the letter F is more often confused for an E than an E is for an F. The argument is that E can be obtained from F by adding a nonexistent feature, whereas to perceive F from an E would require eliminating a feature. Arguably, the strongest evidence against this model comes from the perception of neutral in sad faces, which would require eliminating all image features indicating to the contrary. 
However, to properly consider the above model, it would be necessary to know the features (dimensions) of the computational space of these emotions. One possibility is that we decode the movement of the muscles of the face, i.e., the AUs correspond to the dimensions of the computational space (Kohler et al., 2004; Tian, Kanade, & Cohn, 2001). For example, surprise generally involves AUs 1 + 2 + 5 + 26 or 27. Fear usually activates 1 + 2 + 5 + 25 + 26 or 27, and it may also include AUs 4 and 20. Note that the AUs in surprise are a subset of those of fear. Hence, according to the model under consideration, it is expected that surprise will be mistaken for fear but not the other way around. Yet, surprise is not confused for fear, but fear is mistaken for surprise quite often. This means that active AUs such as 4, 20, or 25 would have to be omitted from the analysis. A more probable explanation is that the image features extracted to classify facial expressions of emotion do not code AUs. Further support for this last point is given by the rest of the mistakes identified in Table 1. Sadness is confused for disgust, even though they do not share any common AU. Disgust and anger only share AUs that are not required to display the emotion. In addition, for anger to be mistaken as sadness, several active AUs would have to be omitted. 
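The subset relation driving this argument is easy to verify. Below, the AU lists from the text (collapsing the "26 or 27" alternative to 26 for simplicity) show that the addition model predicts the opposite of the observed confusion direction.

```python
# AUs from the text: surprise = 1+2+5+26 (or 27); fear adds 25, and
# possibly 4 and 20, on top of the same set.
surprise = {1, 2, 5, 26}
fear = {1, 2, 5, 25, 26, 4, 20}

# Addition model: a percept is mistaken for expressions whose AU set is a
# superset (features get added, rarely deleted). It therefore predicts
# surprise -> fear confusions, not the fear -> surprise confusions observed.
print(surprise < fear)   # True: surprise AUs are a strict subset of fear's
print(fear - surprise)   # the active AUs an observer would have to ignore
```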
We have also considered the subtraction model (Appelman & Mayzner, 1982; Geyer & DeWald, 1973), where E is most likely confused for F because it is easier to delete a few features than to add them. This model is consistent with the confusion of fear for surprise but is inconsistent with all other misclassifications and asymmetries. The results summarized in the last two paragraphs are consistent with previous reports of emotion perception in the absence of any active AU (Hess, Adams, Grammer, & Kleck, 2009; Neth & Martinez, 2009; Zebrowitz, Kikuchi, & Fellous, 2007). In some instances, features seem to be added while others are omitted even as distance changes (Laprevote et al., 2010). 
It could also be expected that expressions involving larger deformation are easier to identify (Martinez, 2003). The largest shape displacement belongs to surprise. This makes sense, since this expression is easily identified at any resolution. The recognition of surprise in images of 15 × 10 pixels is actually better than that of fear and disgust in the full-resolution images (240 × 160 pixels). Happiness also has a large deformation and is readily classified. However, fear and disgust include deformations that are as large as (or larger than) happiness. Yet, these are the two expressions that are recognized most poorly. 
Another possibility is that only a small subset of AUs is diagnostic. Happy is the only expression with AU 12, which uplifts the lip corners. This can make it readily recognizable. Happy plays a fundamental role in human societies (Russell, 2003). One hypothesis is that it had to evolve a clearly distinct expression. Some AUs in surprise seem to be highly diagnostic too, making it easy to confuse fear (which may have evolved to minimize sensory input) for surprise. In contrast, sadness activates AU 4 (which lowers the inner corners of the brows) and disgust activates AU 9 (which wrinkles the nose). These two AUs are commonly confused for one another (Ekman & Friesen, 1978), suggesting that they are not very diagnostic. 
Differences in the use of diagnostic features seem to be further suggested by our results of women versus men. Women are generally significantly better at correctly identifying emotions and make fewer misclassifications. Other studies suggest that women are also more expressive than men (Kring & Gordon, 1998). Understanding gender differences is important not only to define the underlying model of face processing but also in a variety of social studies (Feingold, 1994). 
Before further studies can properly address these important questions, we need a better understanding of the features defining the computational model of facial expressions of emotion. The above discussion strongly suggests that faces are not AU-coded, meaning that the dimensions of the cognitive space are unlikely to be highly correlated with AUs. Neth and Martinez (2010) have shown that shape has a significant contribution in the perception of sadness and anger in faces and that these are loosely correlated to AUs. Similarly, Lundqvist, Esteves, and Öhman (1999) found that eyebrows are generally best to detect threatening faces, followed by the mouth and eyes. The results reported above suggest that this order would be different for each emotion class. 
Acknowledgments
The authors are grateful to Irving Biederman for discussion about this work. This research was supported in part by the National Institutes of Health under Grants R01-EY-020834 and R21-DC-011081 and by a grant from the National Science Foundation (IIS-07-13055). S. Du was also partially supported by a fellowship from the Center for Cognitive Sciences at The Ohio State University. 
Commercial relationships: none. 
Corresponding author: Aleix M. Martinez. 
Email: aleix@ece.osu.edu. 
Address: 205 Dreese labs, The Ohio State University, 2015 Neil Ave Columbus, OH 43210, USA. 
References
Adolphs, R. (2003). Cognitive neuroscience of human social behaviour. Nature Reviews Neuroscience, 4, 165–178.
Appelman, I. B., & Mayzner, M. S. (1982). Application of geometric models to letter recognition: Distance and density. Journal of Experimental Psychology, 111, 60–100.
Burrows, A., & Cohn, J. F. (2009). Anatomy of the face. In S. Z. Li (Ed.), Encyclopedia of biometrics (pp. 16–23). Berlin, Germany: Springer.
Chapman, H. A., Kim, D. A., Susskind, J. M., & Anderson, A. K. (2009). In bad taste: Evidence for the oral origins of moral disgust. Science, 323, 1222–1226.
Connolly, T., & Zeelenberg, M. (2002). Regret in decision making. Current Directions in Psychological Science, 11, 212–216.
Damassio, A. R. (1995). Descartes' error: Emotion, reason, and the human brain. New York: G. P. Putnam's Sons.
Duchenne, G. (1990). The mechanism of human facial expression. Paris: Jules Renard/Cambridge University Press. (Original work published 1862)
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., & Friesen, W. V. (1978). Facial action coding system: A technique for the measurement of facial movement. Palo Alto, CA: Consulting Psychologists Press.
Feingold, A. (1994). Gender differences in personality: A meta-analysis. Psychological Bulletin, 116, 429–456.
Fridlund, A. J. (1991). Evolution and facial action in reflex, social motive, and paralanguage. Biological Psychology, 32, 3–100.
Geyer, L. H., & DeWald, C. G. (1973). Feature lists and confusion matrices. Perception & Psychophysics, 14, 471–482.
Gitter, A. G., Black, H., & Mostofsky, D. (1972). Race and sex in the perception of emotion. Journal of Social Issues, 28, 63–78.
Gold, J., Bennett, P. J., & Sekuler, A. B. (1999). Identification of band-pass filtered letters and faces by human and ideal observers. Vision Research, 39, 3537–3560.
Harmon, L. D., & Julesz, B. (1973). Masking in visual recognition: Effects of two-dimensional filtered noise. Science, 180, 1194–1197.
Hess, U., Adams, R. B., Grammer, K., & Kleck, R. E. (2009). Face gender and emotion expression: Are angry women more like men? Journal of Vision, 9(12):19, 1–8, http://www.journalofvision.org/content/9/12/19, doi:10.1167/9.12.19.
Townsend, J. T., & Ashby, F. G. (1982). Experimental test of contemporary mathematical models of visual letter recognition. Journal of Experimental Psychology: Human Perception and Performance, 8, 834–864.
Kanade, T., Cohn, J., & Tian, Y. (2000). Comprehensive database for facial expression analysis. Proceedings of the International Conference on Automatic Face and Gesture Recognition, 46–53.
Kohler, C. G., Turner, T., Stolar, N. M., Bilker, W. B., Brensinger, C. M., Gur, R. E., et al. (2004). Differences in facial expressions of four universal emotions. Psychiatry Research, 128, 235–244.
Kring, A. M., & Gordon, A. H. (1998). Sex differences in emotion: Expression, experience, and physiology. Journal of Personality and Social Psychology, 74, 686–703.
Laprevote, V., Oliva, A., Delerue, C., Thomas, P., & Boucart, M. (2010). Patients with schizophrenia are biased toward low spatial frequency to decode facial expression at a glance. Neuropsychologia, 48, 4164–4168.
LeDoux, J. E. (2000). Emotion circuits in the brain. Annual Review of Neuroscience, 23, 155–184.
Lundqvist, D., Esteves, F., & Öhman, A. (1999). The face of wrath: Critical features for conveying facial threat. Cognition and Emotion, 13, 691–711.
Majaj, N. J., Pelli, D. G., Kurshan, P., & Palomares, M. (2002). The role of spatial frequency channels in letter identification. Vision Research, 42, 1165–1184.
Martinez, A. M. (2003). Matching expression variant faces. Vision Research, 43, 1047–1060.
Massey, D. S. (2002). A brief history of human society: The origin and role of emotion in social life. American Sociological Review, 67, 1–29.
Neth, D., & Martinez, A. M. (2009). Emotion perception in emotionless face images suggests a norm-based representation. Journal of Vision, 9(1):5, 1–11, http://www.journalofvision.org/content/9/1/5, doi:10.1167/9.1.5.
Neth, D., & Martinez, A. M. (2010). A computational shape-based model of anger and sadness justifies a configural representation of faces. Vision Research, 50, 1693–1711.
Parish, D. H., & Sperling, G. (1991). Object spatial frequencies, retinal spatial frequencies, noise, and the efficiency of letter discrimination. Vision Research, 31, 1399–1415.
Pentland, A. (2000). Looking at people: Sensing for ubiquitous and wearable computing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 107–119.
Rotter, N. G., & Rotter, G. S. (1988). Sex differences in the encoding and decoding of negative facial emotions. Journal of Nonverbal Behavior, 12, 139–148.
Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review, 110, 145–172.
Schmidt, K. L., & Cohn, J. F. (2001). Human facial expressions as adaptations: Evolutionary questions in facial expression research. Yearbook of Physical Anthropology, 44, 3–24.
Smith, F. W., & Schyns, P. G. (2009). Smile through your fear and sadness: Transmitting and identifying facial expression signals over a range of viewing distances. Psychological Science, 20, 1202–1208.
Susskind, J., Lee, D., Cusi, A., Feinman, R., Grabski, W., & Anderson, A. K. (2008). Expressing fear enhances sensory acquisition. Nature Neuroscience, 11, 843–850.
Tian, Y. I., Kanade, T., & Cohn, J. F. (2001). Recognizing action units for facial expression analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 97–115.
Zebrowitz, L. A., Kikuchi, M., & Fellous, J. M. (2007). Are effects of emotion expression on trait impressions mediated by babyfaceness? Evidence from connectionist modeling. Personality and Social Psychology Bulletin, 33, 648–662.
Figure 1
 
Facial expressions, from left to right: happiness, sadness, fear, anger, surprise, disgust, and neutral. Resolutions from top to bottom: 1 (240 × 160 pixels), 1/2 (120 × 80 pixels), 1/4 (60 × 40 pixels), 1/8 (30 × 20 pixels), and 1/16 (15 × 10 pixels).
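The resolution ladder in the caption above can be reproduced by successive block averaging. The paper does not state its exact resampling method, so this box-filter sketch is an illustrative assumption, not the authors' pipeline:

```python
import numpy as np

def downsample(img, factor):
    """Reduce resolution by averaging `factor` x `factor` pixel blocks.

    A plain box filter; assumed here for illustration only.
    """
    h, w = img.shape
    assert h % factor == 0 and w % factor == 0
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Full-resolution stimulus: 240 x 160 pixels (rows x columns).
full = np.random.rand(240, 160)

# The five resolutions used in the study: 1, 1/2, 1/4, 1/8, 1/16.
for factor in (1, 2, 4, 8, 16):
    low = downsample(full, factor)
    print(low.shape)  # e.g. (15, 10) at the 1/16 resolution
```

Since all stimuli subtend the same visual angle (5.3 × 8 degrees), the low-resolution images would be displayed upscaled to the original size.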
Figure 2
 
Stimulus timeline. A white fixation cross on a black background is shown for 500 ms. Then, a stimulus image is shown for 500 ms, followed by a random noise mask for 750 ms. A 7AFC (seven-alternative forced-choice) task is used. After the subject's response, the screen goes blank for 500 ms and the process repeats.
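The trial structure in this caption can be written down as a small table of phases and durations. The response phase is subject-paced, so its duration is left open; the helper `trial_duration_ms` is a hypothetical name added for illustration:

```python
# Each trial phase with its duration in milliseconds, as described in the caption.
TRIAL_PHASES = [
    ("fixation_cross", 500),  # white cross on a black background
    ("stimulus", 500),        # face image at one of the five resolutions
    ("noise_mask", 750),      # random noise mask
    ("response", None),       # 7AFC: subject picks one of 7 emotion labels
    ("blank", 500),           # blank screen before the next trial
]

def trial_duration_ms(response_ms):
    """Total trial time given a hypothetical response latency."""
    return sum(d if d is not None else response_ms for _, d in TRIAL_PHASES)

print(trial_duration_ms(1000))  # 3250 ms for a 1-s response
```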
Figure 3
 
Recognition rates of the seven facial expressions as a function of image resolution. The horizontal axis defines the resolution and the vertical axis the recognition rate. For each emotion, solid lines connect pairs of points that are not statistically different, and dashed lines connect points that are. The horizontal dash-dotted line indicates chance level, at ∼14%.
Table 1
 
Confusion matrices. The leftmost column is the response (perception) and the first row of each matrix specifies the emotion class of the stimulus. The diagonal elements are the recognition rates and the off-diagonal entries correspond to the error rates. Resolutions from top to bottom: 1, 1/2, 1/4, 1/8, and 1/16. The chance level is 14%. An asterisk highlights the entries that are statistically different from noise. A 10-level grayscale palette color codes the percentages from 0 (light) to 1 (dark).
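The quantities described in this caption (diagonal recognition rates, chance level, and the asymmetry of confusions) can be computed directly from a confusion matrix. The matrix below is a toy example in the stated layout, with made-up numbers, not the paper's data:

```python
import numpy as np

# Toy 7 x 7 confusion matrix: rows are the subjects' responses, columns the
# stimulus emotion class, and each column sums to 1. Illustrative values only.
EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust", "neutral"]
C = np.full((7, 7), 0.02)
np.fill_diagonal(C, 0.88)
# Inject one asymmetric confusion: fear is sometimes reported as sadness,
# but not the other way around.
C[1, 2] += 0.04  # response "sadness" to a "fear" stimulus
C[2, 2] -= 0.04  # fear's recognition rate drops accordingly

recognition_rates = np.diag(C)  # diagonal entries = per-emotion recognition
chance = 1 / 7                  # ~14% for a 7AFC task

# Asymmetry: mistaking stimulus a for b vs. mistaking stimulus b for a.
asymmetry = C - C.T             # nonzero off-diagonal => asymmetric confusion
print(asymmetry[1, 2])          # positive: the sadness/fear confusion is one-way
```

With real data, nonzero entries of `asymmetry` are exactly the pattern the paper reports: misclassifying emotion a as b does not imply b is mistaken for a.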
Table 2
 
Confusion matrices of 14 female subjects. Same notation as in Table 1.
Table 3
 
Confusion matrices of 19 male subjects.
Table 4
 
Confusion matrices of 16 Caucasian subjects.
Table 5
 
Confusion matrices of 15 non-Caucasian subjects.