Article | November 2014
Eye movements during emotion recognition in faces
Author Affiliations
  • M. W. Schurgin
    Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
    maschurgin@jhu.edu
  • J. Nelson
    Loyola University Chicago, Chicago, IL, USA
  • S. Iida
    Nagoya University, Chikusa-ku, Nagoya, Japan
  • H. Ohira
    Nagoya University, Chikusa-ku, Nagoya, Japan
  • J. Y. Chiao
    Northwestern University, Evanston, IL, USA
  • S. L. Franconeri
    Northwestern University, Evanston, IL, USA
Journal of Vision, November 2014, Vol. 14(13), 14. https://doi.org/10.1167/14.13.14
Abstract

When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.

Introduction
Humans are remarkable in their ability to rapidly and efficiently decode emotional expressions, reflecting their importance for successful social interaction (Schyns, Petro, & Smith, 2009; Smith, Cottrell, Gosselin, & Schyns, 2005). Deficits in the ability to accurately discern and respond to the emotional state of others are associated with a range of socioemotional disorders, from autism to psychopathy (Baron-Cohen & Wheelwright, 2004; Blair, 2005; Marsh, Kozak, & Ambady, 2007; Sasson et al., 2007). Across cultures, people recognize at least six primary expressions of emotion from the face, including joy, sadness, fear, anger, disgust, and surprise (Ekman & Friesen, 1975; but see Jack, Blais, Scheepers, Schyns, & Caldara, 2009, for evidence of diversity in these abilities), as well as facial expressions of self-conscious emotions, such as shame or embarrassment (Hejmadi, Davidson, & Rozin, 2000; Keltner & Buswell, 1997) and pride (Tracy & Robins, 2004). 
Facial expressions of basic emotion are produced with characteristic configurations of facial muscle movements that provide the perceptual basis for discriminating between distinct types of emotional expressions (Ekman & Friesen, 1978). For instance, the facial expression of fear consists of a widening of the eyes and flexing of mouth muscles whereas the facial expression of joy consists of a restriction of the eyes and an alternative flexing of mouth muscles (Ekman & Friesen, 1978). Emotional expressions may have originated as functional adaptations to benefit the expresser and only become communicative as a secondary function through heredity and continued practice (Darwin, 1872). For example, when subjects posed fear expressions, patterns of eye movements and nasal volume suggest a perceptual enhancement of one's environment whereas the opposite pattern was observed for disgust (Susskind et al., 2008). 
Different regions of a face contain more or less information required for categorization of facial emotion (Smith et al., 2005; Spezio, Adolphs, Hurley, & Piven, 2007) as well as facial identity (Gosselin & Schyns, 2001; Zhao, Chellappa, Phillips, & Rosenfeld, 2003). Numerous studies show evidence for strategic deployment of attention to more diagnostic regions, typically indexed by eye movements, an overt reflection of attentional deployment (Kowler, Anderson, Dosher, & Blaser, 1995). For example, rhesus monkeys spend significantly more time fixating the eyes while viewing threatening faces than while viewing other faces, such as a lip smack or yawn, for which they fixate the mouth more strongly (Nahm, Perret, Amaral, & Albright, 1997). Humans display individual differences in fixation patterns when viewing faces, and these patterns differ within a single person across different tasks, yet they are surprisingly reliable across examples for a given participant and task (Walker-Smith, Gale, & Findlay, 1977). 
While it is not always clear whether such attentional strategies actually improve performance, there are some cases in which selecting specific regions of the face appears necessary for successful emotion recognition. For example, patients with bilateral amygdala damage are relatively inaccurate at recognizing fear from the face compared to healthy controls (Adolphs, Tranel, Damasio, & Damasio, 1994), and a large contributor to this deficit may be a lack of attention to the eye region of the face. Remarkably, when given explicit instruction to look or move their eyes toward the eye region of a facial expression, amygdala damage patients are able to recognize fear (Adolphs et al., 2005), suggesting that the patient's core deficit is not in fear recognition, per se, but rather in selectively looking at the regions of the face most diagnostic for successful fear recognition. As a potentially similar example, autistic individuals typically look longer at nondiagnostic relative to diagnostic (e.g., eyes, nose, mouth) regions of the face during emotion recognition, likely contributing to emotion recognition deficits (Pelphrey et al., 2002). Other work on a typical population showed that when eye-gaze patterns were restricted, memory encoding of face identity was impaired, relative to a condition in which gaze was allowed to range freely (Henderson, Williams, & Falk, 2005). 
Strategic deployment of attention within a face is driven not just by low-level visual properties but also by the traits and goals of the observer. Spider phobics display slower eye-movement patterns than controls when viewing fear-relevant stimuli (Pflugshaupt et al., 2007). Pessimists spend more time than optimists looking at negative scenes, such as images of skin cancer (Segerstrom, 2001). People high in neuroticism look longer at the eye region of fear faces than those low in neuroticism (Perlman et al., 2009). Easterners fixate the eye region of the face to a greater extent than Westerners, who sample more evenly across facial regions (Jack et al., 2009). The affective context in which identical faces are embedded has also been shown to dramatically alter emotional perception (Aviezer et al., 2008). These findings reveal individual and group variation in eye movements during emotion recognition, driven by goal-driven biases in cognitive processing. 
The processing demands of emotion recognition appear to trigger specific patterns of attention across a face. Here we explore how attentional deployment, as reflected by eye-movement patterns, differs during detection of six classic emotional expressions. We also test whether these patterns are stimulus-driven (reflecting properties of the emotional face stimulus itself) or goal-driven (reflecting perceptual strategies that would occur even on a neutral face). 
Experiment 1: Eye tracking of emotional judgments
We recorded eye movements as participants discriminated neutral from emotional expressions of particular emotions. We blocked these decisions by emotion type so that we could examine fixation patterns on emotional faces (reflecting both stimulus-driven and goal-driven patterns) and neutral faces (reflecting only goal-driven patterns). 
Methods
Participants
Fifty-one college-aged participants (23 male, 28 female) completed Experiment 1. All participants gave consent and received course credit for their participation. Results from an additional 16 participants were not analyzed due to unreliable eye recordings, typically due to interference from eyeglasses, contact lenses, or mascara. 
Stimuli
Stimuli for the current experiment consisted of 228 grayscale photographs taken from the Montreal Set of Facial Displays of Emotion (Beaupre & Hess, 2005). This facial photo set comprised 12 unique facial identities, each posing in one of six emotional expressions as well as neutral. These identities consisted of an equal number of African Americans, Asians, and Caucasians, distributed evenly across gender (six male, six female) to maximize the generalizability of our findings across these variables, which can strongly affect face recognition performance (Elfenbein & Ambady, 2002; Jack et al., 2009; O'Toole, Deffenbacher, Valentin, & Abdi, 1994; Wright & Sladden, 2003). 
For each of these identities, we created linear interpolation morphs between the neutral face and each of the six emotions (Figure 1A) at four levels: 0% emotional (only the neutral face, with no contribution from the emotional face) and 20%, 40%, and 60% emotional intensity (contribution from the emotional face). This manipulation ensured that the judgment task would be difficult, maximizing the importance of selecting the most diagnostic regions of the face. All photographs were standardized for size (800 × 600 pixels) and background color. The majority of the face portion of the image subtended an average of 8° in width and 10° in height. A more conservative measure of the boundaries of a “face” (i.e., the vertical measure of the face includes the entire forehead up to the hairline) results in an image 9.8° in width and 12.2° in height. Both values roughly conform to the visual angle of a face in normal human interactions (see also Henderson et al., 2005; Hsiao & Cottrell, 2008). 
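For illustration, this morphing step can be approximated as a pixel-wise linear blend between the neutral and emotional photographs of one identity. The sketch below uses NumPy and Pillow with hypothetical file names; the original stimuli were produced with a dedicated morphing procedure that is not specified in the text, so a pure cross-fade is only an approximation.

```python
# Minimal sketch of a pixel-wise linear interpolation ("cross-fade") between a
# neutral and an emotional photograph. File names and the blend-only approach
# are illustrative assumptions, not the authors' stimulus pipeline.
import numpy as np
from PIL import Image

def blend_faces(neutral_path, emotional_path, intensity):
    """Return a grayscale morph containing `intensity` (0.0-1.0) of the emotional face."""
    neutral = np.asarray(Image.open(neutral_path).convert("L"), dtype=np.float32)
    emotional = np.asarray(Image.open(emotional_path).convert("L"), dtype=np.float32)
    morph = (1.0 - intensity) * neutral + intensity * emotional
    return Image.fromarray(morph.astype(np.uint8))

# Hypothetical usage: build the 0%, 20%, 40%, and 60% intensity levels.
for pct in (0, 20, 40, 60):
    blend_faces("id01_neutral.png", "id01_fear.png", pct / 100).save(
        f"id01_fear_{pct:02d}.png")
```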
Figure 1
 
Illustration of face stimuli (A) and experimental paradigm (B). Participants were presented with blocks of trials containing half neutral faces (0% intensity) and half emotional faces (varying from 20%–60% intensity) and were asked to judge whether each face had any amount of a particular emotion present (e.g., fear).
Apparatus
The experiment took place in a dimly lit room. Stimuli were presented on a 17-in. CRT monitor (resolution of 800 × 600 pixels, 85-Hz refresh rate, 25.5 pixels/° of visual angle) located approximately 63 cm from participants' eyes and were generated using SR Research Experiment Builder on Windows XP. Viewing distance was maintained throughout the experiment. Eye movements were recorded at a sampling rate of 1000 Hz (pupil-only mode) with an EyeLink 1000 eye tracker (SR Research, 0.15° resolution) using a desktop camera-mount illuminator. Normal sensitivity settings were used, with the saccade threshold set at 30°/s for velocity or 8000°/s² for acceleration. A fixation was defined as any period that was not a blink or saccade. Responses were collected using a USB Sidewinder gamepad. 
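The stated conversion of 25.5 pixels/° follows from the monitor resolution, its physical width, and the viewing distance. A minimal check is sketched below; the viewable screen width (roughly 35 cm for a 17-in. CRT) is an assumption, since it is not reported in the text.

```python
# Sketch of the pixels-per-degree calculation implied by the apparatus description.
# The physical viewable width of the 17-in. CRT is not given; ~35 cm is an assumption.
import math

def pixels_per_degree(h_resolution_px, screen_width_cm, viewing_distance_cm):
    """Average pixels per degree of visual angle across the screen width."""
    screen_width_deg = 2 * math.degrees(
        math.atan(screen_width_cm / (2 * viewing_distance_cm)))
    return h_resolution_px / screen_width_deg

print(pixels_per_degree(800, 35.0, 63.0))  # ~25.8, close to the reported 25.5 px/deg
```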
Procedure
This experiment consisted of a total of 144 trials (Figure 1B) presented as six blocks (one block per emotion type) shown in one of six possible pseudorandom counterbalanced orders. Each block consisted of 24 trials: 12 neutral and 12 emotional face trials (divided equally among 20%, 40%, and 60% intensity), presented in randomized order. The 12 facial identities could not be fully crossed within each block or participant but were counterbalanced such that they were equally frequently associated with each emotional intensity level across participants. The experiment began with an eye-tracking calibration screen consisting of nine dots spread around the screen. This calibration was repeated, as needed, throughout the experiment. 
At the beginning of each trial, participants fixated at a central cross on a gray screen. Participants started the trial by pressing a button on the gamepad while fixating within a required 2° radius of screen center. A photo would then appear in the center of the screen for 3 s, followed by a blank gray screen. Participants then judged whether the photo depicted a neutral face or a face with even a slight amount of emotion. They indicated their responses by pressing one of two buttons on the gamepad. 
Eye-tracking analysis
For each facial stimulus, 21 face regions were defined according to the template shown in Figure 2. All analyses used percentage of total fixation time over these regions, and Figure 2 also shows overall fixation rates for each region across all conditions. Substituting the number of fixations for total fixation duration did not change the patterns of results reported below; the number of fixations and total fixation duration were highly correlated. All percentages omit fixations that began before the presentation of the image and fixations that fell beyond the bounds of the face image. For analyses examining the time course of fixation rates, the dependent measure is the percentage of all fixations within a given region within a given time window. This measure was not normalized relative to the size of each region because there appears to be little relationship between region size and fixation rate (see Figure 2). Fixation rates are not directly compared across regions because these rates are not independent of each other. Instead, for each region, fixation rates are compared across different conditions and time windows. The distribution of fixation time across regions yielded five main facial regions (eyes, upper nose, lower nose, upper lip, nasion) that together accounted for 88.03% of all fixations. These high fixation frequencies were not a result of these areas being physically larger: some of the largest defined areas within the face showed extremely low fixation rates, including the right and left cheek (2.6%, 2.2%), the forehead (0.9%), the hair (0.2%), and the largest region, the background (0.23%). Hence, we restricted our subsequent statistical analyses of eye-movement data to these five main facial regions. 
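The dependent measure (percentage of total fixation time per region) can be computed from a standard fixation report. The sketch below is a simplified illustration: it assumes rectangular regions and a plain list of fixations with durations, whereas the study used 21 hand-defined regions, so the names and coordinates here are hypothetical.

```python
# Sketch of the percentage-of-total-fixation-time measure over facial ROIs.
# ROIs are simplified to axis-aligned rectangles; shapes, names, and coordinates
# are illustrative assumptions, not the study's 21 hand-drawn regions.
from collections import defaultdict

# (x_min, y_min, x_max, y_max) in image pixels -- hypothetical coordinates.
ROIS = {
    "eyes":       (250, 200, 550, 260),
    "upper_nose": (350, 260, 450, 320),
    "lower_nose": (350, 320, 450, 380),
    "upper_lip":  (330, 380, 470, 420),
    "nasion":     (360, 160, 440, 200),
}

def region_of(x, y):
    """Return the ROI containing the fixation, or 'other' if none does."""
    for name, (x0, y0, x1, y1) in ROIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def fixation_time_percentages(fixations):
    """fixations: iterable of (x, y, duration_ms) tuples for one condition."""
    time_per_region = defaultdict(float)
    for x, y, duration in fixations:
        time_per_region[region_of(x, y)] += duration
    total = sum(time_per_region.values()) or 1.0
    return {name: 100.0 * t / total for name, t in time_per_region.items()}

# Hypothetical usage with three fixations:
print(fixation_time_percentages([(400, 230, 310), (395, 300, 280), (410, 395, 295)]))
```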
Figure 2
 
Illustration of 21 facial regions of interest (ROIs). We identified five main facial ROIs—eyes (green), upper nose (blue), lower nose (orange), upper lip (red), and nasion (purple)—that accounted for more than 88% of all fixations.
Results
Behavioral results: Emotional intensity ratings
We first examined the percentage of trials that participants rated as “emotional” as a function of emotion type and intensity (see Figure 3). A one-way ANOVA with the six emotions as a factor revealed a significant effect of emotion type on the emotional judgment task, F(5, 245) = 10, p < 0.001. This effect was driven by higher overall emotional ratings during blocks containing sad faces (M = 66.2%) relative to all other emotion blocks, all ts(49) > 4.2, all ps < 0.001. Across all trials, joy (M = 62.2%) and disgust (M = 62.3%) were rated slightly lower in emotional intensity compared to sadness, both ts(49) > 4.6, ps < 0.001, and higher compared to fear and shame, both ts(49) > 3.8, both ps < 0.001. Shame (M = 58.5%) and anger (M = 58.9%) were slightly lower but still higher than fear (M = 54.8%), both ts(49) > 3.2, both ps < 0.002. 
Figure 3
 
Percentage of trials rated as “emotional” as a function of emotion type and intensity. These data demonstrate the varying difficulty of the task, which was meant to encourage selection of the most diagnostic regions of the face. Across all emotions, performance did not reach above chance until 40% intensity and was near ceiling for 60% intensity images.
As a measure of the effect of the emotional intensity manipulation on ratings, we compared the slope of the function relating emotion intensity (0%, 20%, 40%, 60%) in the face image to ratings. We omitted 60% emotion from this slope because ratings overall were above 90% and produced little slope variance. Among emotion blocks, there were differences in the slope from 0% to 40% emotion, F(5, 245) = 17.7, p < 0.001. Joy (M = 77% difference in rating value) and disgust (M = 74% difference) were higher than fear (M = 62% difference), ts(49) > 2.4, ps < 0.021. Anger (M = 47% difference), sadness (M = 45% difference), and shame (M = 43% difference) were all lower than fear, all ts(49) > 2.47, ps < 0.01. 
Behavioral results: Emotional judgment accuracy
We then evaluated whether these emotional judgments were “correct,” according to our definition that any face containing 0% intensity of the emotional face was “neutral,” and any face containing 20%–60% intensity of the emotional face should be judged as “emotional.” This definition of “correct” is as arbitrary as our threshold for emotional content. The goal of the task was to engage the participant in recognizing a particular emotion, and our analysis of performance is only intended as confirmation that participants performed the task and that performance was at neither floor nor ceiling level. 
Our analysis confirmed these assumptions. Participants' overall accuracy ranged from 66.0% to 86.8% with a mean of 75.5%. Emotion judgment accuracy differed across the six emotion block conditions, F(5, 245) = 9.6, p < 0.001. Participants recognized joy (M = 82.7% correct) and disgust (M = 79.5%) with higher accuracy than all others, all ts(49) > 2.3, all ps < 0.03. Participants showed lower accuracy when recognizing fear (M = 76.2%) and anger (M = 73.5%). However, participants were more accurate in recognizing fear compared to sadness (M = 70.7%) and shame (M = 70.4%), both ts(49) > 2.6, both ps < 0.013. These accuracy rates suggest that, although there were differences among emotion blocks, overall performance was roughly similar and far from floor or ceiling levels, confirming that participants performed the judgment task as designed. 
Gaze patterns across emotions
The following analyses only included the first four fixations of each trial, with the “first” fixation defined as the first new landing of the eye after the appearance of the image. As will be detailed later, participants made an average of 8.3 fixations per trial, and we found the first few fixations to be the most diagnostic in terms of capturing differences across emotions. Although some of our later analyses suggest that only the first two fixations are most critical for emotion recognition, here we conservatively analyze the first four and examine the changes in pattern across this initial range in a later section. For the following analyses, isolating the data to the first four fixations produces a pattern of results that is qualitatively similar to using more (e.g., eight) fixations. All analyses were performed irrespective of whether a response on a trial was correct or incorrect. 
Emotional faces
For each region, fixation rates on emotional faces (those with 20%, 40%, or 60% emotional intensity) were submitted to a one-way ANOVA with emotion block as a factor. In these trials, eye movements can be affected by both the emotional content of the face images (stimulus-driven information) and the fact that the observer is currently seeking that emotion type (goal-driven information). Figure 4A shows fixation rates for each face region across the six emotions. We summarize the major findings at the end of this section, with additional information and statistical analyses detailed in the supplemental materials. 
Figure 4
 
(A) For each emotional face, fixation time spent within each main ROI. (B) For each neutral face, fixation time spent within each main ROI. * Designates t test p < 0.05 relative to the mean for each emotion.
For the eyes, there was a main effect of face emotion, F(5, 250) = 27.2, p < 0.001, whereby participants looked longer at the eye region in facial expressions of anger (M = 35.2%), fear (M = 30.8%), sadness (M = 34.2%), and shame (M = 34.3%) and looked less within the eye region for disgust (M = 19.7%) and joy (M = 19.5%) faces, relative to the mean (M = 28.9%), all ts(50) > 8.2, ps < 0.001. For the upper nose, there was an effect of emotion condition, F(5, 250) = 4.7, p < 0.001, whereby participants looked marginally less at joyful faces (M = 18.7%), relative to the mean (M = 21.8%), all ts(50) > 1.58, all ps < 0.1. For the lower nose, there was no effect of emotion condition, F < 1. For the upper lip, there was an effect of emotion condition, F(5, 250) = 27.2, p < 0.001, whereby participants looked longer at the upper lip for disgust (M = 15.5%) and joy faces (M = 20.9%) and less at the upper lip for anger (M = 8.2%) and sad (M = 6.8%) faces, relative to the mean (M = 12.1%), all ts(50) > 2.4, all ps < 0.02. For the nasion, there was an effect of emotion condition, F(5, 250) = 3.1, p < 0.01, whereby participants looked less within the nasion for fear faces (M = 3.8%), relative to the mean (M = 6.1%), t(50) = 3.9, p < 0.001. 
To better visualize the differences among emotion conditions, Figure 5 graphically illustrates fixation rates for each of the top five regions across the six emotion conditions, relative to the average rate of region fixation collapsed across all emotions. The top row of the figure depicts a correlate of the most diagnostic information in the image that an ideal observer might use to distinguish between emotional and neutral faces: an image subtraction of the 60% emotional image from the neutral image for a single face identity (this face was chosen because it had uniquely low head movement across photographs, which is required for this technique to reveal a useful contrast). Areas of greater difference are depicted in white and areas of lower difference in black. 
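This top-row visualization amounts to an absolute pixel difference between the 60% emotional and neutral photographs of the same identity, rescaled so that larger differences appear white. A minimal sketch, with hypothetical file names, follows.

```python
# Sketch of the Figure 5 (top row) image subtraction: absolute grayscale difference
# between the 60% emotional and the neutral photograph of one identity, rescaled so
# that large differences appear white. File names are illustrative assumptions.
import numpy as np
from PIL import Image

def difference_map(neutral_path, emotional_path):
    neutral = np.asarray(Image.open(neutral_path).convert("L"), dtype=np.float32)
    emotional = np.asarray(Image.open(emotional_path).convert("L"), dtype=np.float32)
    diff = np.abs(emotional - neutral)
    diff = 255.0 * diff / max(diff.max(), 1.0)   # normalize: white = most diagnostic
    return Image.fromarray(diff.astype(np.uint8))

difference_map("id07_neutral.png", "id07_joy_60.png").save("id07_joy_diffmap.png")
```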
Figure 5
 
Top row: Image subtractions between the emotional and neutral versions of a particular face from the image set used in Experiment 1, revealing areas of maximal diagnostic information (white) for judging presence of emotion in the face. Middle row: Emotional fixation rates for each of the top five regions across the six emotion conditions, relative to the average rate of region fixation collapsed across all emotional face trials. Bottom row: Diagnostic information for judgment of each emotion type using the “Bubbles” technique (adapted with permission from Smith et al., 2005).
The middle row of the figure graphically illustrates fixation rates for each of the top five regions across our six emotion conditions, relative to the average rate of region fixation collapsed across all emotions. 
The bottom row of the figure contains images from Smith and colleagues (2005) using the “bubbles” technique to find the maximally diagnostic regions of faces for judging each emotion. The similarities within columns of this figure are striking, despite the independence of the data sources for each. For example, in the middle row, note the relatively high fixation rates on the lips for joy and disgust in contrast to the relatively high fixation rates on the eyes for anger, sadness, and shame. Both the top and bottom rows confirm the relatively high diagnostic value of these areas for those particular emotion judgments. 
Neutral faces
In each emotion block, half of the faces were nonemotional (neutral). Because these faces were identical across blocks, any differences in fixation rates represent solely goal-driven strategies resulting from seeking a given emotion and cannot be due to stimulus-level differences in the images. The results will show that weaker versions of emotion-specific strategies endure across these neutral faces. Figure 4B depicts fixation rates within each block across these neutral faces. For each region of the face, we report the results of a one-way ANOVA seeking differences among the six emotion block types but focusing only on the neutral faces. 
For the eyes, the emotion block analysis revealed an effect of emotion condition across neutral faces, F(5, 250) = 7.7, p < 0.001, whereby participants looked longer for anger (M = 33.4%), fear (M = 34.3%), sad (M = 36%), and shame (M = 36.1%) faces. By contrast, participants looked less for disgust (M = 28.6%) and joy (M = 29.1%) faces, relative to the mean (M = 32.9%), all ts(50) > 10.3, all ps < 0.001. For the upper nose, there was still no effect of emotion condition across neutral faces, F < 1. For the lower nose, there was likewise no effect of emotion condition across neutral faces, F < 1. For the upper lip, there was still an effect of emotion condition across neutral faces, F(5, 250) = 8.6, p < 0.001, due to relatively high fixation rates for the joy (M = 14%) condition (disgust was no longer significant) and lower fixation rates for the sadness (M = 7.9%) condition (anger was no longer significant), relative to the mean (M = 9.9%), all ts(50) > 1.97, all ps ≤ 0.055. For the nasion, there was still a marginal effect of emotion condition, F(5, 250) = 2.3, p = 0.04. Although the pattern was similar, the change in the mean value caused a new set of emotion conditions to differ from the mean: fear was no longer significantly different, but anger (M = 8.7%) had relatively high fixation rates, t(50) = 1.84, p = 0.07. 
In summary, even when the neutral faces were identical across emotion judgment blocks (e.g., anger, joy), there were still effects of emotion judgment type on patterns of fixation. In fact, the majority of effects found in the emotional faces were still present in the neutral faces although the effects were not as robust. One exception was fixations to the nasion, where neutral faces showed a pattern of fixation not present for emotional faces, but this effect may be due to the changing baseline of the mean number of fixations for nasion fixations overall. The fact that most fixation trends remained for neutral faces that were identical across blocks suggests that a substantial portion of the effect of seeking a specific emotion is due to goal-driven influences on fixation patterns. 
Summary of intensity level analyses
In addition to dividing face trials into emotional (20%, 40%, and 60% emotional) and neutral (0% emotional) sets, we also examined differences among the emotional face trials as they transitioned from 20% to 60%. We present a summary of this analysis here and provide more detailed information and statistical analyses in the supplemental materials. As the intensity of angry expressions increased, fixations increased to the eye region except for the anger faces shown at 60% intensity, which showed a reduction in fixations to the eye region. It is possible that the eyes were most diagnostic when the emotion was more ambiguous. A similar pattern was observed at the nasion, where more neutral faces were fixated to a greater extent than more emotional faces. For disgust judgment blocks, there was a particularly strong tradeoff in fixation position. When faces were more neutral, participants fixated at the eyes more frequently. However, as the intensity of the disgust expression increased, participants looked less at the eye region and more toward the upper nose and upper lip. As the intensity of fearful expressions increased, participants looked toward the upper nose and upper lip and less at the nasion. As the intensity of joyful expressions increased, there was another particularly strong effect as fixations increased to the upper lip and decreased to the eyes and lower nose. As the intensity of sad expressions increased, the distribution of fixations over the five face regions did not significantly change. As the intensity of shameful expressions increased, at 60% intensity, there were more fixations over the upper lip, trading off with fewer fixations over the lower nose. 
Two of the largest effects of emotional intensity were for disgust and joy expressions. For disgust, as emotional intensity conveyed in the face increased, people showed greater fixation particularly to the upper nose (close to the particularly salient wrinkle that appears between the eyes during an expression of disgust), and for joy, people showed greater fixation to the upper lip (the location of the smile). In both cases, when faces become more neutral, these fixation rates decrease, and instead the eyes are fixated more frequently. 
Gaze patterns across time
The analyses above collapse the first four fixations across viewing time. Here, we examine how these patterns evolve over that time. Understanding the time course of fixation patterns can establish whether the patterns observed above are stable and persistent across viewing time or if fixation patterns are more variable and dynamic. We examined both the probability of fixating on a given region of each type of emotional face within each 250-ms time interval as well as for each fixation number. Although the patterns were highly similar, fixation number appeared to reveal more differences, especially in the earlier viewing period. In contrast, for the time interval analysis, differences in fixation patterns among conditions appeared to be more “smeared” across time, suggesting that the fixation number analysis might reveal more about participants' fixation patterns. 
The average duration of each fixation was 301 ms, and participants made between one and 15 fixations. Although participants made up to 15 fixations, there were few trials with this many fixations. Because the average number of fixations was 8.3 per image, we examined only the first through the eighth fixation. For each of the following analyses, we use the term “fixation rate,” defined as “for all of the Nth fixations on an image in the current condition, the percentage that are within a given region.” This manner of calculating fixation rates removed the influence of the general drop in the number of total fixations available as fixation number increases (i.e., there are always more third fixations than fourth fixations). 
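This fixation-rate measure conditions on ordinal fixation position rather than elapsed time. A short sketch of the computation over a tidy per-fixation table is shown below; the column names (fix_index, region, emotion) are assumptions about the data layout, not the authors' actual format.

```python
# Sketch of the "fixation rate" measure: for all Nth fixations in a condition,
# the percentage that land in a given region. Uses pandas; column names are
# assumptions about the data layout.
import pandas as pd

def fixation_rate_by_number(fixations: pd.DataFrame, max_fix: int = 8) -> pd.DataFrame:
    """Rows = fixation number (1..max_fix), columns = region, values = % of fixations."""
    subset = fixations[fixations["fix_index"] <= max_fix]
    counts = subset.groupby(["fix_index", "region"]).size().unstack(fill_value=0)
    return 100.0 * counts.div(counts.sum(axis=1), axis=0)

# Hypothetical usage: rates for the anger condition only.
# rates = fixation_rate_by_number(df[df["emotion"] == "anger"])
```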
Across previous data sets using similar images and tasks as well as the present data set, we observed that fixation patterns differed primarily among the first one through four fixations. The first few fixations were typically more variable across time and condition, and the remaining fixation rates tended to stabilize. These later fixations reveal differences among conditions but vary far less over time. Therefore, we decided to focus our analyses on the first four fixations (a detailed analysis across all eight fixations is included in the supplemental section). 
In all of the following analyses, we compare fixation rates for each region across the six emotion recognition conditions. To maximize the differences among conditions, for these analyses involving time, we use only the emotional face images and omit the neutral images. A separate analysis of the neutral emotion images revealed similar patterns but to a weaker extent, similar to the neutral image analysis reported above. 
Patterns of gaze across time by emotion
Figure 6 provides fixation distributions for the first three fixations on emotional (20%–60% intensity) face trials for a particular face identity presented in the study. Overall, the eyes were fixated less across all fixations in the joy and disgust conditions, especially during the first three fixations. In contrast, the eyes were fixated at uniformly high rates across time in the anger and sadness conditions and at increasing rates across time in the fear and shame conditions. The upper nose was a consistently popular target across all emotions during the first fixation. However, despite this early preference, there was still considerable variability in fixation patterns across emotions. The lower nose showed wide variability across all emotions at the first fixation, but afterward fixation rates remained much more consistent across emotions. The upper lip was fixated at particularly high rates during the second fixation for joy and disgust and, to a weaker degree, fear. The nasion showed wide variability in fixation patterns across emotions although there was a strong fixation preference at the first fixation for shame. 
Figure 6
 
Fixation distributions for the first three fixations on emotional (20%–60%) face image trials for one face identity. As participants made more fixations, distinct patterns emerged across emotions in what regions were fixated most. Specifically, at the first fixation, the upper nose was a particular popular fixation location although there was still considerable variability at the region across emotions. As participants made additional fixations, a general pattern emerged such that there was increased fixation at the upper lip for joy and disgust whereas there was increased fixation at the eyes for anger, sadness, and shame.
Asymmetry analysis
Previous research has shown that initial fixations toward faces are biased toward the left side of the face, followed by a rightward fixation (Everdell, Marsh, Yurick, Munhall, & Paré, 2007). We tested for this effect in the present data by coding each fixation as being toward the left or right side of the face, excluding the center regions (upper nose, lower nose, upper lip, lower lip). An ANOVA with fixation number (one through eight) and block type (emotional or neutral) as factors showed a main effect of fixation number, F(1, 7) = 4.208, p < 0.001. This effect was driven by the first fixation, and removing that fixation from the analysis eliminated the main effect, F(1, 6) = 0.579, p = 0.747. The first fixation, across block types, showed a strong bias toward the left side of the screen (M = 58.1%) in comparison to fixations two through eight (M = 46.2%), which demonstrated a slight bias toward the right side of the screen. These effects confirm the initial leftward bias in face fixation patterns. Possible roots of this effect, including a learned strategy that the right side of the face contains the most information or a general perceptual bias toward the left visual hemifield, are discussed in detail in Everdell et al. (2007). 
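The left/right coding used here can be reproduced by dropping fixations in the midline regions and classifying the remaining fixations by their horizontal position. The sketch below assumes a pandas table with x coordinates and an 800-pixel-wide image; these details are illustrative, not taken from the authors' analysis code.

```python
# Sketch of the left-bias analysis: exclude fixations in midline regions (upper nose,
# lower nose, upper lip, lower lip), then code each remaining fixation as left or
# right of the image's vertical midline. Column names and the 400-px midline are
# assumptions.
import pandas as pd

MIDLINE_REGIONS = {"upper_nose", "lower_nose", "upper_lip", "lower_lip"}

def left_bias_by_fixation(fixations: pd.DataFrame,
                          midline_x: float = 400.0,   # image is 800 px wide
                          max_fix: int = 8) -> pd.Series:
    lateral = fixations[~fixations["region"].isin(MIDLINE_REGIONS) &
                        (fixations["fix_index"] <= max_fix)].copy()
    lateral["is_left"] = lateral["x"] < midline_x
    return 100.0 * lateral.groupby("fix_index")["is_left"].mean()

# Expected pattern from the text: ~58% left on the first fixation,
# a slight rightward bias (~46% left) on fixations two through eight.
```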
Fixation patterns predict facial emotion
If the type of facial emotion displayed (or sought by the observer) systematically affects the observer's pattern of eye movements, it should be possible to predict the type of face displayed (or sought) by statistical analysis of the eye movement sequence for any given trial. We separately submitted the first 1–N fixation regions (where N = 1 to 8) from every trial in the data set to a naive Bayesian classifier for categorical data (the NaiveBayes class of Matlab), using two thirds of the data for training and one third for testing. We calculated the average classification accuracy across 100 fit/test iterations, selecting train/test sets independently for each iteration for (a) all trials, (b) emotional image–only trials, and (c) neutral trials. 
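The classification analysis treats the ordered regions of the first N fixations on a trial as categorical features and the emotion block as the label. The sketch below substitutes scikit-learn's CategoricalNB for MATLAB's NaiveBayes class; the data layout, variable names, and encoding are assumptions.

```python
# Sketch of the trial-classification analysis: predict the emotion block from the
# ordered regions of the first N fixations, averaging accuracy over repeated
# two-thirds/one-third train/test splits. scikit-learn stands in for MATLAB here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

def classify_trials(region_sequences, emotion_labels, n_fixations=2, n_iterations=100):
    """region_sequences: one list of fixated-region names per trial (>= n_fixations long)."""
    X_raw = np.array([seq[:n_fixations] for seq in region_sequences], dtype=object)
    encoder = OrdinalEncoder()
    X = encoder.fit_transform(X_raw)                      # region names -> integer codes
    n_categories = max(len(c) for c in encoder.categories_)
    y = np.array(emotion_labels)
    accuracies = []
    for seed in range(n_iterations):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=1 / 3, random_state=seed)
        clf = CategoricalNB(min_categories=n_categories).fit(X_tr, y_tr)
        accuracies.append(clf.score(X_te, y_te))
    return float(np.mean(accuracies))

# Chance with six emotions is ~16.7%; the paper reports peaks of roughly 20%-25%.
```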
Figure 7 depicts classification accuracy for each condition as well as chance prediction (for six emotions, 16.6%). Classification accuracy of test trials was highest for the emotional image–only trial set (with a peak of around 25% accuracy), consistent with the stronger fixation patterns described in our other analyses. Neutral-image trial classification accuracy was lower but still well above chance (with a peak of around 20% accuracy). Including all trials led to performance in between these two (with a peak of around 23% accuracy). Consistent with Hsiao and Cottrell (2008), who showed that the first one to two fixations are most critical for face processing, the diagnostic value of our fixation data was concentrated primarily in the first two fixations. It is clear from Figure 7 that the bulk of predictive power is reached by the second fixation. 
Figure 7
 
Classification accuracy for each condition (emotional face trials, neutral face trials, both trial types, and chance prediction). Classification accuracy for emotional trials was highest, consistent with the stronger fixation patterns observed in our other analyses. Neutral trial classification accuracy was lower but still well above chance. Consistent with the observed fixation patterns of humans, the diagnostic value of fixation data was primarily reached in the first two fixations.
Finally, we examined the confusion matrix produced by the classifier (Figure 8): When the wrong emotion is predicted, which emotions are systematically confused because they are most similar? We focused here on two subsets of trials: the highest emotion (60%) images and the neutral image trials. For the highest emotion images, two pairs of emotions stood out as having the most confusable fixation patterns: joy and disgust and anger and sadness. This confusability is consistent with the results depicted in Figure 5, which reveals strong similarities between the fixation patterns produced by these pairs of images. Two pairs of emotions stood out as the least confusable: joy and sadness and joy and anger. This is also consistent with the results depicted in Figure 5, in which the fixation pattern for joy contrasts strongly with the pattern for anger and sadness. For the neutral-trial subset, the most confusable image pair was again joy and disgust, and the least confusable was again joy and sadness. Other pairings did not stand out as strong outliers for this subset. 
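The "folded" matrices in the bottom row of Figure 8 can be produced by symmetrizing the confusion matrix so that actual/predicted and predicted/actual confusions are pooled. A minimal sketch follows; averaging the two directions is one reasonable reading of the folding operation described in the caption, not necessarily the authors' exact computation.

```python
# Sketch of folding a confusion matrix along its identity diagonal to obtain a
# symmetric measure of mutual confusability. Assumes `cm` is a 6x6 row-normalized
# confusion matrix with rows = actual emotion and columns = predicted emotion.
import numpy as np

def fold_confusions(cm: np.ndarray) -> np.ndarray:
    folded = (cm + cm.T) / 2.0   # pool confusions regardless of direction
    return np.tril(folded)       # keep the lower triangle for display

# Pairs with the largest off-diagonal values (e.g., joy/disgust, anger/sadness)
# are the most mutually confusable.
```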
Figure 8
 
Confusion matrices for the Bayesian classifier, including information from the first eight fixations from trials with photographs showing the highest emotion level (left column) and trials with neutral emotion photographs (right column). The top row contains full confusion matrices with bold type indicating significant differences from expectations of chance (16.67%), conservatively Bonferroni corrected for 36 comparisons. To facilitate visual inspection, cell values are redundantly color coded for magnitude within each table. The bottom row removes bold formatting and folds each table along its identity diagonal to depict mutual confusability regardless of whether a pair of classifications is actual and predicted or predicted and actual, respectively. Boxed values are highlighted and discussed in the text.
Experiment 1 summary
Participants moved their eyes in distinctive patterns for each type of emotional image. These patterns remained (although weaker) for neutral images in each emotion's block, revealing that these patterns were at least partially goal-directed. The pattern of fixations alone could predict the type of image being sought (even if it was not present) at above chance levels in a naive Bayesian classifier. 
Participants preferentially fixate certain regions that they, at least implicitly, think are more diagnostic for recognition of different types of emotions. Experiment 2 verifies that the two broadest differentiable regions—the eyes and the mouth—do indeed differ in how well they signal particular emotions. 
Experiment 2: Varying diagnostic information of faces alters performance
In order to verify whether different areas of our face stimuli contain more or less diagnostic information for each emotion, we asked a new set of participants to make similar judgments of the emotional content of faces, but we occluded information either from the eye region or the mouth region. We predicted that for emotion judgments for which participants in Experiment 1 fixated the eye region at relatively high rates (i.e., anger and shame), occluding that region would decrease emotion detection accuracy. In contrast, for emotions for which the mouth region was particularly highly fixated (i.e., joy and disgust), occluding that region would decrease emotion detection accuracy. 
Methods
Participants
Ten college-aged participants completed Experiment 2. All participants gave consent and received course credit for their participation. 
Stimuli
Stimuli were identical to those of Experiment 1 with the following exceptions: For emotional faces, only one emotional morph was used per condition. Anger, disgust, fear, and sadness consisted of 40% emotional intensity morphs, whereas joy consisted of only 20% and shame of 60%. We determined the specific emotional intensity per emotion during piloting in order to ensure participants could perform above chance but not at ceiling (∼70% correct). The values used are consistent with the performance observed in Experiment 1 (see Figure 3). Additionally, all face stimuli had a black box (8.3° × 2.8°) covering either the eyes or the mouth. 
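The occlusion manipulation amounts to painting an opaque rectangle over the eye or mouth region of each stimulus. The sketch below converts the stated 8.3° × 2.8° box to pixels at the Experiment 2 display resolution; the feature-center coordinates and file names are hypothetical.

```python
# Sketch of the occlusion manipulation: draw an opaque black rectangle over either
# the eye or the mouth region. The box size below (8.3 deg x 2.8 deg at ~29 px/deg)
# follows the text; the feature-center coordinates are illustrative assumptions.
from PIL import Image, ImageDraw

BOX_W, BOX_H = 240, 81                                       # ~8.3 x 2.8 deg at 28.97 px/deg
REGION_CENTERS = {"eyes": (512, 330), "mouth": (512, 520)}   # hypothetical centers

def occlude(face_path, region, out_path):
    face = Image.open(face_path).convert("L")
    cx, cy = REGION_CENTERS[region]
    draw = ImageDraw.Draw(face)
    draw.rectangle([cx - BOX_W // 2, cy - BOX_H // 2,
                    cx + BOX_W // 2, cy + BOX_H // 2], fill=0)
    face.save(out_path)

occlude("id03_joy_20.png", "mouth", "id03_joy_20_mouthcov.png")
```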
Apparatus
All of the stimuli were created and displayed using MATLAB with the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997) on an Intel Macintosh running OS X 10.6. All stimuli were displayed on a 17-in. ViewSonic E70fB CRT monitor (1024 × 768 resolution, 75 Hz, 28.97 pixels/° of visual angle). The viewing distance was approximately 56 cm. 
Procedure
This experiment consisted of a total of 72 trials presented as six blocks (one block per emotion type) shown in random order, repeated twice for a total of 144 trials per participant. Each block consisted of 12 trials: six neutral and six emotional face trials presented in randomized order. At the beginning of each trial, participants saw a fixation cross for 1 s, followed by a face presented briefly for 100 ms. A black box covered the eyes on half of the trials and the mouth on the other half (see Figure 9). Participants then judged whether the photo depicted a specific emotion or not (e.g., angry or not angry). 
Figure 9
 
For all trials, participants were briefly shown a face with either the eyes or mouth occluded. The task was to judge whether the face had a particular emotion or not. (A) An example of a joyful face with the eyes covered; (B) an example of the same face but with the mouth covered.
Results
We examined whether or not overall performance differed according to whether the mouth (i.e., eyes covered by black box) or eyes (i.e., mouth covered by black box) were visible. For joy, we found performance was significantly better when the mouth was visible (M = 63.3%) relative to when it was covered (M = 51.7%), t(9) = 2.81, p = 0.02. A similar pattern was observed for disgust (M = 75.8%, M = 60.1%), t(9) = 3.38, p < 0.01. Conversely, we found performance was significantly better for anger when the eyes were visible (M = 69.2%) compared to when they were covered (M = 60.8%), t(9) = 3.35, p < 0.01. A similar trend was observed for shame (M = 81.7%, M = 68.3%), t(9) = 1.81, p = 0.10. We failed to find any significant differences for fear (M = 65%, M = 67.5%) and sadness (M = 71.7%, M = 71.7%). 
Discussion
Emotion judgment performance depended on which regions of the face were covered in a manner consistent with the fixation patterns found in Experiment 1. Performance was best for joy and disgust when the eyes were covered and the mouth was visible, consistent with the eye fixation rates we observed in Experiment 1 in which observers tended to fixate more at the upper lip and less at the eyes relative to other emotions. Figure 9 provides a strong illustration of this effect: Joy is easy to detect in the left face but harder to detect in the right face. We failed to find any meaningful differences for fearful faces, which is consistent with the diagnostic regions identified in Experiment 1; the eyes and upper lip are not differentially important to identify fear. For anger, performance was significantly better when the eyes were visible compared to when they were covered. A similar trend was also observed for shameful faces. Both of these effects are consistent with the diagnostic areas identified in Experiment 1. We did not see any significant differences for detecting sadness when covering the mouth versus the eyes even though Experiment 1 did suggest different levels of diagnosticity for those regions. Overall, five out of the six emotion types showed results consistent with the predictions given the diagnostic areas of emotional faces identified in Experiment 1
General discussion
Perceiving emotion expressed in a face is associated with characteristic patterns of attention, both when that emotion is visible and when it is merely anticipated. Although previous research has segmented parts of the face into three or four distinct regions (see Aviezer et al., 2008; Buchan, Paré, & Munhall, 2007; Everdell et al., 2007; Henderson, Falk, Minut, Dyer, & Mahadevan, 2000), the present study offers a more detailed analysis using 21 defined regions. Our results show that, of these regions, the five main facial regions (eyes, upper nose, lower nose, upper lip, nasion) accounted for 88.03% of all fixations; any other facial region accounted for at most 3% of fixations. These patterns suggest that these five facial regions may be the most critical for emotional recognition within faces. 
Despite the dominance of these five regions, there were consistent differences in fixation patterns when seeking different emotional cues within a face. Figure 5 summarizes the regions that were highly fixated for each emotion (relative to the other emotions) and offers a comparison of these patterns to past work exploring which regions are most diagnostic for evaluating a given emotion (see Figure 5; Smith et al., 2005). 
  •  
    For joy, participants fixated the least at the eyes and spent the most time fixating at the upper lip. This is likely driven by the importance of the upper lip in the smile, the most salient facial feature of joy (see Figure 5). Importantly, these fixation differences were significant across emotional and neutral stimuli, suggesting a particularly strong goal-driven strategy for perceiving joy. For emotional faces, there also appeared to be a marginal difference in which the upper nose was fixated less frequently. These differences were primarily driven by the first few fixations.
  •  
    For disgust, participants fixated more often at the upper lip and less often at the eyes. These differences are likely due to the importance of the furrowing of the nose and mouth when making the disgust facial expression, thus making the upper lip more salient. These patterns were primarily driven by the first few fixations.
  •  
    For fear, participants fixated more at the eyes and relatively less at the nasion (see Figure 5). There did not appear to be any significant differences relative to other emotions in fixations at the upper nose, lower nose, and upper lip. These patterns were fairly consistent across time for the nasion although the eyes became increasingly fixated over time.
  •  
    For anger, participants fixated most at the eyes and least at the upper lip. When emotion was not present, participants fixated significantly more at the nasion as well. There did not appear to be any significant fixation differences at the upper and lower nose (see Figure 5). These results are consistent with previous findings suggesting the diagnostic value of the eyes and nasion for detecting anger (Smith et al., 2005). These patterns were consistent across time.
  •  
    For sadness, participants fixated most at the eyes. In contrast, fixation time was significantly lower at the upper lip. These fixation patterns were significant across both emotional and neutral stimuli for both the eyes and upper lip, suggesting this pattern may be driven by a goal-driven strategy. These patterns were consistent across time.
  •  
    For shame, participants fixated to a greater extent at the eyes. This pattern was most pronounced after the first fixation. For all other regions, there did not appear to be any significant differences (see Figure 5).
Across all blocks, it is interesting that there were no significant differences found in fixation time spent at the lower nose, whether for emotional or neutral faces; it appears to serve as a central fixation point across a face. Another finding in our data is that dynamic differences are apparent even within the first fixation, for example, in the much greater percentage of fixations to the eye region for anger relative to joyful expressions. This provides further support for a goal-driven strategy during emotional recognition of faces because such differences were typically maintained across emotional and neutral stimuli. The values and differences within many regions across emotion type were larger and more pronounced while viewing emotional versus neutral stimuli, still suggesting an important role for stimulus-driven factors on eye movements. 
Gaze patterns were particularly variable across emotions in the first few fixations and became less variable for the remainder of the viewing time. The upper nose was fixated to the greatest extent as the first fixation regardless of emotion type, and then fixation rates remained relatively variable depending on the emotion type for the remainder of the trial. In contrast, for the lower nose, the first fixation demonstrated variability across different emotion types but was a reliable target at the second fixation regardless of emotion. For all emotions, the upper lip was consistently a weaker target at the first fixation and fixated most during the second fixation. There was no general fixation pattern at the nasion, demonstrating wide variability across emotions over time. We also observed a significant left dominance for the initial fixation, regardless of stimulus type. 
Although participants made an average of 8.3 fixations per trial, the prediction rates for the Bayesian classifier showed that the first two fixation locations were most predictive of the type of emotion sought by the observer. This result is consistent with previous work showing that face recognition can be completed after only one to two eye movements (Hsiao & Cottrell, 2008) although the number of fixations needed to recognize emotional content may be greater. 
Building on the diagnostic regions of the emotional faces identified in Experiment 1, Experiment 2 offered a direct evaluation of whether those regions were important by covering either the mouth or the eyes during an emotional judgment task. Consistent with the predictions of Experiment 1, emotion judgment performance was impaired when the mouth was covered for emotions for which the mouth was more diagnostic (joy, disgust) and when the eyes were covered for emotions for which the eyes were more diagnostic (anger, shame). Although the present data cannot directly confirm that eye movements to these more diagnostic regions helped performance, they do show that attentional strategies were aligned with locations that carry diagnostic information. 
Conclusion
When asked to evaluate the presence of a specific emotion within a face, participants focused on a common set of face regions but also used emotion-specific eye-movement strategies in both emotional and neutral faces. These results are consistent with the idea that focusing attention on certain diagnostic regions is beneficial for emotion processing and that these strategies may be driven by both stimulus-driven and goal-driven factors. The evidence for goal-driven strategies is consistent with previous research suggesting that the goals and characteristics of the perceiver may influence how eye movements are deployed during facial emotion recognition (Jack et al., 2009; Perlman et al., 2009). Face and emotion recognition are even more broadly affected by a face's gender (Wright & Sladden, 2003), race or culture (Jack et al., 2009; O'Toole et al., 1994), and individual facial morphology (Oosterhof & Todorov, 2009). These effects likely interact with the goal-driven factors observed in the current study, and we hope that future research explores such interactions. 
The presence of a goal-driven strategy during emotional recognition has numerous practical implications, specifically to individuals with disorders that may result in social deficits. Patients with certain disorders, such as autism (Pelphrey et al., 2002; Spezio, Adolphs, et al., 2007; Spezio, Huang, Castelli, & Adolphs, 2007), schizophrenia (Loughland, Williams, & Gordon, 2002), social phobias (Horley, Williams, Gonsalvez, & Gordon, 2004) and Alzheimer's disease (Hargrave, Maddock, & Stone, 2002; Ogrocki, Hills, & Strauss, 2000), demonstrate eye-movement patterns that are remarkably different from normal participants, and these anomalous patterns may impair their ability to correctly identify the emotional valence of faces. These individuals typically focus longer at nondiagnostic regions (such as the brow or cheeks) relative to diagnostic regions of the face (eyes, mouth, nose) during emotional recognition. These abnormal eye-movement patterns contribute to social or emotional deficits that accompany abnormal face recognition. By comparing such abnormal eye-movement patterns to our results, it may be possible to identify ways in which altered eye-movement patterns could improve emotion recognition. 
Acknowledgments
We thank Trixie Lipke for assistance with data collection and Sumeeth Jonathan for programming and help with movie creation. 
Commercial relationships: none. 
Corresponding author: Mark W. Schurgin. 
Email: maschurgin@jhu.edu. 
Address: Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA. 
References
Adolphs R. Gosselin F. Buchanan T. W. Tranel D. Schyns P. Damasio A. R. (2005). A mechanism for impaired fear recognition after amygdala damage. Nature, 433 (7021), 68–72. [CrossRef] [PubMed]
Adolphs R. Tranel D. Damasio H. Damasio A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–672. [CrossRef] [PubMed]
Aviezer H. Hassin R. R. Ryan J. Grady C. Susskind J. Anderson A. Bentin S. (2008). Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychological Science, 19 (7), 724–732. [CrossRef] [PubMed]
Baron-Cohen S. Wheelwright S. (2004). The empathy quotient: An investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences. Journal of Autism and Developmental Disorders, 34 (2), 163–175. [CrossRef] [PubMed]
Beaupre M. G. Hess U. (2005). Cross-cultural emotion recognition among Canadian ethnic groups. Journal of Cross-Cultural Psychology, 36 (3), 355–370. [CrossRef]
Blair R. J. R. (2005). Responding to the emotions of others: Dissociating forms of empathy through the study of typical and psychiatric populations. Consciousness and Cognition, 14 (4), 698–718. [CrossRef] [PubMed]
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 443–446. [CrossRef] [PubMed]
Buchan J. N. Paré M. Munhall K. G. (2007). Spatial statistics of gaze fixations during dynamic face processing. Social Neuroscience, 2 (1), 1–13. [CrossRef] [PubMed]
Darwin C. (1872). The expression of emotions in man and animals. New York: Philosophical Library.
Ekman P. Friesen W. V. (1975). Unmasking the face. Englewood Cliffs, NJ: Prentice Hall.
Ekman P. Friesen W. V. (1978). The Facial Action Coding System (FACS): A technique for the measurement of facial action. Palo Alto, CA: Consulting Psychologists Press.
Elfenbein H. A. Ambady N. (2002). Is there an in-group advantage in emotion recognition? Psychological Bulletin, 128 (2), 243–249.
Everdell I. T. Marsh H. Yurick M. D. Munhall K. G. Paré M. (2007). Gaze behaviour in audiovisual speech perception: Asymmetrical distribution of face-directed fixations. Perception, 36, 1535–1545. [CrossRef] [PubMed]
Gosselin F. Schyns P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41 (17), 2261–2271. [CrossRef] [PubMed]
Hargrave R. Maddock R. J. Stone V. (2002). Impaired recognition of facial expressions of emotion in Alzheimer's disease. Journal of Neuropsychiatry and Clinical Neurosciences, 14, 64–71. [CrossRef] [PubMed]
Hejmadi A. Davidson R. J. Rozin P. (2000). Exploring Hindu Indian emotion expressions: Evidence for accurate recognition by Americans and Indians. Psychological Science, 11 (3), 183–187. [CrossRef] [PubMed]
Henderson J. M. Falk R. Minut S. Dyer F. C. Mahadevan S. (2000). Gaze control for face learning and recognition by humans and machines. Michigan State University Eye Movement Laboratory Technical Report, 4, 1–14.
Henderson J. M. Williams C. C. Falk R. J. (2005). Eye movements are functional during face learning. Memory & Cognition, 33 (1), 98–106. [CrossRef] [PubMed]
Horley K. Williams L. M. Gonsalvez C. Gordon E. (2004). Face to face: Visual scanpath evidence for abnormal processing of facial expressions in social phobia. Psychiatry Research, 127, 43–53. [CrossRef] [PubMed]
Hsiao J. H. W. Cottrell G. (2008). Two fixations suffice in face recognition. Psychological Science, 19 (10), 998–1006. [CrossRef] [PubMed]
Jack R. E. Blais C. Scheepers C. Schyns P. G. Caldara R. (2009). Cultural confusions show that facial expressions are not universal. Current Biology, 19, 1–6. [CrossRef] [PubMed]
Keltner D. Buswell B. N. (1997). Embarrassment: Its distinct form and appeasement functions. Psychological Bulletin, 122 (3), 250–270. [CrossRef] [PubMed]
Kowler E. Anderson E. Dosher B. Blaser E. (1995). The role of attention in the programming of saccades. Vision Research, 35 (13), 1897–1916. [CrossRef] [PubMed]
Loughland C. M. Williams L. M. Gordon E. (2002). Schizophrenia and affective disorder show different visual scanning behaviour for faces: A trait versus state-based distinction? Biological Psychiatry, 52, 338–348. [CrossRef] [PubMed]
Marsh A. A. Kozak M. N. Ambady N. (2007). Accurate identification of fear facial expressions predicts prosocial behavior. Emotion, 7 (2), 239. [CrossRef] [PubMed]
Nahm F. K. D. Perret A. Amaral D. G. Albright T. D. (1997). How do monkeys look at faces? Journal of Cognitive Neuroscience, 9 (5), 611–623. [CrossRef] [PubMed]
Ogrocki P. K. Hills A. C. Strauss M. E. (2000). Visual exploration of facial emotion by healthy older adults and patients with Alzheimer disease. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 13, 271–278. [PubMed]
Oosterhof N. N. Todorov A. (2009). Shared perceptual basis of emotional expressions and trustworthiness impressions from faces. Emotion, 9 (1), 128. [CrossRef] [PubMed]
O'Toole A. J. Deffenbacher K. A. Valentin D. Abdi H. (1994). Structural aspects of face recognition and the other-race effect. Memory & Cognition, 22 (2), 208–224. [CrossRef] [PubMed]
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. [CrossRef] [PubMed]
Pelphrey K. A. Sasson N. J. Reznick J. Paul G. Goldman B. D. Piven J. (2002). Visual scanning of faces in autism. Journal of Autism and Developmental Disorders, 32 (4), 249–261.
Perlman S. B. Morris J. P. Vander Wyk B. C. Green S. R. Doyle J. L. Pelphrey K. A. (2009). Individual differences in personality shape how people look at faces. PLoS ONE, 4 (6), e5952. [CrossRef] [PubMed]
Pflugshaupt T. Mosimann U. P. Schmitt W. J. von Wartburg R. Wurtz P. Luthi M. Müri R. M. (2007). To look or not to look at threat? Scanpath differences within a group of spider phobics. Journal of Anxiety Disorders, 21 (3), 353–366. [CrossRef] [PubMed]
Sasson N. Tsuchiya N. Hurley R. Couture S. M. Penn D. L. Adolphs R. Piven J. (2007). Orienting to social stimuli differentiates social cognitive impairment in autism and schizophrenia. Neuropsychologia, 45, 2580–2588. [CrossRef] [PubMed]
Schyns P. G. Petro L. S. Smith M. L. (2009). Transmission of facial expressions of emotion co-evolved with their efficient decoding in the brain: Behavioral and brain evidence. PLoS ONE, 4 (5), e5625. [CrossRef] [PubMed]
Segerstrom S. C. (2001). Optimism and attentional bias for negative and positive stimuli. Personality and Social Psychology Bulletin, 27, 1334–1343. [CrossRef]
Smith M. L. Cottrell G. W. Gosselin F. Schyns P. G. (2005). Transmitting and decoding facial expressions. Psychological Science, 16 (3), 184–189. [CrossRef] [PubMed]
Spezio M. L. Adolphs R. Hurley R. S. E. Piven J. (2007). Analysis of face gaze in autism using “Bubbles.” Neuropsychologia, 45, 144–151. [CrossRef] [PubMed]
Spezio M. L. Huang P.-Y. S. Castelli F. Adolphs R. (2007). Amygdala damage impairs eye contact during conversations with real people. The Journal of Neuroscience, 27 (15), 3994–3997. [CrossRef] [PubMed]
Susskind J. M. Lee D. H. Cusi A. Feiman R. Grabski W. Anderson A. K. (2008). Expressing fear enhances sensory acquisition. Nature Neuroscience, 11 (7), 843–850. [CrossRef] [PubMed]
Tracy J. L. Robins R. W. (2004). Show your pride: Evidence for a discrete emotion expression. Psychological Science, 15, 194–197. [CrossRef] [PubMed]
Walker-Smith G. J. Gale A. G. Findlay J. M. (1977). Eye movement strategies involved in face perception. Perception, 6 (3), 313–326. [CrossRef] [PubMed]
Wright D. B. Sladden B. (2003). An own gender bias and the importance of hair in face recognition. Acta Psychologica, 114 (1), 101–114. [CrossRef] [PubMed]
Zhao W. Chellappa R. Phillips P. J. Rosenfeld A. (2003). Face recognition: A literature survey. ACM Computing Surveys, 35 (4), 399–458. [CrossRef]
Figure 1. Illustration of face stimuli (A) and experimental paradigm (B). Participants were presented with blocks of trials containing half neutral faces (0% intensity) and half emotional faces (varying from 20%–60% intensity) and were asked to judge whether each face had any amount of a particular emotion present (e.g., fear).
Figure 2. Illustration of 21 facial regions of interest (ROIs). We identified five main facial ROIs—eyes (green), upper nose (blue), lower nose (orange), upper lip (red), and nasion (purple)—that accounted for more than 88% of all fixations.
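For illustration only, a minimal way to assign raw fixation coordinates to named ROIs is a point-in-box test; the bounding boxes below are placeholders, not the actual ROI boundaries used in the study.

# Illustrative sketch: map a fixation (x, y) in image pixel coordinates to an ROI.
ROI_BOXES = {
    "eyes":       (60, 80, 196, 110),    # placeholder (x0, y0, x1, y1) values
    "upper_nose": (100, 110, 156, 140),
    "lower_nose": (100, 140, 156, 170),
    "upper_lip":  (95, 170, 161, 195),
    "nasion":     (110, 60, 146, 80),
}

def roi_of(x, y, boxes=ROI_BOXES):
    for name, (x0, y0, x1, y1) in boxes.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"   # fixations outside the five main ROIs (under 12% in the study)

print(roi_of(120, 125))  # -> "upper_nose"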
Figure 3. Percentage of trials rated as “emotional” as a function of emotion type and intensity. These data demonstrate the varying difficulty of the task, which was meant to encourage selection of the most diagnostic regions of the face. Across all emotions, performance did not rise above chance until 40% intensity and was near ceiling for 60% intensity images.
Figure 4. (A) For each emotional face, fixation time spent within each main ROI. (B) For each neutral face, fixation time spent within each main ROI. * Designates t test p < 0.05 relative to the mean for each emotion.
Figure 5. Top row: Image subtractions between the emotional and neutral versions of a particular face from the image set used in Experiment 1, revealing areas of maximal diagnostic information (white) for judging presence of emotion in the face. Middle row: Emotional fixation rates for each of the top five regions across the six emotion conditions, relative to the average rate of region fixation collapsed across all emotional face trials. Bottom row: Diagnostic information for judgment of each emotion type using the “Bubbles” technique (adapted with permission from Smith et al., 2005).
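A minimal sketch of the image-subtraction idea in the top row of Figure 5, assuming grayscale stimulus files with placeholder names (not the actual stimuli): the absolute pixel difference between an emotional face and its neutral version highlights where the two images differ most.

# Illustrative only: per-pixel difference map between emotional and neutral faces.
import numpy as np
from PIL import Image

neutral = np.asarray(Image.open("face_neutral.png").convert("L"), dtype=float)
emotional = np.asarray(Image.open("face_joy_60.png").convert("L"), dtype=float)

diff = np.abs(emotional - neutral)        # larger values = more change with emotion
diff = 255 * diff / diff.max()            # rescale for display (brighter = more diagnostic)
Image.fromarray(diff.astype(np.uint8)).save("diagnostic_map_joy.png")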
Figure 6. Fixation distributions for the first three fixations on emotional (20%–60%) face image trials for one face identity. As participants made more fixations, distinct patterns emerged across emotions in which regions were fixated most. At the first fixation, the upper nose was a particularly popular fixation location, although there was still considerable variability at that region across emotions. With additional fixations, a general pattern emerged of increased fixation at the upper lip for joy and disgust and increased fixation at the eyes for anger, sadness, and shame.
Figure 7. Classification accuracy for each condition (emotional face trials, neutral face trials, both trial types, and chance prediction). Classification accuracy for emotional trials was highest, consistent with the stronger fixation patterns observed in our other analyses. Neutral trial classification accuracy was lower but still well above chance. Consistent with the observed fixation patterns of humans, the diagnostic value of fixation data was primarily reached in the first two fixations.
Figure 8. Confusion matrices for the Bayesian classifier, including information from the first eight fixations from trials with photographs showing the highest emotion level (left column) and trials with neutral emotion photographs (right column). The top row contains full confusion matrices with bold type indicating significant differences from expectations of chance (16.67%), conservatively Bonferroni corrected for 36 comparisons. To facilitate visual inspection, cell values are redundantly color coded for magnitude within each table. The bottom row removes bold formatting and folds each table along its identity diagonal to depict mutual confusability regardless of whether a pair of classifications is actual and predicted or predicted and actual, respectively. Boxed values are highlighted and discussed in the text.
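The "folding" operation in the bottom row can be sketched as follows (toy matrix of random values; whether folded cells are summed or averaged is an assumption here, and the emotion ordering is illustrative).

# Illustrative only: fold a confusion matrix along its diagonal so each cell
# reflects mutual confusability between a pair of emotions, regardless of
# which was actual and which was predicted.
import numpy as np

emotions = ["joy", "anger", "fear", "sadness", "shame", "disgust"]
conf = np.random.default_rng(0).random((6, 6))   # stand-in for real percentages

folded = conf + conf.T                            # sum the two confusion directions
folded[np.triu_indices(6, k=1)] = np.nan          # keep the lower triangle only
np.fill_diagonal(folded, np.diag(conf))           # diagonal (correct responses) is not doubled

for label, row in zip(emotions, folded):
    print(label, np.round(row, 2))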
Figure 9. For all trials, participants were briefly shown a face with either the eyes or mouth occluded. The task was to judge whether the face had a particular emotion or not. (A) An example of a joyful face with the eyes covered; (B) an example of the same face but with the mouth covered.
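A small sketch of this occlusion manipulation, assuming placeholder file names and hypothetical box coordinates rather than the actual stimulus values:

# Illustrative only: cover either the eye region or the mouth region of a face
# image with a uniform gray patch before presenting it.
from PIL import Image, ImageDraw

EYE_BOX = (55, 75, 200, 115)      # hypothetical (x0, y0, x1, y1) in pixels
MOUTH_BOX = (85, 160, 170, 205)

def occlude(path, region="eyes"):
    img = Image.open(path).convert("RGB")
    box = EYE_BOX if region == "eyes" else MOUTH_BOX
    ImageDraw.Draw(img).rectangle(box, fill=(128, 128, 128))  # gray patch over the region
    return img

occlude("face_joy_60.png", "mouth").save("face_joy_60_mouth_covered.png")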
Supplementary Material
Supplementary Figure S1
Supplementary Figure S2
Supplementary Video S1
Supplementary Video S2
Supplementary Video S3
Supplementary Video S4
Supplementary Video S5
Supplementary Video S6