Research Article  |   March 2009
Visual scanning in the recognition of facial affect: Is there an observer sex difference?
Journal of Vision March 2009, Vol.9, 11. doi:10.1167/9.3.11
Suzane Vassallo, Sian L. Cooper, Jacinta M. Douglas. Visual scanning in the recognition of facial affect: Is there an observer sex difference? Journal of Vision 2009;9(3):11. doi:10.1167/9.3.11.

Abstract

This investigation assessed whether differences exist in the way males and females overtly orient their visual attention to salient facial features while viewing static emotional facial expressions. Eye movements were recorded while fifty healthy participants (23 males, 27 females) viewed a series of six universal facial expressions. Groups were compared with respect to accuracy and reaction time in emotional labeling. The number and duration of foveal fixations to four predefined facial areas of interest (AOIs) were also recorded: the right eye, left eye, nose, and mouth. There were no significant group differences with respect to accuracy (p = 0.997), though females were significantly faster than males in correctly identifying expressions (p = 0.047). Analysis of the visual scan path revealed that, while both groups looked more frequently and for longer at the eye region, males spent significantly more time viewing the nose and mouth. The duration and number of fixations made to the nose were significantly greater in males (p < 0.05). This study is the first to show reaction time differences between the sexes across a range of universal emotions. Further, this is the first work to suggest that the orienting of attention to the lower part of the face, especially the nose, differentiates the sexes.

Introduction
The ability to interpret emotion through facial expression plays an important role in interpersonal interactions. Evidence from several clinical groups indicates that reduced accuracy in decoding facial affect is associated with impaired social competence (Bornhofen & McDonald, 2008). For example, adults with traumatic brain injury (TBI) who perform poorly at interpreting facial expression demonstrate impaired social communication skills (Watts & Douglas, 2006) and reduced social participation (Knox & Douglas, 2009). Similar associations have been reported in the schizophrenia literature (Hooker & Park, 2002). In the normal adult population, the majority of research findings have supported the notion that females outperform males in their ability to accurately decode emotional facial expressions, regardless of whether the viewed faces are presented in static (Hall & Matsumoto, 2004; Kirouac & Doré, 1985; Nowicki & Hartigan, 1988; Scholten, Aleman, Montagne, & Kahn, 2005; Thayer & Johnsen, 2000) or dynamic form (Biele & Grabowska, 2006; Montagne, Kessels, Frigerio, de Haan, & Perrett, 2005). However, a definitive female advantage has not been demonstrated across all investigations and all emotions. Further, there has been little or no investigation of the role that visual scanning may play in the sex differences that have been reported. Clarification of the potential contribution of visual attention to sex differences in the ability to interpret facial expression will assist in conceptualizing this important process.
Universal expressions of emotion include anger, contempt, sadness, happiness, surprise, disgust, and fear/anxiety (Matsumoto & Ekman, 2004). While some researchers have reported that females demonstrate greater accuracy overall in decoding facial affect (Nowicki & Hartigan, 1988), others have found that sex differences only exist for certain emotional expressions. For example, Mandal and Palchoudhury (1985) found that females were highly accurate in their perception of sad faces, while males were better at recognizing anger. This trend has also been noted in other studies (Biele & Grabowska, 2006; Montagne et al., 2005; Nowicki & Hartigan, 1988; Rotter & Rotter, 1988; Wagner, MacDonald, & Manstead, 1986), while Goos and Silverman (2002) reported that females were more sensitive to angry and sad expressions specifically when the poser was female. 
Not all published research, however, supports the notion that females are superior (i.e., more accurate) to males in facial affect recognition. Kirouac and Doré (1984) found no difference in accuracy when faces were presented at fixed durations ranging from 10 to 50 milliseconds (ms). Similarly, Grimshaw, Bulman-Fleming, and Ngo (2004) reported no sex differences for sad, happy, fearful, and angry stimuli presented for 50 ms. Others (Calvo & Lundqvist, 2008; Campbell et al., 2002; Palermo & Coltheart, 2004; Rahman, Wilson, & Abrahams, 2004; Sullivan, Ruffman, & Hutton, 2007) have instead used self-paced rather than fixed stimulus presentation and have shown no sex difference in accuracy for static emotional faces. Sexual orientation of the receiver (i.e., homosexuality versus heterosexuality) has not been found to affect accuracy scores (Rahman et al., 2004).
Rahman et al. (2004) reported no overall sex difference in accuracy in their large sample of participants (N = 240; 120 males) but noted a sex difference in the speed of identifying the emotion. Females were significantly faster than males at identifying happy, sad, and neutral faces, though these were the only expressions used in the study. In the same year, Palermo and Coltheart (2004) created a large normative database of facial affect processing by repeatedly showing seven different universal facial expressions of emotion (N = 336 stimuli) to 12 males and 12 females. There were no significant differences in relation to observer gender (or stimulus gender) when reaction time was recorded using a voice key. Although others (Calvo & Lundqvist, 2008) have also collected reaction time data for facial affect interpretation, they have not analyzed the findings specifically to explore gender differences.
Aside from Palermo and Coltheart (2004) and Rahman et al. (2004), who assessed response times as well as accuracy across emotional expressions, the literature has for the most part relied on accuracy as the variable by which the sexes are compared in relation to interpreting facial affect. Other measures are lacking. For example, no study has investigated whether a sex difference exists at the feature-extraction level when emotional faces are processed. That is, eye movements during stimulus viewing have not been recorded to determine whether males and females orient their attention differently when viewing facial affect. Miyahira, Morita, Yamaguchi, Morita, and Maeda (2000) and Miyahira, Morita, Yamaguchi, Nonaka, and Maeda (2000) found sex differences in exploratory eye movements made to different schematic pictures but did not assess pictures of facial affect. Saccadic eye movements shift foveal fixation to enable high-acuity processing at a point of interest, such as the eyes or mouth on a face, where important nonverbal information can be extracted. Eye movements are a direct measure of the overt orienting of visual attention (Posner, 1980) and can be assessed noninvasively so as not to disrupt normal viewing. The manner in which a face is viewed can be assessed in relation to the frequency and duration of fixation to an area of interest.
The aim of this study was to explore sex differences across a range of universal emotions by recording and analyzing a range of output measures. Differences between the groups were explored with respect to (i) accuracy in the labeling of the static emotional facial expression, (ii) time taken to respond to each correctly labeled emotional expression (i.e., reaction time), and (iii) the duration and (iv) number of ocular fixations to predetermined salient facial features. It was hypothesized that females would be more accurate than males in their decoding of facial affect. Given recent findings (Rahman et al., 2004), it was also hypothesized that females would respond faster than males across a range of emotions when correctly identifying the facial expression. Further, it was postulated that sex differences in accuracy and/or reaction time could be underpinned by differences in the way that males and females attend to salient facial features. 
Methods
Participants
Fifty participants took part in this study (23 males, 27 females). The groups did not differ significantly with respect to years of education (t = −2.012, p = 0.06), years living in Australia (t = 0.388, p = 0.701), or handedness (t = −0.067, p = 0.947). All participants were moderately to extremely right-handed when assessed using the Bryden Handedness scale (Bryden, 1977). Males were significantly older than their female counterparts, by a mean of 2.72 years (t = −2.135, p = 0.04; Table 1). It has been noted that accuracy in emotional facial recognition worsens and visual scanning style alters beyond the age of 61 years (Sullivan et al., 2007). It was therefore decided that such a small, although significant, difference in age between the groups would not have an impact upon the variables being assessed. Participants were mostly undergraduate students recruited via in-class presentations or word-of-mouth referral. All participants had no less than 6/12 vision (corrected or uncorrected) in either eye and were free from neurological disorder. None had a first-degree biological relative with schizophrenia (Loughland, Williams, & Harris, 2004).
Table 1

Participant characteristics

| | Age, years# | Education, years | Years living in Australia | Handedness* |
|---|---|---|---|---|
| Male (n = 23) | 25.57 (5.45) | 18.30 (4.34) | 21.91 (7.98) | 0.87 (0.42) |
| Female (n = 27) | 22.85 (2.96) | 16.41 (1.37) | 22.59 (2.81) | 0.87 (0.35) |

Note: Values provided as mean (SD). #p = 0.04. *−1.00 is extremely left-handed, +1.00 is extremely right-handed (Bryden, 1977).

Stimuli
Pictures of facial expressions were taken from the set of Japanese and Caucasian Facial Expressions of Emotion (JACFEE; Matsumoto & Ekman, 2004). The selected faces displayed a happy, sad, angry, disgusted, anxious/fearful, or surprised expression. While the entire stimulus set contains 56 color photos, only the 3 items per emotion with inter-rater reliability greater than 75% (Biehl et al., 1997) were selected. The resultant stimulus set comprised a total of 18 pictures from 9 different male and 9 different female posers; thus, to prevent habituation effects, each expression was posed by a different person. Each stimulus was resized to a 1024 × 714 pixel JPEG file using Adobe Photoshop Elements (version 2.0) to enable display on the eye tracker monitor.
Apparatus
The Tobii 1750 binocular infrared eye tracker (Tobii Technology, Stockholm, Sweden) recorded eye position and response times. A double monitor, double computer (HP Compaq Pentium 4) configuration was used. Participants viewed each stimulus on an integrated 1280 × 1024 pixel TFT eye tracker monitor while the examiner was positioned before the second monitor to ensure accurate data were acquired. Participants were seated 60 centimeters (cm) from the eye tracker monitor so that the visual angle of the stimulus appearing on the screen was 32° × 24° (W × H). Calibration for all participants was completed using a 16-point reference grid (see Dyer, Found, & Rogers, 2006 for picture) prior to viewing the facial set. Calibration required participants to follow a bouncing ball, which paused at 16 unpredictable positions in a 4 × 4 configuration. The Tobii 1750 is accurate to within 0.5° of visual angle when head movement is minimal (Tobii Technology, 2006). 
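The relation between on-screen size, viewing distance, and visual angle reported above can be checked with a direct calculation. The sketch below is a minimal Python illustration; the stimulus width of 34.4 cm is not stated in the text but is back-computed from the reported 32° horizontal extent at 60 cm.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by an object of a given size viewed
    from a given distance: theta = 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# At a 60 cm viewing distance, a stimulus ~34.4 cm wide subtends
# ~32 degrees horizontally, consistent with the reported 32 x 24 degrees.
print(round(visual_angle_deg(34.4, 60), 1))  # 32.0
```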
ClearView 2.5.1 software was used to collect response time data, as well as the number and duration of fixations to predefined areas of interest (AOIs). The AOIs were manually determined following qualitative inspection of the visual scan path. As participants predominantly directed their attention to the eyes, nose, and mouth for each stimulus, these were the AOIs from which data were collected. The size of the AOIs for the eyes (poser's right eye and left eye) and nose was held constant for all 18 faces (visual angle: 4.2° × 2.4°). The eye AOIs also included the eyebrows. The mouth AOI varied according to the emotion displayed. For example, the mouth of a happy face was wider than that of a surprised face, the latter being more rounded in shape. As a result, the size of the mouth AOI differed for the happy (5° × 2.6°) and surprised (3.8° × 3.5°) faces but remained constant (4.8° × 2.3°) for angry, sad, anxious, and disgusted expressions. None of the AOIs overlapped. Subsequent analysis was undertaken by exporting data from ClearView to custom-written software.
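Conceptually, the per-AOI measures amount to counting fixations and summing dwell time within rectangular, non-overlapping regions. The sketch below is a hypothetical Python reconstruction of that bookkeeping, not ClearView's actual implementation; the AOI coordinates are illustrative placeholders.

```python
# Each AOI is a named rectangle (x_min, y_min, x_max, y_max) in pixels.
# Coordinates below are illustrative only, not those used in the study.
AOIS = {
    "right_eye": (300, 250, 440, 330),
    "left_eye": (580, 250, 720, 330),
    "nose": (440, 340, 580, 430),
    "mouth": (400, 460, 620, 550),
}

def summarize_fixations(fixations, aois=AOIS):
    """Return {aoi: (count, total_duration_ms)} for fixations given as
    (x, y, duration_ms) tuples. Fixations outside every AOI are ignored."""
    stats = {name: [0, 0.0] for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                stats[name][0] += 1
                stats[name][1] += dur
                break  # AOIs do not overlap, so stop at the first hit
    return {name: tuple(v) for name, v in stats.items()}
```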
Procedure
All testing took place in the Eye Movement Laboratory in the Department of Clinical Vision Sciences, La Trobe University. The examiner read the participant instructions from a script to ensure consistency. Following calibration, a brief example of the testing procedure was provided using three JACFEE faces, which were not used for data collection. This procedure ensured participant familiarity with the testing setup because, once the test had begun, the eye tracker could not be paused. Participants wearing contact lenses or glasses were permitted to wear them during testing as this does not interfere with eye tracking (Tobii Technology, 2006). 
The test procedure is shown in Figure 1. Participants pressed the spacebar on the computer keyboard in front of the eye tracker monitor to start the test. They were then presented with a black asterisk (which subtended a visual angle of ±0.75° at 60 cm) on a white background and instructed to fixate it prior to pressing the spacebar to advance to the next screen. The purpose of the asterisk was to ensure that all participants viewed the face from the same starting point (cf. Green, Uhlhaas, & Coltheart, 2005; Horley, Williams, Gonsalvez, & Gordon, 2004). The next stimulus displayed was an emotional facial expression. Participants were instructed to work as quickly as possible while taking the time they needed to accurately label the emotional expression. When they had decided the emotion shown, they pressed the spacebar and read aloud the emotion from the labels presented on the monitor. The emotion was recorded by the examiner on a recording sheet. The test continued until all 18 faces had been viewed and the participant then pressed the spacebar to finish the stimulus sequence. Each type of emotion was presented 3 times in random order and the stimulus sequence was not counterbalanced between subjects. On average, the total testing time did not exceed 5 minutes per participant. At the completion of testing, all subjects were thanked for their time and debriefed. They were shown examples of their eye tracking and informed about their accuracy. All procedures were approved by the Faculty of Health Sciences Human Ethics Committee, La Trobe University, and all procedures were completed in accordance with the Helsinki Declaration. 
Figure 1
 
Schematic representation of stimulus sequence.
Statistical analysis
Group differences were analyzed with respect to accuracy in labeling the facial expression, response time, and the number and duration of fixations to the AOIs. Response time was the time taken to recognize the emotion being shown; the time (in ms) from the initial appearance of the facial stimulus to the spacebar press that ended it was automatically recorded by the ClearView software. A fixation was regarded as an eye position remaining within a 50-pixel area for 100 ms or longer (Dyer et al., 2006).
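The fixation criterion above corresponds to a dispersion-based (I-DT style) filter. The following sketch is an illustrative Python reimplementation of the stated 50-pixel / 100-ms rule, not ClearView's actual algorithm; interpreting "a 50-pixel area" as a bounding box no wider than 50 pixels on either axis is an assumption.

```python
def detect_fixations(samples, max_disp=50.0, min_dur=100.0):
    """Group gaze samples (t_ms, x_px, y_px) into fixations using a
    dispersion criterion: a fixation is a run of samples whose bounding
    box stays within max_disp pixels for at least min_dur ms.
    Returns (onset_ms, duration_ms, centroid_x, centroid_y) tuples."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        xs, ys = [samples[i][1]], [samples[i][2]]
        j = i
        while j + 1 < n:
            x, y = samples[j + 1][1], samples[j + 1][2]
            if (max(xs + [x]) - min(xs + [x]) <= max_disp
                    and max(ys + [y]) - min(ys + [y]) <= max_disp):
                xs.append(x)
                ys.append(y)
                j += 1
            else:
                break
        duration = samples[j][0] - samples[i][0]
        if j > i and duration >= min_dur:
            fixations.append((samples[i][0], duration,
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations
```

For example, gaze samples at 20-ms intervals that dwell near one point and then jump to another yield two detected fixations.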
The number and duration of fixations reflect the time spent processing visual information at an AOI (Noton & Stark, 1971). Visual information is acquired during fixation, when it can be processed with high acuity by the fovea; no visual information is acquired during a saccade (Leigh & Zee, 1999). Taken together, these temporal parameters (i.e., the number and duration of fixations at each AOI) provide a sensitive measure of visual information processing and were thus used in the analysis. These measures have also been used by others in the analysis of visual scanning of faces (Henderson, Williams, & Falk, 2005; Loughland, Williams, & Gordon, 2002a; Pelphrey et al., 2002).
A 2 × 6 mixed ANOVA with sex (male, female) as the between-subjects factor and emotion (6 emotions) as the within-subjects factor was used to test for differences in accuracy scores and response time. A 2 (sex: male, female) × 4 (AOI: right eye, left eye, nose, mouth) mixed ANOVA was performed to determine differences in the orienting of visual attention (duration and number of fixations). Prior to running each analysis, data distributions were checked for the presence of outliers and violations of normality and homogeneity of variance assumptions of parametric analysis. SPSS (version 14.0) was used for statistical analysis and the α level was set at 0.05. 
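The partial η² effect sizes reported with these ANOVAs can be recovered from the F statistics and their degrees of freedom using the standard formula partial η² = (F · df1) / (F · df1 + df2). The short check below is a worked illustration, not taken from the study's analysis scripts.

```python
def partial_eta_squared(f: float, df1: int, df2: int) -> float:
    """Partial eta squared from an F statistic and its degrees of freedom:
    (F * df1) / (F * df1 + df2)."""
    return (f * df1) / (f * df1 + df2)

# Reproduces the values reported in the Results:
# main effect of emotion on accuracy, F(5, 240) = 29.175
print(round(partial_eta_squared(29.175, 5, 240), 3))  # 0.378
# main effect of sex on response time, F(1, 48) = 4.149
print(round(partial_eta_squared(4.149, 1, 48), 2))    # 0.08
```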
Results
Accuracy
There was no significant overall sex difference in accuracy (F(1, 48) = 0.00, p = 0.997; Table 2). A significant main effect of emotion (F(5, 240) = 29.175, p < 0.001), consistent with a medium effect size (partial η² = 0.378), but no significant interaction (F(5, 240) = 0.322, p = 0.899) was demonstrated. Pairwise comparisons demonstrated that significantly higher accuracy scores were obtained on the surprised expression when compared to all other expressions. In contrast, significantly lower accuracy scores were obtained on the anxious/fearful facial expression when compared with all other emotions. Of the remaining four emotions, happy produced significantly higher accuracy scores than all but the surprised expression. No participant labeled all 18 faces correctly, with total correct scores ranging from 12 to 17 for males (mean ± SD: 15.26 ± 1.35) and from 11 to 17 for females (15.26 ± 1.36).
Table 2

Mean (SD) total correct responses for each emotion. Score out of 3.

| | Angry | Anxious | Sad | Disgusted | Happy | Surprised |
|---|---|---|---|---|---|---|
| Male | 2.74 (0.62) | 1.74 (0.75) | 2.61 (0.66) | 2.35 (0.83) | 2.87 (0.34) | 2.96 (0.21) |
| Female | 2.70 (0.61) | 1.67 (0.78) | 2.52 (0.64) | 2.52 (0.64) | 2.85 (0.36) | 3.00 (0.00) |
| Total | 2.72 (0.61) | 1.70 (0.76) | 2.56 (0.64) | 2.44 (0.73) | 2.86 (0.35) | 2.98 (0.14) |
Response time
Response times for correctly labeled facial expressions were analyzed (Figure 2). This analysis revealed a significant main effect of sex (F(1, 48) = 4.149, p = 0.047), indicating that males took longer than females to interpret facial expressions across all emotions. The main effect of sex was consistent with a small effect (partial η² = 0.08). Planned comparisons of individual emotions revealed that this sex difference only reached significance for happy expressions (t = −2.36, p = 0.022), where females were significantly faster than males. Again, a significant main effect of emotion (F(5, 240) = 16.544, p ≤ 0.001), consistent with a medium effect size (partial η² = 0.256), and no interaction effect between the two factors (F(5, 240) = 0.325, p = 0.897) were demonstrated. Pairwise comparisons demonstrated that, overall, significantly faster response times were recorded for happy facial expressions. Response times for surprised expressions were significantly faster than those for the remaining four emotions (angry, anxious, sad, disgusted). The only other significant difference in response time between emotions was obtained on the comparison between sad and anxious, with sad yielding significantly faster responses.
Figure 2
 
Mean response time for each correctly labeled emotion. Error bars represent the SD. Significant main effects of sex and emotion were observed (*p < 0.05; **p < 0.001).
Areas of interest (AOIs)
Eye movement data (duration and number of fixations) were analyzed by pooling data across all emotions. This analysis was undertaken because we sought to determine whether the gender effect for reaction time was specific to any particular AOI, regardless of the type of emotion shown. Analysis of duration of fixation revealed a significant main effect of sex (F(1, 48) = 6.531, p = 0.015), consistent with a small effect (partial η² = 0.117). Planned comparisons demonstrated that males and females did not differ significantly with respect to time spent viewing the right eye (RE; t = −1.081, p = 0.285) or left eye (LE; t = −0.308, p = 0.759). However, males spent a significantly longer time viewing the nose (t = −2.642, p = 0.011; partial η² = 0.127) and mouth (t = −2.138, p = 0.038; partial η² = 0.087; Figure 3A). A significant main effect of AOI was also found (F(3, 144) = 9.098, p < 0.001; partial η² = 0.159), but there was no interaction (F(3, 144) = 0.724, p = 0.539). Pairwise comparisons revealed that significantly more time was spent fixating on the eyes (both right and left) than on the nose or mouth. There was no significant difference between duration of fixation on the left and right eyes, or between duration of fixation on the nose and mouth.
Figure 3
 
(A) Duration and (B) number of fixations to predefined AOIs with emotion collapsed (*p < 0.05, ^p = 0.05). RE = stimulus' right eye, LE = stimulus' left eye.
Review of the data distribution for the analysis of number of fixations revealed that one participant was an outlier (number of fixations to the LE > 2.5 SDs above the group mean). Consequently, this participant's data were removed for this analysis only. Similar to the findings noted for the duration of fixation, a main effect of AOI was recorded (F(3, 141) = 16.385, p < 0.001; partial η² = 0.259). Pairwise comparisons indicated that significantly more fixations were made to the eyes (both right and left) when compared to the nose or the mouth AOI. There was no significant difference in the number of fixations made to the RE when compared with the LE, and no difference between the nose and the mouth. In this analysis, the main effect of sex did not reach statistical significance (F(1, 47) = 2.727, p = 0.105). However, it is noteworthy that Figure 3B shows a pattern similar to that observed for duration (Figure 3A). Exploratory statistical analyses were performed to evaluate whether there were underlying significant differences in this pattern. Paralleling the results with respect to duration of fixation, males made significantly more fixations to the nose than their female counterparts (t = −2.593, p = 0.013; partial η² = 0.125). The differences between males and females in the number of fixations made to the mouth (t = −1.663, p = 0.103), the RE (t = −0.403, p = 0.689), and the LE (t = 0.034, p = 0.973) were not statistically significant.
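The 2.5-SD screening rule applied above is straightforward to express. The sketch below is an illustrative Python version; the one-sided "above the mean" criterion follows the text, while the function name and sample-SD choice are ours.

```python
def flag_high_outliers(values, criterion=2.5):
    """Indices of values more than `criterion` sample SDs above the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return [i for i, v in enumerate(values) if v - mean > criterion * sd]
```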
Taken together, the AOI data indicate that the nose and mouth were less frequently fixated upon overall, highlighting that orienting visual attention to the eyes was important in decoding the facial expression shown. There were no differences between males and females with respect to the duration or number of fixations to the upper part of the face (RE and LE). This was not the case for the fixation data from the mouth and in particular the nose. 
Discussion
This study extends the literature by investigating viewer sex differences across a range of universal facial expressions of emotion and across different parameters of measurement. We found that viewer sex did not significantly affect the accuracy of identifying the emotional expressions shown. Instead, the type of emotion appeared to have a stronger influence on accuracy of response, with positive emotions (happy, surprised) more accurately identified by both groups than negative ones (sad, angry, disgusted, anxious/fearful). Our results have shown a difference in processing speed between females and males in accurately labeling facial affect across a range of universal emotions. Furthermore, the difference in processing speed between the groups may be explained, at least to some degree, by the significantly greater duration and number of foveal fixations made by males to the lower part of the face, particularly the nose region. While males spent more time viewing each face, they showed an attentional affinity for a non-salient facial feature, which may have precluded them from being more efficient than their female counterparts. This work is the first to find this difference between the sexes in their visual scanning of facial emotion.
Our finding that accuracy was not affected by observer sex concurs with the findings of some authors (Calvo & Lundqvist, 2008; Grimshaw et al., 2004; Kirouac & Doré, 1984; Mandal & Palchoudhury, 1985; Palermo & Coltheart, 2004; Rahman et al., 2004), but not others, who note that females are significantly more accurate in their recognition of facial affect across a range of emotions (Hall & Matsumoto, 2004; Kirouac & Doré, 1985; Rotter & Rotter, 1988; Thayer & Johnsen, 2000). The reason for this variation in the literature is unclear. It has been suggested that procedural variables influence accuracy more than viewer sex (Grimshaw et al., 2004), though this has been challenged because sex differences are sometimes seen for both brief and long stimulus presentations (Hall & Matsumoto, 2004). Kirouac and Doré (1984), for example, reported no sex difference for fixed stimulus presentation times of 10 to 50 ms but later found a sex difference when facial stimuli were presented for a long duration of 10 s (Kirouac & Doré, 1985). In the present study, it might be argued that emphasis was placed on accuracy, as participants were instructed to take sufficient time to view each face to ensure that they could accurately label it. Indeed, this instruction has been used by others with similar results (Palermo & Coltheart, 2004). Some researchers (Calvo & Lundqvist, 2008; Rahman et al., 2004) have asked participants to respond as quickly as possible when identifying a facial expression, yet a sex difference in accuracy has still not emerged. It might be argued that our findings are limited in that we used only three images for each expression. Perhaps a larger stimulus set might have permitted a sex difference in accuracy to emerge, though no such difference has been shown in other studies using large stimulus sets of 336 (Palermo & Coltheart, 2004) and 280 faces (Calvo & Lundqvist, 2008).
The current findings also contrast with studies showing that males tend to be significantly better at recognizing anger than females (Biele & Grabowska, 2006; Mandal & Palchoudhury, 1985; Rotter & Rotter, 1988; Wagner et al., 1986). It has been reasoned that men tend to feel anger more frequently and have a more aggressive social role (Biele & Grabowska, 2006) compared with women, who tend to internalize rather than express the emotion (Rotter & Rotter, 1988). In the present study, both groups had higher accuracy scores when identifying positive emotions (surprised and happy), with surprised the most accurately identified emotion overall. Generally, studies report that participants' responses are most accurate for happy facial expressions (e.g., Grimshaw et al., 2004; Mandal & Palchoudhury, 1985; Montagne et al., 2005), though females have been found to be more accurate for surprised expressions when they are morphed from a neutral face (Montagne et al., 2005). In the present study, both groups were least accurate in identifying anxious expressions. It is well established that anxious or fearful facial expressions are more difficult to identify (Kirouac & Doré, 1985; Montagne et al., 2005) and that this difficulty increases with advancing age (Calder et al., 2003). It is unclear why anxious expressions are often the least easily identified in young healthy populations, though the decline with age could be attributed to cortical changes involving the amygdala (Calder et al., 2003). It could be suggested that this emotion is less frequently expressed and/or experienced in everyday interaction, making errors more likely when it is seen. There could also be variation in the manner in which the emotion is expressed, although all stimuli used in this study were chosen because they had high inter-rater reliability in the normative population originally assessed (Matsumoto & Ekman, 2004).
Previous researchers have evaluated reaction time between the sexes with mixed results. Rahman et al. (2004) compared males' and females' reaction times to two facial emotions (sad, happy) and a neutral face. Their findings were similar to our own and showed females were significantly faster for all emotions. In contrast, Palermo and Coltheart (2004) found no significant gender differences across a range of universal emotional expressions. The differences in these results are most likely underpinned by variation in the number of participants and the resultant power of the statistical analyses applied in the three studies. Rahman et al. (2004) recruited a large sample of 120 males and 120 females. Although our sample size (23 males, 27 females) was smaller than that of Rahman et al. (2004), it was considerably larger than that of Palermo and Coltheart (2004; 12 males, 12 females). In the present study, depending upon the emotional expression shown, participants took from 2734.6 ± 1035.6 ms (mean ± SD) to process happy expressions to 4512.8 ± 1718.5 ms to identify an anxious face (Figure 2). Emotional faces have however been accurately identified at much shorter presentation times (e.g., 30 ms by Goos & Silverman, 2002). One study (Hall & Matsumoto, 2004) found females were more accurate than males at stimulus presentation times of 0.07, 0.13, and 0.2 s. Those authors suggested that female decoding of facial affect was a more automated mechanism compared to males. This suggestion may explain why females tended to respond faster than males in the present study, though the precise mechanism employed in this automation is unclear. 
The neural regions and networks employed in facial affect recognition are vast, complex, and intertwined. They have been further described in recent years through the use of functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). For example, bilateral frontal and left parietal cortices are activated during the viewing of happy faces (Lee et al., 2002), while the amygdala predominantly subserves the recognition of negative emotions (Campbell et al., 2002; see Adolphs, 2002 for review). Sex differences in neural activation during the viewing of facial affect have been noted, especially for negative facial expressions (e.g., sad, fearful). Using chimeric faces, Bourne (2005) examined sex differences in hemispheric dominance for happy facial expressions. With accuracy as the dependent measure, males were found to be more lateralized (i.e., right hemisphere dominant) than females in their processing of positive facial affect. A meta-analysis of neuroimaging studies by Wager, Phan, Liberzon, and Taylor (2003) also reiterated the greater lateralization of emotional activity in males versus females. It might be postulated that the greater bilateral hemispheric distribution in females permits the rapid identification of certain facial expressions of emotion, especially positive expressions, the discrimination of which relies on both hemispheres (Adolphs, Jansari, & Tranel, 2001). This could help explain the present finding that females were significantly faster than males, at least in their identification of happy expressions.
The current results are the first to show that males and females differ significantly in their overt orienting of visual attention to salient features when viewing pictures of facial affect. Males tend to make more fixations, and fixate for longer, at the lower part of the face, especially the nose, when compared with females. Overall, however, both groups extracted facial information by predominantly orienting visual attention to the eyes rather than the nose or mouth. This pattern of visual search is a typical finding in normal healthy populations (e.g., Henderson et al., 2005; Williams, Loughland, Gordon, & Davidson, 1999). The current eye movement data offer some support for the suggestion that females adopt a more Gestalt approach to facial viewing, while males spend more time analyzing each portion of the face in question (Hall & Matsumoto, 2004). It might be argued that males' increased interest in the nose reflects lingering fixation on the central asterisk presented prior to each facial stimulus (Figure 1); however, because males still attended more to the eye region overall, this suggestion does not entirely hold. 
Conclusions
To conclude, it would seem that sex differences reported in relation to the time taken to interpret pictures of facial affect (Rahman et al., 2004) might be more aptly described in terms of the manner in which visual attention is oriented to salient facial features. It also now seems that describing sex differences solely in terms of differences in normal cortical processing of acquired visual information is not enough (Bourne, 2005; Campbell et al., 2002; Canli, Desmond, Zhao, & Gabrieli, 2002; Wager et al., 2003). The lack of a sex difference in the present study with respect to accuracy may be because the additional time spent by males was not used wisely, the nose being one of the least informative areas from which to extract emotional information. It might also simply reflect a ceiling effect in the analysis, with only three images employed for each emotional expression. 
Patients with certain neurological disorders such as schizophrenia (Loughland, Williams, & Gordon, 2002b), autism (Pelphrey et al., 2002; but see Rutherford & Towns, 2008), and Alzheimer's disease (Hargrave, Maddock, & Stone, 2002; Ogrocki, Hills, & Strauss, 2000) demonstrate an impaired visual scan path when viewing an emotional facial expression. This impairment encompasses an inability to direct visual attention to salient facial features such as the eyes, attending instead to uninformative regions such as the cheeks. Alternatively, a very restricted scan path may be exhibited, which may also account for a lack of appreciation of important non-verbal emotional cues. It should be clear that, while the present findings indicate a gender difference in the orienting of attention to the nose on the stimuli shown, the manner in which visual information was extracted appeared normal overall. That is, both groups attended predominantly to salient facial features. Nevertheless, these findings should be considered when investigating clinical populations, particularly those in which a substantial sex ratio difference exists (e.g., traumatic brain injury). 
Acknowledgments
The authors acknowledge the assistance of Mr. Shivam Sinha, Masters Student, Department of Computer Science and Engineering, La Trobe University, for developing the off-line software to analyze the ClearView data. SLC performed this experiment in partial fulfillment of the requirements for an Honors degree in Orthoptics at La Trobe University. The authors wish to thank Dr. Larry A. Abel and the two anonymous reviewers for their valuable comments. There were no sources of financial support for this work. 
Commercial relationships: none. 
Corresponding author: Suzane Vassallo. 
Email: S.Vassallo@latrobe.edu.au. 
Address: Department of Clinical Vision Sciences, La Trobe University, Bundoora Victoria 3086, Australia. 
References
Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169–177.
Adolphs, R., Jansari, A., & Tranel, D. (2001). Hemispheric perception of emotional valence from facial expressions. Neuropsychology, 15, 516–524.
Biehl, M., Matsumoto, D., Ekman, P., Hearn, V., Heider, K., & Kutoh, T. (1997). Matsumoto and Ekman's Japanese and Caucasian facial expressions of emotion: Reliability data and cross-national differences. Journal of Nonverbal Behavior, 21, 3–21.
Biele, C., & Grabowska, A. (2006). Sex differences in perception of emotion intensity in dynamic and static facial expressions. Experimental Brain Research, 171, 1–6.
Bornhofen, C., & McDonald, S. (2008). Emotion perception deficits following traumatic brain injury: A review of the evidence and rationale for intervention. Journal of the International Neuropsychological Society, 14, 511–525.
Bourne, V. J. (2005). Lateralised processing of positive facial emotion: Sex differences in strength of hemispheric dominance. Neuropsychologia, 43, 953–956.
Bryden, M. P. (1977). Measuring handedness with questionnaires. Neuropsychologia, 15, 617–624.
Calder, A. J., Keane, J., Manly, T., Sprengelmeyer, R., Scott, S., & Nimmo-Smith, I. (2003). Facial expression recognition across the adult life span. Neuropsychologia, 41, 195–202.
Calvo, M. G., & Lundqvist, D. (2008). Facial expressions of emotion (KDEF): Identification under different display-duration conditions. Behavior Research Methods, 40, 109–115.
Campbell, R., Elgar, K., Kuntsi, J., Akers, R., Terstegge, J., & Coleman, M. (2002). The classification of 'fear' from faces is associated with face recognition skill in women. Neuropsychologia, 40, 575–584.
Canli, T., Desmond, J. E., Zhao, Z., & Gabrieli, J. D. (2002). Sex differences in the neural basis of emotional memories. Proceedings of the National Academy of Sciences of the United States of America, 99, 10789–10794.
Dyer, A. G., Found, B., & Rogers, D. (2006). Visual attention and expertise for forensic signature analysis. Journal of Forensic Sciences, 51, 1397–1404.
Goos, L. M., & Silverman, I. (2002). Sex related factors in the perception of threatening facial expressions. Journal of Nonverbal Behavior, 26, 27–41.
Green, M. J., Uhlhaas, P. J., & Coltheart, M. (2005). Context processing and social cognition in schizophrenia. Current Psychology Reviews, 1, 11–22.
Grimshaw, G. M., Bulman-Fleming, M. B., & Ngo, C. (2004). A signal-detection analysis of sex differences in the perception of emotional faces. Brain and Cognition, 54, 248–250.
Hall, J. A., & Matsumoto, D. (2004). Gender differences in judgments of multiple emotions from facial expressions. Emotion, 4, 201–206.
Hargrave, R., Maddock, R. J., & Stone, V. (2002). Impaired recognition of facial expressions of emotion in Alzheimer's disease. Journal of Neuropsychiatry and Clinical Neurosciences, 14, 64–71.
Henderson, J. M., Williams, C. C., & Falk, R. J. (2005). Eye movements are functional during face learning. Memory & Cognition, 33, 98–106.
Hooker, C., & Park, S. (2002). Emotion processing and its relationship to social functioning in schizophrenia patients. Psychiatry Research, 112, 41–50.
Horley, K., Williams, L. M., Gonsalvez, C., & Gordon, E. (2004). Face to face: Visual scanpath evidence for abnormal processing of facial expressions in social phobia. Psychiatry Research, 127, 43–53.
Kirouac, G., & Doré, F. Y. (1984). Judgment of facial expressions of emotion as a function of exposure time. Perceptual and Motor Skills, 59, 147–150.
Kirouac, G., & Doré, F. Y. (1985). Accuracy of the judgement of facial expression of emotions as a function of sex and level of education. Journal of Nonverbal Behavior, 9, 3–7.
Knox, L., & Douglas, J. (2009). Brain and Cognition, 69.
Lee, T. M., Liu, H. L., Hoosain, R., Liao, W. T., Wu, C. T., & Yuen, K. S. (2002). Gender differences in neural correlates of recognition of happy and sad faces in humans assessed by functional magnetic resonance imaging. Neuroscience Letters, 333, 13–16.
Leigh, R. J., & Zee, D. S. (1999). The neurology of eye movements. New York: Oxford University Press.
Loughland, C. M., Williams, L. M., & Gordon, E. (2002a). Schizophrenia and affective disorder show different visual scanning behaviour for faces: A trait versus state-based distinction? Biological Psychiatry, 52, 338–348.
Loughland, C. M., Williams, L. M., & Gordon, E. (2002b). Visual scanpaths to positive and negative facial emotions in an outpatient schizophrenia sample. Schizophrenia Research, 55, 159–170.
Loughland, C. M., Williams, L. M., & Harris, A. W. (2004). Visual scanpath dysfunction in first-degree relatives of schizophrenia probands: Evidence for a vulnerability marker? Schizophrenia Research, 67, 11–21.
Mandal, M. K., & Palchoudhury, S. (1985). Perceptual skill in decoding facial affect. Perceptual and Motor Skills, 60, 96–98.
Matsumoto, D., & Ekman, P. (2004). Japanese and Caucasian facial expressions of emotion (JACFEE) and neutral faces (JACNeuF). Berkeley, CA: Paul Ekman & Associates.
Miyahira, A., Morita, K., Yamaguchi, H., Morita, Y., & Maeda, H. (2000). Gender differences and reproducibility in exploratory eye movements of normal subjects. Psychiatry and Clinical Neurosciences, 54, 31–36.
Miyahira, A., Morita, K., Yamaguchi, H., Nonaka, K., & Maeda, H. (2000). Gender differences of exploratory eye movements: A life span study. Life Sciences, 68, 569–577.
Montagne, B., Kessels, R. P., Frigerio, E., de Haan, E. H., & Perrett, D. I. (2005). Sex differences in the perception of affective facial expressions: Do men really lack sensitivity? Cognitive Processing, 6, 136–141.
Noton, D., & Stark, L. (1971). Eye movements and visual perception. Scientific American, 224, 35–43.
Nowicki, S., Jr., & Hartigan, M. (1988). Accuracy of facial affect recognition as a function of locus of control orientation and anticipated interpersonal interaction. Journal of Social Psychology, 128, 363–372.
Ogrocki, P. K., Hills, A. C., & Strauss, M. E. (2000). Visual exploration of facial emotion by healthy older adults and patients with Alzheimer disease. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 13, 271–278.
Palermo, R., & Coltheart, M. (2004). Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behavior Research Methods, Instruments, & Computers, 36, 634–638.
Pelphrey, K. A., Sasson, N. J., Reznick, J. S., Paul, G., Goldman, B. D., & Piven, J. (2002). Visual scanning of faces in autism. Journal of Autism and Developmental Disorders, 32, 249–261.
Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32, 3–25.
Rahman, Q., Wilson, G. D., & Abrahams, S. (2004). Sex, sexual orientation, and identification of positive and negative facial affect. Brain and Cognition, 54, 179–185.
Rotter, N. G., & Rotter, G. S. (1988). Sex differences in the encoding and decoding of negative facial emotions. Journal of Nonverbal Behavior, 12, 139–148.
Rutherford, M. D., & Towns, A. M. (2008). Scan path differences and similarities during emotion perception in those with and without autism spectrum disorders. Journal of Autism and Developmental Disorders, 38, 1371–1381.
Scholten, M. R., Aleman, A., Montagne, B., & Kahn, R. S. (2005). Schizophrenia and processing of facial emotions: Sex matters. Schizophrenia Research, 78, 61–67.
Sullivan, S., Ruffman, T., & Hutton, S. B. (2007). Age differences in emotion recognition skills and the visual scanning of emotion faces. Journals of Gerontology B: Psychological Sciences and Social Sciences, 62, P53–P60.
Thayer, J. F., & Johnsen, B. H. (2000). Sex differences in judgement of facial affect: A multivariate analysis of recognition errors. Scandinavian Journal of Psychology, 41, 243–246.
Tobii Technology AB. (2006). User manual: Tobii eye tracker and ClearView analysis software. Stockholm, Sweden: Tobii Technology AB.
Wager, T. D., Phan, K. L., Liberzon, I., & Taylor, S. F. (2003). Valence, gender, and lateralization of functional brain anatomy in emotion: A meta-analysis of findings from neuroimaging. Neuroimage, 19, 513–531.
Wagner, H. L., MacDonald, C. J., & Manstead, A. S. R. (1986). Communication of individual emotions by spontaneous facial expressions. Journal of Personality and Social Psychology, 50, 737–743.
Watts, A. J., & Douglas, J. M. (2006). Interpreting facial expression and communication competence following severe traumatic brain injury. Aphasiology, 20, 707–722.
Williams, L. M., Loughland, C. M., Gordon, E., & Davidson, D. (1999). Visual scanpaths in schizophrenia: Is there a deficit in face recognition? Schizophrenia Research, 40, 189–199.
Figure 1
 
Schematic representation of stimulus sequence.
Figure 2
 
Mean response time for each correctly labeled emotion. Error bars represent the SD. Significant main effects of sex and emotion were observed (* p < 0.05; ** p < 0.001).
Figure 3
 
(A) Duration and (B) number of fixations to predefined AOIs with emotion collapsed (* p < 0.05, ^ p = 0.05). RE = stimulus' right eye, LE = stimulus' left eye.
Table 1
 
Participant characteristics

                 Age, years#   Education, years   Years living in Australia   Handedness*
Male (n = 23)    25.57 (5.45)  18.30 (4.34)       21.91 (7.98)                0.87 (0.42)
Female (n = 27)  22.85 (2.96)  16.41 (1.37)       22.59 (2.81)                0.87 (0.35)

Note: Values provided as mean (SD).
#p = 0.04.
*−1.00 is extremely left-handed, +1.00 is extremely right-handed (Bryden, 1977).

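For readers unfamiliar with handedness questionnaires, the −1.00 to +1.00 scale above is consistent with a standard laterality quotient, (R − L)/(R + L), computed over right- and left-hand responses. The following sketch illustrates that scoring convention only; it is not the authors' scoring code, and the function name and inputs are hypothetical.

```python
def laterality_quotient(right_responses: int, left_responses: int) -> float:
    """Return a handedness score in [-1.0, +1.0].

    Computes (R - L) / (R + L): -1.0 indicates exclusively left-hand
    responses, +1.0 exclusively right-hand responses, 0.0 no preference.
    """
    total = right_responses + left_responses
    if total == 0:
        raise ValueError("no responses recorded")
    return (right_responses - left_responses) / total
```

For example, a respondent choosing the right hand on 9 of 10 items would score (9 − 1)/10 = 0.8, i.e., strongly but not extremely right-handed.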
Table 2
 
Mean (SD) total correct responses for each emotion. Score out of 3.

        Angry        Anxious      Sad          Disgusted    Happy        Surprised
Male    2.74 (0.62)  1.74 (0.75)  2.61 (0.66)  2.35 (0.83)  2.87 (0.34)  2.96 (0.21)
Female  2.70 (0.61)  1.67 (0.78)  2.52 (0.64)  2.52 (0.64)  2.85 (0.36)  3.00 (0.00)
Total   2.72 (0.61)  1.70 (0.76)  2.56 (0.64)  2.44 (0.73)  2.86 (0.35)  2.98 (0.14)