Research Article  |   February 2009
Viewpoint and center of gravity affect eye movements to human faces
Journal of Vision February 2009, Vol. 9(2):7. https://doi.org/10.1167/9.2.7
      Markus Bindemann, Christoph Scheepers, A. Mike Burton; Viewpoint and center of gravity affect eye movements to human faces. Journal of Vision 2009;9(2):7. https://doi.org/10.1167/9.2.7.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

In everyday life, human faces are encountered in many different views. Despite this fact, most psychological research has focused on the perception of frontal faces. To address this shortcoming, the current study investigated how different face views are processed, by measuring eye movements to frontal, mid-profile and profile faces during a gender categorization (Experiment 1) and a free-viewing task (Experiment 2). In both experiments observers initially fixated the geometric center of a face, independent of face view. This center-of-gravity effect induced a qualitative shift in the features that were sampled across different face views in the time period immediately after stimulus onset. Subsequent eye fixations focused increasingly on specific facial features. At this stage, the eye regions were targeted predominantly in all face views, and to a lesser extent also the nose and the mouth. These findings show that initial saccades to faces are driven by general stimulus properties, before eye movements are redirected to the specific facial features in which observers take an interest. These findings are illustrated in detail by plotting the distribution of fixations, first fixations, and percentage fixations across time.

Introduction
How are human faces processed? This question has inspired psychological research for decades (see, e.g., Bruce & Young, 1998) and has influenced the understanding of a wide range of important issues, such as human social interaction (see, e.g., Kleinke, 1986), disorders of visual perception (see, e.g., Ellis, Young, Quayle, & De Pauw, 1997; Morrison, Bruce, & Burton, 2001; Moscovitch, Winocur, & Behrmann, 1997), the reliability of eyewitness testimonies (e.g., Burton, Wilson, Cowan, & Bruce, 1999; Jenkins & Burton, 2008a; Megreya & Burton, 2006), and automatic person recognition systems (see, e.g., Jenkins & Burton, 2008b; Sinha, Balas, Ostrovsky, & Russell, 2006). Only a small proportion of this work has used eye movements to study face perception. This is surprising, as eye fixations are necessary to encode, identify and remember the details of visual objects, and considering that information from eye movements provides a real-time basis for observing internal visual processing (for reviews, see Henderson, 2003, 2007; Rayner, 1998). Moreover, previous studies that have used eye movements to investigate face perception have focused exclusively on frontal face stimuli (Althoff & Cohen, 1999; Haith, Bergman, & Moore, 1977; Henderson, Williams, & Falk, 2005; Janik, Wellens, Goldberg, & Dell'Osso, 1978; Walker-Smith, Gale, & Findlay, 1977), despite the fact that faces are frequently encountered in a non-frontal view. In this study, we explore this gap in knowledge by measuring eye movements during the presentation of frontal, mid-profile and profile views. To anticipate, we found that the majority of fixations were directed at the eyes and, to a lesser extent, the nose and mouth, consistent with previous studies in this field. However, changes in viewpoint induced qualitative shifts in the sampling behavior of facial features, particularly shortly after face onset. 
This pattern arises from the center-of-gravity effect, which draws initial fixations invariably to the geometric center of a stimulus. This effect has gone unnoticed in previous research, because this region coincides with the location of the eyes and nose in frontal faces. 
Although faces are seen frequently in a non-frontal pose, perhaps on more than 75% of all encounters (see, e.g., Li & Zhang, 2004; see also Baddeley & Woodhead, 1983), most psychological research on human face processing has focused on the perception of frontal face images. This is remarkable as the appearance of faces varies in some fundamental ways across different viewpoints. The eyes, for example, are one of the most distinguishing features for human face detection (Lewis & Edmonds, 2003) and for the computerized perception of frontal face images (e.g., Fasel, Fortenberry, & Movellan, 2005; Viola & Jones, 2004). But while a frontal face image displays a contiguous pair of eyes, only a solitary eye is visible in profile faces. Facial features such as the nose, mouth and ears are also seen from radically different angles and appear in different spatial locations for profile and frontal faces. 
This variability is such that a frontal face view, for example, fails to convey accurate information about a person's profile, and a face profile does not provide full information about other views. Accordingly, recognition performance drops when faces must be matched across views, and this drop-off increases linearly as the difference between two face views increases (see, e.g., Burke, Taubert, & Higman, 2007; Liu & Chaudhuri, 2002; Newell, Chiroro, & Valentine, 1999; O'Toole, Edelman, & Bülthoff, 1998). However, even slight variations in viewpoint can lead to surprisingly poor performance when subjects are asked to match two unfamiliar faces (Hancock, Bruce, & Burton, 2000). In line with these observations, face adaptation effects are also substantially reduced when viewpoint is changed between adaptation and test faces (Jeffery, Rhodes, & Busey, 2006). This demonstrates that face encoding is view-specific, and, considering that faces are frequently encountered in a non-frontal pose, suggests that the facility to process different viewpoints is crucial for all human tasks with faces (i.e., detection, identification, gender classification, expression analysis, etc.). And yet, this is an aspect of face perception that is very poorly understood. 
Eye movements provide a sensitive, real-time measure of visual processing (see Henderson, 2003, 2007; Rayner, 1998), and hold great promise for studying how different face views are perceived. So far, eye tracking studies of face perception have shown that the eye regions are fixated most frequently, followed by the nose and mouth (Haith et al., 1977; Janik et al., 1978; Walker-Smith et al., 1977), but specific eye movement strategies for viewing these features appear to vary widely across individuals (Walker-Smith et al., 1977), and across different studies (see also Blais, Jack, Scheepers, Fiset, & Caldara, 2008). Despite this, eye movements are evidently functional in face perception. Eye movements to familiar faces, for example, elicit fewer fixations than to unfamiliar faces, and involve the sampling of fewer face regions (Althoff & Cohen, 1999; Heisz & Shore, 2008; see also Luria & Strauss, 1978). Similarly, recognition memory for faces deteriorates dramatically when eye movements during face learning are restricted (Henderson et al., 2005). 
All of the studies cited above have a shortcoming, in that eye movements were recorded only for viewing of frontal faces. These studies therefore provide limited information about how other face views are processed. Indeed, the use of frontal faces only is problematic in itself, because the location of the features of most interest—the eyes, nose and mouth—is inevitably confounded with the center of a face. This problem is exacerbated because previous studies have presented faces in the center of the screen, immediately after a preceding central fixation dot. This raises the possibility that the affinity to the eyes and nose in frontal faces can be attributed in part to the center-of-gravity effect or the global effect, which refers to the tendency of eye saccades to land at the center of a target object or a target configuration (see, e.g., Coren & Hoenig, 1972; Findlay, 1981, 1982; Findlay & Gilchrist, 1997; He & Kowler, 1989). So far, however, the center-of-gravity effect has been largely neglected in the face domain. This is important theoretically because, if this effect is also present in face perception, then it is impossible to establish whether initial eye movements are driven by the center-of-gravity effect or by specific facial features, when studying frontal faces alone. 
The purpose of the present study was two-fold. The first aim was to determine how different face views are processed, by quantifying which facial features are fixated in frontal, mid-profile and profile views. The second, closely related aim was to investigate the contribution of the center-of-gravity effect to face perception. To investigate these questions, observers were shown one face at a time, which could appear in one of four off-center onscreen locations. Face location was varied in this manner to ensure that the observers were not inherently fixating the center of a face at the start of a trial. In Experiment 1, participants classified these faces according to their gender, to measure eye movements under confined task demands. The rationale for this task was that gender decisions can be made relatively quickly compared to other face decisions (see, e.g., Bindemann, Burton, & Jenkins, 2005; Le Gal & Bruce, 2002), thus limiting the number of facial features that are sampled before a decision is reached. In Experiment 2, a free-viewing task was then employed to record spontaneous eye movements during face viewing. This was done to encourage the sampling of a range of face regions, rather than confining viewing patterns through specific task demands. 
Experiment 1
Method
Participants
Sixteen undergraduate students from the University of Glasgow participated in this task. All had normal vision and received a small fee for participation. 
Stimuli
The stimuli consisted of high quality color photographs of the faces of 20 different models (10 male), which were depicted in 5 different poses (frontal, mid-profile left, mid-profile right, profile left, and profile right), giving a total of 100 different images. Photographs of mid-profile faces were taken so that both eyes remained visible. The face images were cropped to remove extraneous background and scaled to a height of 384 pixels (13.55 cm at a resolution of 72 pixels/inch), while width was adjusted accordingly to preserve the relative image dimensions. Thus, faces were not shape-normalized, to capture natural variability inherent in faces (see, e.g., Farkas, 1994). The faces were then superimposed on a 512 (W) × 384 (H) pixel white background for presentation. During the experiment, the faces could appear in one of four onscreen locations, corresponding to the top-left, top-right, bottom-left, and bottom-right quadrant of a 1024 × 768 screen display. Figure 1 illustrates the five experimental conditions. 
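The scaling rule described above can be sketched in a few lines. This is a hypothetical helper, not the authors' stimulus-preparation code; it only illustrates the arithmetic of fixing the height at 384 pixels while preserving each image's aspect ratio.

```python
# Sketch of the scaling step (assumed helper, not the authors' code):
# every face image is resized to a fixed height of 384 px, and the width
# is scaled by the same factor so the relative image dimensions survive.

TARGET_HEIGHT = 384  # pixels (13.55 cm at 72 pixels/inch)

def scaled_size(orig_width: int, orig_height: int) -> tuple[int, int]:
    """Return (width, height) after scaling to TARGET_HEIGHT while
    preserving the original aspect ratio."""
    scale = TARGET_HEIGHT / orig_height
    return round(orig_width * scale), TARGET_HEIGHT
```

For example, a 600 × 768 (W × H) photograph would scale to 300 × 384, so narrower profile shots and wider frontal shots end up with different widths but a common height.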
Figure 1
 
An example of the faces used in Experiment 1. Faces were presented in (from left to right) profile left, mid-profile left, frontal, mid-profile right, and profile right pose.
Procedure
The stimuli were displayed using SR-Research ExperimentBuilder software (Version 1.4.2) on a 21 inch color monitor that was connected to an SR-Research Eyelink II head-mounted eye tracking system running at 500 Hz sampling rate. Viewing was binocular, but only the participants' dominant eye was tracked. To calibrate the eye tracker, participants fixated a series of nine fixation targets on the display monitor. Calibration was then validated against a second sequence of nine fixation targets. If the latter indicated poor measurement accuracy, calibration was repeated. This procedure was carried out at the beginning of the experiment and every 25 trials thereafter. 
Each trial began with the presentation of a single centrally located dot, which participants were asked to fixate so that an automatic drift correction could be performed. While the participant fixated this dot, the experimenter pressed a button to initiate a trial. A face stimulus was then displayed for a maximum of 3000 msec or until a response was made. Recall that the face stimuli always appeared eccentrically in one of four possible quadrants of the screen. This ensured that the central fixation dot at the beginning of each trial did not coincide with any of the critical face regions (i.e. the first fixation on the face would always follow a saccade from the central fixation point). Participants were instructed to classify the faces as male or female, using their index fingers to press the corresponding keys on a button pad. Participants were asked to respond as quickly as possible without making any errors. 
Each participant completed 100 trials, so that each of the twenty face identities was shown once in each of the five conditions. For each participant, the face stimuli were equally likely to appear in each of the four onscreen locations. The presentation of stimuli was counterbalanced across participants, so that each face appeared in each location an equal number of times over the course of the experiment. The five experimental conditions were randomly intermixed, and participants were given a short break every 25 trials followed by a re-calibration phase. 
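One simple way to realize this counterbalancing is a rotation scheme, so that across every group of four participants each face visits each quadrant exactly once. The function below is an assumed sketch of that logic (the paper does not specify the exact assignment procedure).

```python
# A minimal sketch (assumed, not the authors' code) of counterbalancing
# face-to-location assignments: rotating the assignment by participant
# guarantees each face appears in each quadrant equally often overall.

LOCATIONS = ["top-left", "top-right", "bottom-left", "bottom-right"]

def location_for(face_index: int, participant: int) -> str:
    """Rotate the quadrant assignment by participant index, Latin-square
    style, so every face cycles through all four onscreen locations."""
    return LOCATIONS[(face_index + participant) % len(LOCATIONS)]
```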
Results
The main results can be summarized as follows. Participants initially fixated the center of a face stimulus, independent of face view. This central bias induced a qualitative shift in the features that were sampled across the different face views in the time period immediately after stimulus onset. This effect is most striking in the profile face condition; while the eyes and nose were fixated in frontal faces and the most centrally located eye in mid-profile faces, eye movements to profile faces initially failed to land on specific facial features. Thereafter, observers fixated the only eye that was visible in profile faces, consistent with the strong interest for the eye region in the other face views. 
Behavioral performance
To analyze performance in the gender categorization task, a one-factor ANOVA (frontal face, mid-profile left, mid-profile right, profile left, profile right) was conducted on the mean reaction times and percentage errors. Errors were made on 1.3% of trials, indicating compliance with the task demands, and were evenly distributed across conditions, F(4, 60) = 1.37, p = 0.25. Similarly, response times to faces did not show an effect of face view, F(4, 60) = 1.54, p = 0.20 (frontal face, 707 msec; mid-profile left, 722 msec; mid-profile right, 693 msec; profile left, 701 msec; profile right, 704 msec). 
Data processing
Eye movements were processed from face onset for the purpose of aggregating fixation locations and durations. An automatic procedure was used to pool short contiguous fixations. Fixations shorter than 80 msec were integrated with the immediately preceding or following fixation if that fixation lay within half a degree of visual angle; otherwise the fixation was excluded. The rationale for this was that such short fixations usually result from false saccade planning and are unlikely to reflect meaningful information processing (see Rayner & Pollatsek, 1989). When an eye-blink occurred, its duration was added to the immediately preceding fixation, as processing is unlikely to pause during a blink. The spatial coordinates of eye fixations from all trials were then normalized according to the four possible onscreen locations. 
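The pooling rule can be made concrete with a short sketch. The `Fixation` type and function below are assumptions for illustration only, not the authors' processing pipeline; they implement the stated rule of merging sub-80-msec fixations into a neighbor within half a degree and discarding them otherwise.

```python
# Sketch of the fixation-pooling rule (hypothetical helper): fixations
# shorter than 80 msec are merged into an adjacent fixation lying within
# 0.5 deg of visual angle; isolated short fixations are excluded.
from dataclasses import dataclass
import math

@dataclass
class Fixation:
    x: float    # horizontal position, degrees of visual angle
    y: float    # vertical position, degrees of visual angle
    dur: float  # duration in msec

def _dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def pool_fixations(fixations, min_dur=80.0, max_sep=0.5):
    """Merge fixations shorter than min_dur into an adjacent fixation
    within max_sep degrees; otherwise discard them."""
    fixes = [Fixation(f.x, f.y, f.dur) for f in fixations]  # work on copies
    out = []
    for i, f in enumerate(fixes):
        if f.dur >= min_dur:
            out.append(f)
        elif out and _dist(out[-1], f) <= max_sep:
            out[-1].dur += f.dur          # fold into the preceding fixation
        elif i + 1 < len(fixes) and _dist(fixes[i + 1], f) <= max_sep:
            fixes[i + 1].dur += f.dur     # fold into the following fixation
        # otherwise: too short and too far from any neighbor -> excluded
    return out
```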
Dispersion of eye fixations
To quantify the dispersion of fixations, we first plotted all individual fixations across three time intervals, covering 0–250 msec, 250–500 msec and 500–1000 msec after face onset (see Figure 2). As can be seen in Figure 2, a proportion of fixations are located in the four corners of the graph during the first time interval (0–250 msec). These fixations correspond to the central fixation point: because the faces were presented in the four quadrants of the screen and fixation coordinates were pooled across these four onscreen locations, the center of the screen maps onto the four corners of the graph. 
Figure 2
 
An illustration of the dispersion of fixations as a function of experimental condition across three time intervals, spanning 0–250 msec, 250–500 msec and 500–1000 msec from face onset. For illustration purposes, the fixations are superimposed on an example face from Experiment 1.
In the same time interval, eye saccades were initiated towards the faces. The distribution of these fixations indicates that eye fixations were clustered initially around the same spatial location—the center of the face stimuli—in all of these conditions. In frontal faces, this location corresponds to the area between the eyes, nose and forehead. In mid-profile faces, the most centrally located ROI is the innermost eye, that is, the right eye in mid-profile left faces and the left eye in mid-profile right faces, and the majority of fixations also appeared to fall on this feature. However, this central fixation bias is most apparent in the profile conditions. Here, initial fixations did not fall on specific facial features, but landed halfway between the eye and ear. 
In subsequent time intervals, fixations continue to fall closest to the eyes and nose in frontal faces. Similarly, by the 500–1000 msec time interval, the fixations are closely clustered around the innermost eye in mid-profile faces, and the only eye in profile faces. However, in profile faces, this involves a shift in sampling behavior away from the center of a face. Thus, as the eyes are increasingly looked at in profile view, the central area that was fixated predominantly during the first time interval is gradually vacated. 
First fixations
To quantify the dispersion of fixations, we next examined the proportion of first fixations to a set of predefined regions of interest (ROIs), to assess the extent to which specific facial features were looked at upon the detection of a face. For frontal face stimuli, these ROIs correspond to the left eye, right eye, nose, mouth, left ear, and right ear. These ROIs were determined on an individual basis for each face stimulus and are consistent with those defined in previous eye movement studies (see, e.g., Althoff & Cohen, 1999; Henderson et al., 2005). The remaining visible area of the face was classified either as hair or face-other. For the mid-profile left condition, the same ROIs were employed except the left ear, which was no longer visible in this condition. The corresponding set of ROIs was used for the mid-profile right condition. The ROIs for the profile left condition corresponded to the right eye, nose, mouth, right ear, hair and face-other. Again, the corresponding set of ROIs was used for the profile right condition. Note that these features are labeled from an observer's perspective. Therefore, the left eye in frontal and mid-profile faces, for example, refers to the eye that appears closest to an observer's left. These ROIs are illustrated in Figure 3.
Figure 3
 
An illustration of the regions of interest used in the analysis (left eye, right eye, nose, mouth, left ear, right ear, hair, face-other). Note that the color coding corresponds to Figures 4 and 5.
First fixations were defined as the earliest fixation that was made after face onset and are summarized in Figure 4. In frontal faces, 36% of first fixations fell on the eye regions and 30% fell on the nose. The left eye received more fixations than the right eye (23% vs. 13%), consistent with a left visual field bias for face processing (see, e.g., Burt & Perrett, 1997; Butler et al., 2005). In comparison, about half of all first fixations were directed at the innermost eye in mid-profile faces, whereas each of the other facial features was fixated on fewer than 10% of trials. The pattern of first fixations to mid-profile faces therefore differs markedly from the frontal face condition. Inspection of the profile face conditions suggests a further shift in the features that were sampled initially. For these viewpoints, very few saccades fell on specific features; most landed on the face-other and hair regions. 
Figure 4
 
The percentage of first fixations (top figure) and the area-normalized first fixation scores (bottom figure) to each ROI for the five face conditions in Experiment 1.
The different ROIs could vary in size within a face, and the same ROIs also varied in size across different viewpoints (see Figure 3). This raises the possibility that the percentage fixation data reflect the relative areas of the ROIs, rather than the absolute interest that a particular region holds for an observer. In a next step, the first fixation scores were therefore area-normalized to address this issue. This was achieved by dividing the percentage of fixations to a ROI by the size of the ROI, which was expressed as a percentage of the total area of a face. As a result of this adjustment, a score close to one indicates that an area is fixated randomly, whereas scores significantly greater than one indicate that a specific region is targeted (see, e.g., Fletcher-Watson, Findlay, Leekam, & Benson, 2008). Figure 4 also illustrates the area-normalized first fixation scores for all conditions. 
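The normalization itself is a single division, sketched below under the assumption that both quantities are expressed as percentages (function name is illustrative, not from the paper).

```python
# Area-normalized fixation score: percentage of fixations on an ROI
# divided by the ROI's area as a percentage of the whole face.
# A score near 1 = random fixation; well above 1 = targeted region.

def area_normalized_score(pct_fixations: float, pct_area: float) -> float:
    """Both arguments are percentages (0-100)."""
    return pct_fixations / pct_area
```

So an ROI covering 5% of the face that attracts 25% of fixations scores 5.0, five times what random sampling would produce.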
Compared to the chance level of one, the area-normalized scores revealed a significant proportion of first fixations to the nose region, t(15) = 5.06, p < 0.01, and the left eye in frontal faces, t(15) = 3.21, p < 0.01, but not to the right eye, t(15) = 1.51, or any of the other features. In mid-profile left faces, the right eye was the main recipient of first fixations, t(15) = 7.75, p < 0.01, and likewise, the left eye received a high proportion of fixations in mid-profile right faces, t(15) = 9.14, p < 0.01. Finally, in profile left faces, the right eye, t(15) = 2.62, p < 0.01, and the face-other region received a significant number of fixations, t(15) = 2.67, p < 0.01. In profile right faces, the left eye and the face-other region were also fixated above chance, t(15) = 2.97, p < 0.01 and t(15) = 5.44, p < 0.01, respectively. The area-adjusted scores are therefore consistent with the non-normalized percentage fixations in demonstrating a qualitative shift across viewpoints in the ROIs that are sampled immediately after face onset. 
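These above-chance comparisons are one-sample t-tests of the participants' normalized scores against the fixed chance level of one. A minimal standard-library sketch of the t statistic (assumed helper; in practice a statistics package would also supply the p-value):

```python
# One-sample t statistic against a fixed chance level (df = n - 1).
# Sketch only: t = (mean - chance) / (sd / sqrt(n)), with the sample
# standard deviation in the denominator.
import math
import statistics

def one_sample_t(scores, chance=1.0):
    """Return the t statistic for testing mean(scores) against chance."""
    n = len(scores)
    se = statistics.stdev(scores) / math.sqrt(n)  # standard error of the mean
    return (statistics.mean(scores) - chance) / se
```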
Percentage fixations over time
In addition to the first fixation data, the overall percentage of fixations to each ROI was also analyzed from face onset until a response was registered, by splitting the eye movement data into 50-msec time bins. The percentage fixations and area-normalized scores across these time intervals are illustrated in Figure 5. In this figure, normalized scores with filled circles indicate ROIs that are fixated above chance (chance = 1; see First fixations section), as analyzed via a series of uncorrected one-sample t-tests (p < 0.05). 
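The binning step can be sketched as follows. This is an assumed illustration of the bookkeeping (names and the `(roi, start, end)` representation are hypothetical): each fixation's duration is distributed over the 50-msec bins it overlaps, and each bin then reports the percentage of fixated time per ROI.

```python
# Sketch (hypothetical helper) of the timeline analysis: fixation time is
# split into 50-msec bins, and each bin records the percentage of fixated
# time falling on each ROI.
from collections import Counter

BIN_MS = 50  # bin width used in the analyses above

def percentage_by_bin(fixations, trial_end_ms):
    """fixations: iterable of (roi, start_ms, end_ms) triples. Returns one
    dict per 50-msec bin, mapping ROI -> percentage of fixated time."""
    n_bins = trial_end_ms // BIN_MS
    time_in_bin = [Counter() for _ in range(n_bins)]
    for roi, start, end in fixations:
        first = int(start) // BIN_MS
        last = min(n_bins, -(-int(end) // BIN_MS))  # ceiling division
        for b in range(first, last):
            lo, hi = b * BIN_MS, (b + 1) * BIN_MS
            overlap = min(end, hi) - max(start, lo)
            if overlap > 0:
                time_in_bin[b][roi] += overlap
    result = []
    for counts in time_in_bin:
        total = sum(counts.values())
        result.append({roi: 100.0 * t / total for roi, t in counts.items()}
                      if total else {})
    return result
```

A bin with no fixated time (e.g. before the first saccade lands on the face) simply stays empty, matching the absence of data points before roughly 200 msec in Figure 5.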
Figure 5
 
The percentage fixations (left column) and area-normalized fixation scores (right column) for the five face views as a function of time in Experiment 1. Normalized scores with filled circles indicate ROIs that are fixated above chance (p < 0.05).
As can be seen in Figure 5, faces were fixated from 200 msec after stimulus onset. Consistent with the first fixation data, these fixations were clustered around the three central features in frontal faces: the nose, the left eye and the right eye. The normalized scores show that the nose and the left eye were fixated above chance from 200 msec onwards, whereas the right eye required another 100 msec to do so. In mid-profile and profile faces, the majority of fixations were devoted to a single facial feature. This ROI corresponds to the innermost eye in mid-profile faces, and the only eye that was visible in profile faces. However, in profile faces none of the facial features were fixated reliably before 250 msec, indicating a slight delay in comparison to the frontal and mid-profile conditions. This observation concurs with the dispersion of fixations (see Figure 2), which shows that, as observers were initially drawn to the center of a face stimulus, they failed to fixate specific facial features in profile faces. 
Discussion
This experiment shows that the eye regions and nose are fixated predominantly in frontal faces. This is consistent with previous studies, which have obtained similar viewing biases towards these features (e.g., Althoff & Cohen, 1999; Henderson et al., 2005). By comparison, a dominant viewing bias was observed towards the innermost eye in mid-profile faces and the only visible eye in profile faces, indicating that viewpoint influences the features that are looked at in a face. It is notable, however, that the most striking differences were observed shortly after face onset; while the eyes and nose were fixated in frontal faces and the innermost eye in mid-profile faces, initial eye movements to profile faces failed to land on specific facial features. The distribution of fixations, first fixations and timeline data concur to show that this arises from a tendency to look initially at the center of a face stimulus, before observers target features directly. These findings suggest that initial fixations to a face are not determined by specific facial features, for example, such as the eyes (see, e.g., Lewis & Edmonds, 2003), but arise from the center-of-gravity effect, whereby observers are at first drawn to the geometric center of an object (see, e.g., Coren & Hoenig, 1972; Findlay, 1982; Findlay & Gilchrist, 1997). 
In light of these findings, it is remarkable that there was no discernible effect of viewpoint on the speed or accuracy with which gender decisions were made. This indicates that each of the face views provides sufficiently strong gender cues to make this decision with relative ease, independent of whether initial fixations fall on specific features, such as the eyes and nose. Indeed, although the eye movement data for the frontal face condition are qualitatively similar to other studies in this field (Althoff & Cohen, 1999; Haith et al., 1977; Henderson et al., 2005; Janik et al., 1978; Walker-Smith et al., 1977), the mid-profile and profile conditions show that eye fixations were surprisingly restricted in this task, and confined largely to a solitary facial feature in non-frontal face views. This raises the question of whether eye movements are generally more constrained in mid-profile and profile views, or whether facial features are sampled more broadly when the task demands provide greater freedom to look at all of the face regions. To examine this possibility, these faces were shown under free-viewing conditions in Experiment 2. The aim of this task was to record spontaneous eye movements during face viewing, rather than to confine viewing patterns through specific task demands. Thus, participants were told to look at the faces throughout the experiment, but were given no further instructions. 
Experiment 2
Method
Participants
Twenty new undergraduate students from the University of Glasgow participated in this task. All had normal vision and received a small fee or course credit for participation. 
Stimuli and procedure
The stimuli and procedure were identical to Experiment 1, except that the study investigated spontaneous eye movements with a free-viewing task. Therefore, participants were encouraged to move their eyes freely over each face and to make as many eye movements as they wished. As in Experiment 1, each trial began with the presentation of a single centrally located dot, which participants were asked to fixate so that an automatic drift correction could be performed. While the participant fixated this dot, the experimenter pressed a button to initiate a trial. A face stimulus was then displayed for 3000 msec. Each participant completed 100 randomly intermixed trials, so that each of the twenty face identities was shown once in each of the five conditions. Short breaks were given every 25 trials, followed by a re-calibration phase. 
Results
The results of this experiment can be summarized as follows. As in Experiment 1, participants initially fixated the center of a face, independent of face view. This center-of-gravity effect again induced a qualitative shift in the features that were sampled across different views for a short time period following face onset. Overall, however, the faces were sampled more broadly in the free-viewing task than in Experiment 1, encompassing, to a limited extent, fixations to both eyes in mid-profile faces, the ear in profile faces, and the nose and mouth in all of the conditions. 
Dispersion of eye fixations
Eye movements were pre-processed and analyzed in the same way as in Experiment 1. In a first step, the distribution of individual fixations was plotted across five time intervals, spanning 0–250 msec, 250–500 msec, 500–1000 msec, 1000–2000 msec, and 2000–3000 msec after face onset. In addition, eye fixations for the full trial period are shown, from 0 to 3000 msec (see Figure 6). 
Figure 6
 
An illustration of the dispersion of fixations as a function of experimental condition across five time intervals, spanning 0–250 msec, 250–500 msec, 500–1000 msec, 1000–2000 msec and 2000–3000 msec from face onset. In addition, the fixations for the full trial period are shown (0–3000 msec).
As in Experiment 1, fixations clustered around the center of a face in the first time interval. This central area covers the eyes and nose in frontal faces, the innermost eye in mid-profile faces, and the area between the eye and ear region in profile faces. At subsequent time intervals, eye fixations were visibly more focused on specific facial features in all of the conditions. In frontal faces, fixations continue to be allocated around the central region of a face, taking in the eye and nose regions, and to a reduced extent the mouth. In the mid-profile and profile conditions, the eye, nose and mouth regions were also fixated more directly. These viewing patterns are remarkably consistent, and rarely encompass regions outside these facial features. 
First fixations
In a next step, the percentage of first fixations was calculated for each ROI to quantify the distribution of fixations (see Figure 7). As can be seen from Figure 7, the eyes received 29% of first fixations in frontal faces. As in the gender task, these were biased towards the left eye, which received more fixations than the right eye (19% vs. 10%). However, the largest proportion of first fixations fell on the area occupied by the nose (37%). In the other face views, the largest proportion of first fixations was directed at the innermost eye in mid-profile faces, while the majority of first fixations failed to land on specific facial features in profile faces, falling mostly on the face-other and hair regions. 
Figure 7
 
The percentage of first fixations (top figure) and the area-normalized first fixation scores (bottom figure) to each ROI for the five face conditions in Experiment 2.
In line with these observations, area-normalized scores revealed a significant proportion of first fixations to the nose region, t(15) = 5.89, p < 0.01, and the left eye, t(15) = 3.60, p < 0.01, but not to the right eye in frontal faces, t(15) = 1.43, compared to a chance level of one (see Experiment 1). In mid-profile left faces, the right eye, t(15) = 9.01, p < 0.01, and nose, t(15) = 2.52, p < 0.05, also received a significant proportion of first fixations. A similar pattern was found for mid-profile right faces, where the left eye was by far the most fixated facial feature, t(15) = 7.15, p < 0.01. In profile left faces, the right eye and the face-other region were fixated above chance, t(15) = 3.40, p < 0.01, and t(15) = 5.72, p < 0.01, respectively. Similarly, in profile right faces, a significant proportion of fixations fell on the eye, t(15) = 1.87, p < 0.05, the ear, t(15) = 2.64, p < 0.01, and the face-other region, t(15) = 3.89, p < 0.01. In raw percentage terms, however, the eye regions and the ear still received less than 15% of first fixations in profile faces. 
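The normalization underlying these scores is simple to state: the proportion of fixations an ROI receives is divided by the proportion of stimulus area it occupies, so that a score of 1 corresponds to chance. The following is a minimal sketch of that logic with a hand-rolled one-sample t-test; the function names and all numbers are illustrative, not the study's data or code.

```python
import math
import statistics

def normalized_score(prop_fixations, prop_area):
    """Area-normalized fixation score: proportion of fixations to an ROI
    divided by the ROI's proportion of stimulus area; chance is 1.0."""
    return prop_fixations / prop_area

def one_sample_t(scores, chance=1.0):
    """t statistic for a one-sample t-test against a fixed chance level
    (df = len(scores) - 1)."""
    n = len(scores)
    se = statistics.stdev(scores) / math.sqrt(n)
    return (statistics.mean(scores) - chance) / se

# Illustrative scores for one ROI from 16 hypothetical participants.
scores = [1.8, 2.1, 1.5, 2.4, 1.9, 2.2, 1.7, 2.0,
          1.6, 2.3, 1.8, 2.1, 1.9, 1.7, 2.2, 2.0]
t = one_sample_t(scores)  # compare against the critical value for df = 15
```

An ROI is then reported as fixated above chance when the resulting t exceeds the critical value for the relevant degrees of freedom (here df = 15, matching the t(15) statistics above).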
Percentage fixations over time
In a final step, the overall percentage of fixations to each ROI was analyzed over the course of the trial, covering the time period from stimulus onset (0 msec) to stimulus offset (3000 msec) in 50-msec time bins. The percentage fixations and the area-normalized scores for the trial interval are illustrated in Figure 8. As in Experiment 1, normalized scores with filled circles indicate ROIs that are fixated above chance, as analyzed via a series of one-sample t-tests ( p < 0.05). 
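The time-course analysis described above can be sketched as follows: each fixation, defined by its onset, offset and ROI label, contributes fixated time to every 50-msec bin it overlaps, and each bin's totals are converted to percentages. This is a hypothetical reconstruction of the analysis logic, not the study's code; the data format is assumed.

```python
from collections import defaultdict

BIN_MS = 50      # bin width used in the timeline analysis
TRIAL_MS = 3000  # stimulus onset (0 msec) to offset (3000 msec)

def percent_fixations(fixations):
    """fixations: list of (onset_ms, offset_ms, roi) tuples.
    Returns {bin_start: {roi: percent of fixated time in that bin}}."""
    time_in_bin = defaultdict(lambda: defaultdict(float))
    for onset, offset, roi in fixations:
        for b in range(0, TRIAL_MS, BIN_MS):
            # Overlap between the fixation and the bin [b, b + BIN_MS)
            overlap = min(offset, b + BIN_MS) - max(onset, b)
            if overlap > 0:
                time_in_bin[b][roi] += overlap
    return {b: {roi: 100.0 * t / sum(rois.values())
                for roi, t in rois.items()}
            for b, rois in time_in_bin.items()}
```

For example, a fixation on the nose from 0–100 msec followed by one on the left eye from 100–300 msec yields 100% nose in the 0- and 50-msec bins and 100% left eye in the 100- to 250-msec bins.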
Figure 8
 
The percentage fixations (left column) and area-normalized fixation scores (right column) for the five face views for the full trial interval in Experiment 2. Normalized scores with filled circles indicate ROIs that are fixated above chance ( p < 0.05).
The timeline data show that facial features in frontal faces were fixated above chance from 150 msec after stimulus onset. The area-normalized scores indicate that the nose was fixated from 150 msec onwards, whereas it took another 50 msec and 100 msec, respectively, to direct a significant number of fixations to the left and right eye. These three ROIs maintained this advantage until the face was removed from view. In addition, the mouth was fixated with some consistency between 1250 and 2050 msec. 
In mid-profile left faces, the nose and right eye were fixated from 150 to 200 msec after face onset. This advantage persisted throughout the trial, although, of these two features, the right eye consistently received the greater number of fixations. The left eye and mouth were also fixated above chance from 750 msec after face onset, but this interest was not sustained throughout the subsequent trial period. Eye movements to mid-profile right faces largely mirrored their left-sided counterparts, with a significant proportion of fixations falling on the left eye from 200 msec after face onset and on the nose from 350 msec onwards. The right eye and mouth were also fixated between 750 and 3000 msec, but the majority of fixations were devoted to the left eye throughout, indicating that this feature was pivotal to the participants' viewing interests. 
As in Experiment 1, fixations to specific facial features were slightly delayed in the profile conditions. In profile left faces, the right eye was fixated from 250 msec onwards, and the nose and mouth were also viewed with some consistency from 350 and 450 msec onwards, respectively. In addition, and unlike frontal and mid-profile faces, observers devoted a small proportion of fixations to the ear, between 1300 and 2450 msec after face onset. In profile right faces, the largest proportion of fixations also fell on the eye, lasting from 300 msec until face offset, and the nose and mouth were looked at to a smaller extent from 350 and 600 msec onwards, respectively. A slight departure from the profile left condition is that the first feature to receive a significant proportion of fixations after face onset was the ear, although overall, this accounted for less than 15% of non-normalized fixations. Indeed, consistent with the distribution of fixations and the first fixation data, the non-normalized scores suggest that observers were mostly fixating the hair and face-other regions of profile faces during the early stages of a trial, rather than looking at specific features. 
Discussion
The distributions of fixations, the first fixation data and the timeline data reveal distinct viewing patterns for frontal, mid-profile and profile faces. As in Experiment 1, the results suggest that these viewing patterns arise for two reasons. During the early stages of a trial, eye fixations are determined by the center-of-gravity effect. Thus, observers initially fixate the same spatial location in each condition, corresponding to the geometric center of a face stimulus. This leads to differences in the facial features that are viewed, depending on their relative distance to the face center. Both eyes and the nose in frontal faces and the innermost eye in mid-profile faces are inherently fixated at this stage due to their close proximity to the geometric center of a face. In contrast, the same features are not fixated in profile faces at this point, as a result of their more peripheral location in this particular face view. 
In addition, the results show a second pattern that takes hold after the center-of-gravity effect, whereby eye fixations are directed to specific facial features. Thus, both eyes and the nose received a similar proportion of fixations between 500 msec and the end of the trial in frontal faces. Overall, however, the total eye region (left eye and right eye) accumulated more fixations than the nose in this face view. This pattern converges with the four non-frontal conditions, in which the innermost eye in mid-profile faces and the only visible eye in profile faces received the majority of fixations throughout the trial interval, with the nose being the second-most fixated feature. In addition, the mouth was also fixated consistently, albeit to an even lesser extent than the nose, in all of these conditions. These findings demonstrate that observers sampled more facial features under free viewing than in the gender categorization task of Experiment 1. More importantly, these results suggest that, once the center-of-gravity effect is overcome, observers are generally interested in the same facial features across different viewpoints. We return to a full discussion of these findings in the General discussion section. 
General discussion
This study explored how faces are perceived across different viewpoints, by measuring eye movements during the viewing of frontal, mid-profile and profile faces. Eye movements were recorded during a gender categorization task in Experiment 1, to examine participants' visual scanning behavior under confined task demands, and with a free-viewing paradigm in Experiment 2, to measure spontaneous eye movements during face viewing. In both experiments, two distinct effects influenced eye fixations to faces. During the early stages of a trial, in the period immediately after face onset, observers inherently fixated the geometric center of a stimulus. As a consequence, the eye regions and the nose were predominantly fixated in frontal faces, due to their central position within a face. Likewise, observers initially fixated the innermost facial feature in mid-profile faces, which corresponds to the right eye in mid-profile left faces and the left eye in mid-profile right faces. However, the tendency to initially fixate the geometric center of a stimulus was most striking in the profile face conditions. Here, the majority of initial fixations failed to land on specific facial features, but were confined to the region between the eye and the ear (see Figures 2 and 6). 
We suggest that this pattern arises from the center-of-gravity effect, or global effect, in saccade programming, whereby initial saccades are drawn to the central region of a target configuration (see, e.g., Findlay, 1982; Findlay & Gilchrist, 1997). In our study, this central bias was found in both experiments, suggesting that it was unaffected by the different task demands. Moreover, in contrast to previous eye tracking studies with faces, this effect is not an artifact of central stimulus presentation, as faces appeared randomly at one of four possible onscreen locations in the present experiments (see, e.g., Althoff & Cohen, 1999; Henderson et al., 2005). 
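The notion of a geometric face center can be made concrete: under the simple assumption that it corresponds to the centroid of the face silhouette, it is the mean coordinate of all pixels belonging to the face region. The study does not specify such a computation, so the following is only an illustrative sketch.

```python
def centroid(mask):
    """mask: 2D list of 0/1 values marking face pixels in a stimulus.
    Returns (row, col) of the centroid, i.e., the mean pixel coordinate."""
    row_sum = col_sum = n = 0
    for r, line in enumerate(mask):
        for c, v in enumerate(line):
            if v:
                row_sum += r
                col_sum += c
                n += 1
    return (row_sum / n, col_sum / n)
```

On this view, the centroid of a profile face falls between the eye and the ear, which is where the initial fixations in the profile conditions landed.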
A second effect emerges after the center-of-gravity bias has exerted its initial influence, and appears to be driven by observers' attention to specific facial features. In Experiment 1, this was manifested in a consistent interest in the eye regions across all of the face views. This advantage is remarkable given that the appearance of the eyes varied greatly across viewpoints. For example, whereas the eyes in frontal faces provide scleral contrast on either side of a round pupil, in profile faces the pupil appears as a flat elliptical shape with one-sided scleral contrast (see Figure 1). This indicates that observers' viewing patterns were strongly influenced by the features of a face, and particularly the eyes, but suggests that this behavior may not depend on the specific visual signature of this facial feature. 
In Experiment 1, a significant proportion of fixations was also directed at the nose in frontal faces. This effect was not replicated in the mid-profile and profile conditions, despite the fact that this feature was also clearly visible in these face views, which suggests that the nose is not of crucial interest for making a gender decision. Indeed, it is possible that the nose is fixated inadvertently in frontal faces, because of its position between the eye regions. It is also noteworthy that viewpoint had little effect on the speed and accuracy with which gender decisions were made, particularly considering that fixations to the region of most interest, the eyes, were delayed in the profile conditions (see Figure 4). One possible explanation for this finding is that faces contain a range of salient gender cues that are available independent of which face view is seen and which part of a face is fixated. Alternatively, gender could be derived from holistic facial information, leaving little benefit in relying on individual features in this task (see Baudouin & Humphreys, 2006). These explanations are phenomenologically distinct but converge on the idea that fixations to specific facial features may not be necessary for making a gender decision. This notion receives some support from distractor interference tasks, in which gender information is obtained reliably from unattended faces (Bindemann et al., 2005). And yet, eye movements were clearly not made at random in Experiment 1, but clustered consistently around the eye regions. 
Experiment 2 provides further evidence that observers targeted specific facial features after the center-of-gravity effect. The aim of the free-viewing task was to provide greater freedom to look at the face stimuli, rather than confining viewing behavior through specific task demands. In line with this aim, observers now fixated the nose and mouth region in all of the viewpoints, and also the more peripheral eye in mid-profile faces (the left eye in mid-profile left faces, and the right eye in mid-profile right faces). This demonstrates that fixations are not generally confined to the eye regions during the viewing of mid-profile and profile faces, as was the case in Experiment 1, but that observers will look at other facial features if allowed to do so by the task demands. Nevertheless, Experiment 2 shows that observers were mostly interested in the eye regions, followed by the nose, and then the mouth in all of the conditions. This suggests that observers are interested in the same visual features, and to a similar extent, across different face views (see Figure 8). Thus, although the center-of-gravity effect induces distinct qualitative shifts in the face regions that are sampled in different viewpoints shortly after face onset, the results also show feature-driven similarities in the manner in which different viewpoints are perceived once the center-of-gravity effect has been overcome. 
Some aspects of our findings also hint at interactions between the location and the type of facial feature that is fixated. In frontal faces, for example, both eyes were fixated to a similar extent in both experiments. In mid-profile faces, on the other hand, the innermost eye received the majority of fixations, whereas the peripherally located eye received less than 10% of fixations at any point in time in both experiments. This is perhaps surprising, considering that both eyes are clearly visible in frontal and mid-profile faces. Moreover, this pattern holds when the eye regions in mid-profile faces are normalized for differences in surface area, and is found despite the fact that the peripheral eye was closer to the central fixation dot, which participants were fixating immediately prior to the onset of a face, on half of all trials. This suggests that, when both eyes are visible in a face, the particular eye region that observers look at does not depend on the distance of an eye from the central fixation point, but on the distance of an eye from the face center. In line with this reasoning, a small but significant proportion of fixations also fell on the ears in profile faces during the later trial stages in Experiment 2. This differs from the frontal and mid-profile conditions, in which observers showed virtually no interest in the ear regions. Taken together, these findings suggest that the facial features that are fixated in different face views also depend in part on their proximity to the face's center, independent of the initial center-of-gravity bias. 
Overall, however, the eye regions received the majority of fixations throughout the trial interval in all of the experimental conditions. This continuous concentration of fixations on the eye regions is interesting given the relatively long trial duration and considering that participants were encouraged to fixate any aspect of a face for as long as was deemed necessary according to their own preferences. As the eyes generally attracted a majority of fixations during the early stages of a trial, it would have been plausible for this to be followed by a period in which the eyes were avoided, to encourage the sampling of other face regions (see, e.g., Klein, 2000). Such a reduction in interest in the eye regions was observed between 400 and 1000 msec after stimulus onset in the mid-profile and profile conditions in the free-viewing task, but the eyes were still persistently the most fixated feature throughout this study, both across different viewpoints and across time. One way to account for this finding could be to appeal to the special status that the eyes hold for human social cognition (see, e.g., Baron-Cohen, Campbell, Karmiloff-Smith, Grant, & Walker, 1995; Kleinke, 1986; Langton, Watt, & Bruce, 2000), or perhaps a combination of this status and the central position of the eyes in a face. Alternatively, a more adventurous explanation could be that information processing at the eye region is particularly dependent on the high retinal acuity and resolution of foveal vision, and therefore requires direct fixation. In line with this reasoning, some recent studies suggest that gender and identity information is readily accessible from unattended faces (Bindemann et al., 2005; Bindemann, Jenkins, & Burton, 2007; Jenkins, Lavie, & Driver, 2003), whereas observers are not sensitive to finer-scale information from the eyes, such as gaze direction, outside the focus of attention (Burton, Bindemann, Langton, Schweinberger, & Jenkins, in press). This is consistent with the idea that the eyes may need to be fixated more directly than other facial features in order to obtain visual information from this particular region. 
At present, we can only speculate about these issues. The main finding of our study is that initial saccades to faces land on the geometric center of a face. This is consistent with the center-of-gravity effect in visual processing (see, e.g., Coren & Hoenig, 1972; Findlay, 1982), but has so far received little attention in the face domain. In the experiments presented here, this effect becomes apparent because faces were presented off-center and in different viewpoints. Importantly, we show that this affects the features that are initially looked at in a face, whereas subsequent eye fixations appear to be driven by the specific facial features in which observers take an interest. Further work is needed to clarify how the center-of-gravity effect affects other face tasks, but our results suggest, perhaps surprisingly, that it does not affect the ability to make gender decisions. We have illustrated these findings by providing the most detailed account of eye movements to faces to date, and the only comparison of different face views. 
Acknowledgments
This work was supported by an ESRC grant (RES-062-23-0389) to Mike Burton and Markus Bindemann. We are grateful to two anonymous reviewers for their helpful comments. 
Commercial relationships: none. 
Corresponding author: Markus Bindemann. 
Email: m.bindemann@psy.gla.ac.uk. 
Address: Department of Psychology, University of Glasgow, G12 8QQ, UK. 
References
Althoff, R. R. Cohen, N. J. (1999). Eye-movement-based memory effect: A reprocessing effect in face perception. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 997–1010. [PubMed] [CrossRef] [PubMed]
Baddeley, A. Woodhead, M. (1983). Improving face recognition ability. In S. Lloyd-Bostock & B. Clifford (Eds.), Evaluating witness evidence (pp. 125–136). Chichester: Wiley.
Baron-Cohen, S. Campbell, R. Karmiloff-Smith, A. Grant, J. Walker, J. (1995). Are children with autism blind to the mentalistic significance of the eyes? British Journal of Developmental Psychology, 13, 379–398. [CrossRef]
Baudouin, J. Y. Humphreys, G. W. (2006). Configural information in gender categorisation. Perception, 35, 531–540. [PubMed] [CrossRef] [PubMed]
Bindemann, M. Burton, A. M. Jenkins, R. (2005). Capacity limits for face processing. Cognition, 98, 177–197. [PubMed] [CrossRef] [PubMed]
Bindemann, M. Jenkins, R. Burton, A. M. (2007). A bottleneck in face identification: Repetition priming from flanker images. Experimental Psychology, 54, 192–201. [PubMed] [CrossRef] [PubMed]
Blais, C. Jack, R. E. Scheepers, C. Fiset, D. Caldara, R. (2008). Culture shapes how we look at faces. PLoS ONE, 3.
Bruce, V. Young, A. W. (1998). In the eye of the beholder: The science of face perception. Oxford: Oxford University Press.
Burke, D. Taubert, J. Higman, T. (2007). Are face representations viewpoint dependent? A stereo advantage for generalizing across different views of faces. Vision Research, 47, 2164–2169. [PubMed] [CrossRef] [PubMed]
Burt, D. M. Perrett, D. I. (1997). Perceptual asymmetries in judgements of facial attractiveness, age, gender, speech and expression. Neuropsychologia, 35, 685–693. [PubMed] [CrossRef] [PubMed]
Burton, A. M. Bindemann, M. Langton, S. R. H. Schweinberger, S. R. Jenkins, R. (in press). Gaze perception requires focused attention: Evidence from an interference task. Journal of Experimental Psychology: Human Perception and Performance.
Burton, A. M. Wilson, S. Cowan, M. Bruce, V. (1999). Face recognition in poor-quality video: Evidence from security surveillance. Psychological Science, 10, 243–248. [CrossRef]
Butler, S. Gilchrist, I. D. Burt, D. M. Perrett, D. I. Jones, E. Harvey, M. (2005). Are the perceptual biases found in chimeric face processing reflected in eye-movement patterns? Neuropsychologia, 43, 52–59. [PubMed] [CrossRef] [PubMed]
Coren, S. Hoenig, P. (1972). Effect of non-target stimuli upon length of voluntary saccades. Perceptual and Motor Skills, 34, 499–508. [PubMed] [CrossRef] [PubMed]
Ellis, H. D. Young, A. W. Quayle, A. H. De Pauw, K. W. (1997). Reduced autonomic response to faces in Capgras delusion. Proceedings of the Royal Society B: Biological Sciences, 264, 1085–1092. [PubMed] [Article] [CrossRef]
Farkas, L. G. (1994). Anthropometry of the head and face. New York: Raven Press.
Fasel, I. Fortenberry, B. Movellan, J. (2005). A generative framework for real time object detection and classification. Computer Vision and Image Understanding, 98, 182–210. [CrossRef]
Findlay, J. M. (1981). Local and global influences on saccadic eye movements. In D. E. Fisher, R. A. Monty, & J. W. Senders (Eds.), Eye movements: Cognition and visual perception. Hillsdale, NJ: Lawrence Erlbaum.
Findlay, J. M. (1982). Global visual processing for saccadic eye movements. Vision Research, 22, 1033–1045. [PubMed] [CrossRef] [PubMed]
Findlay, J. M. Gilchrist, I. D. (1997). Spatial scale and saccade programming. Perception, 26, 1159–1167. [PubMed] [CrossRef] [PubMed]
Fletcher-Watson, S. Findlay, J. M. Leekam, S. R. Benson, V. (2008). Rapid detection of person information in a naturalistic scene. Perception, 37, 571–583. [PubMed] [CrossRef] [PubMed]
Haith, M. M. Bergman, T. Moore, M. J. (1977). Eye contact and face scanning in early infancy. Science, 198, 853–855. [PubMed] [CrossRef] [PubMed]
Hancock, P. J. B. Bruce, V. Burton, A. M. (2000). Recognition of unfamiliar faces. Trends in Cognitive Sciences, 4, 330–337. [PubMed] [CrossRef] [PubMed]
He, P. Y. Kowler, E. (1989). The role of location probability in the programming of saccades: Implications for ‘center-of-gravity’ tendencies. Vision Research, 29, 1165–1181. [PubMed] [CrossRef] [PubMed]
Heisz, J. J. Shore, D. I. (2008). More efficient scanning for familiar faces. Journal of Vision, 8, (1):9, 1–10, http://journalofvision.org/8/1/9/, doi:10.1167/8.1.9. [PubMed] [Article] [CrossRef] [PubMed]
Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7, 498–504. [PubMed] [CrossRef] [PubMed]
Henderson, J. M. (2007). Regarding scenes. Current Directions in Psychological Science, 16, 219–222. [CrossRef]
Henderson, J. M. Williams, C. C. Falk, R. J. (2005). Eye movements are functional during face learning. Memory & Cognition, 33, 98–106. [PubMed] [CrossRef] [PubMed]
Janik, S. W. Wellens, A. R. Goldberg, M. L. Dell'Osso, L. F. (1978). Eyes as the center of focus in the visual examination of human faces. Perceptual and Motor Skills, 47, 857–858. [PubMed] [CrossRef] [PubMed]
Jeffery, L. Rhodes, G. Busey, T. (2006). View-specific coding of face shape. Psychological Science, 17, 501–505. [PubMed] [CrossRef] [PubMed]
Jenkins, R. Burton, A. M. (2008a). Limitations in facial identification. Justice of the Peace, 172, 4–6.
Jenkins, R. Burton, A. M. (2008b). 100% accuracy in automatic face recognition. Science, 319, 435. [CrossRef]
Jenkins, R. Lavie, N. Driver, J. (2003). Ignoring famous faces: Category-specific dilution of distractor interference. Perception & Psychophysics, 65, 298–309. [PubMed] [CrossRef] [PubMed]
Klein, R. M. (2000). Inhibition of return. Trends in Cognitive Sciences, 4, 138–147. [PubMed] [CrossRef] [PubMed]
Kleinke, C. L. (1986). Gaze and eye contact: A research review. Psychological Bulletin, 100, 78–100. [PubMed] [CrossRef] [PubMed]
Langton, S. R. H. Watt, R. J. Bruce, V. (2000). Do the eyes have it? Cues to the direction of social attention. Trends in Cognitive Sciences, 4, 50–58. [PubMed] [CrossRef] [PubMed]
Le Gal, P. M. Bruce, V. (2002). Evaluating the independence of sex and expression judgments of faces. Perception & Psychophysics, 64, 230–243. [PubMed] [CrossRef] [PubMed]
Lewis, M. B. Edmonds, A. J. (2003). Face detection: Mapping human performance. Perception, 32, 903–920. [PubMed] [CrossRef] [PubMed]
Li, S. Z. Zhang, Z. Q. (2004). FloatBoost learning and statistical face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26, 1112–1123. [PubMed] [CrossRef] [PubMed]
Liu, C. H. Chaudhuri, A. (2002). Reassessing the 3/4 view effect in face recognition. Cognition, 83, 31–48. [PubMed] [CrossRef] [PubMed]
Luria, S. M. Strauss, M. S. (1978). Comparison of eye movements over faces in photographic positives and negatives. Perception, 7, 349–358. [PubMed] [CrossRef] [PubMed]
Megreya, A. M. Burton, A. M. (2006). Unfamiliar faces are not faces: Evidence from a matching task. Memory & Cognition, 34, 865–876. [PubMed] [CrossRef] [PubMed]
Morrison, D. J. Bruce, V. Burton, A. M. (2001). Understanding provoked overt recognition in prosopagnosia. Visual Cognition, 8, 47–65. [CrossRef]
Moscovitch, M. Winocur, G. Behrmann, M. (1997). What is special about face recognition? Nineteen experiments on a person with visual object agnosia and dyslexia but normal face recognition. Journal of Cognitive Neuroscience, 9, 555–604. [CrossRef] [PubMed]
Newell, F. N. Chiroro, P. Valentine, T. (1999). Recognizing unfamiliar faces: The effects of distinctiveness and view. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 52, 509–534. [PubMed] [CrossRef]
O'Toole, A. J. Edelman, S. Bülthoff, H. H. (1998). Stimulus‐specific effects in face recognition over changes in viewpoint. Vision Research, 38, 2351–2363. [PubMed] [CrossRef] [PubMed]
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422. [PubMed] [CrossRef] [PubMed]
Rayner, K. Pollatsek, A. (1989). The psychology of reading. Englewood Cliffs, NJ: Prentice-Hall.
Sinha, P. Balas, B. Ostrovsky, Y. Russell, R. (2006). Face recognition by humans: Nineteen results all computer vision researchers should know about. Proceedings of the IEEE, 94, 1948–1962. [CrossRef]
Viola, P. Jones, M. (2004). Robust real-time object detection. International Journal of Computer Vision, 57, 137–154. [CrossRef]
Walker-Smith, G. J. Gale, A. G. Findlay, J. M. (1977). Eye movement strategies involved in face perception. Perception, 6, 313–326. [PubMed] [CrossRef] [PubMed]
Figure 1
 
An example of the faces used in Experiment 1. Faces were presented in (from left to right) profile left, mid-profile left, frontal, mid-profile right, and profile right pose.
Figure 2
 
An illustration of the dispersion of fixations as a function of experimental condition across three time intervals, spanning 0–250 msec, 250–500 msec and 500–1000 msec from face onset. For illustration purposes, the fixations are superimposed on an example face from Experiment 1.
Figure 3
 
An illustration of the regions of interest used in the analysis (left eye, right eye, nose, mouth, left ear, right ear, hair, face-other). Note that the color coding corresponds to Figures 4 and 5.
Figure 4
 
The percentage of first fixations (top figure) and the area-normalized first fixation scores (bottom figure) to each ROI for the five face conditions in Experiment 1.
Figure 5
 
The percentage fixations (left column) and area-normalized fixation scores (right column) for the five face views as a function of time in Experiment 1. Normalized scores with filled circles indicate ROIs that are fixated above chance ( p < 0.05).