Article  |   February 2015
Gaze direction affects the magnitude of face identity aftereffects
Nadine Kloth, Linda Jeffery, Gillian Rhodes
Journal of Vision February 2015, Vol.15, 22. doi:https://doi.org/10.1167/15.2.22
Citation: Nadine Kloth, Linda Jeffery, Gillian Rhodes; Gaze direction affects the magnitude of face identity aftereffects. Journal of Vision 2015;15(2):22. https://doi.org/10.1167/15.2.22.

Abstract

The face perception system partly owes its efficiency to adaptive mechanisms that constantly recalibrate face coding to our current diet of faces. Moreover, faces that are better attended produce more adaptation. Here, we investigated whether the social cues conveyed by a face can influence the amount of adaptation that face induces. We compared the magnitude of face identity aftereffects induced by adaptors with direct and averted gazes. We reasoned that faces conveying direct gaze may be more engaging and better attended and thus produce larger aftereffects than those with averted gaze. Using an adaptation duration of 5 s, we found that aftereffects for adaptors with direct and averted gazes did not differ (Experiment 1). However, when processing demands were increased by reducing adaptation duration to 1 s, we found that gaze direction did affect the magnitude of the aftereffect, but in an unexpected direction: Aftereffects were larger for adaptors with averted rather than direct gaze (Experiment 2). Eye tracking revealed that differences in looking time to the faces between the two gaze directions could not account for these findings. Subsequent ratings of the stimuli (Experiment 3) showed that adaptors with averted gaze were actually perceived as more expressive and interesting than adaptors with direct gaze. Therefore it appears that the averted-gaze faces were more engaging and better attended, leading to larger aftereffects. Overall, our results suggest that naturally occurring facial signals can modulate the adaptive impact a face exerts on our perceptual system. Specifically, the faces that we perceive as most interesting also appear to calibrate the organization of our perceptual system most strongly.

Introduction
Faces contain important information about a person's identity, sex, ethnicity, approximate age, emotional state, and current focus of attention (Calder, Rhodes, Johnson, & Haxby, 2011). Being able to quickly and accurately perceive such information is essential for successful social interactions. Considering the capacity limitations of the visual system, the efficiency with which it processes the social cues from the many different faces we encounter is quite remarkable. Past research suggests that this efficiency is partly owed to mechanisms like attention and perceptual adaptation. On the one hand, attention focuses available resources on the processing of faces (Bindemann, Burton, Hooge, Jenkins, & De Haan, 2005; Langton, Law, Burton, & Schweinberger, 2008; for a review, see Palermo & Rhodes, 2007). On the other hand, perceptual adaptation flexibly adjusts the neural response characteristics of the visual system to the specific properties of currently perceived faces (for a recent review, see Webster, 2011). In this study, we investigated possible combined effects of attention and face adaptation, asking whether direct gaze, an attentionally engaging social signal in the face, enhances face adaptation. 
Perceptual adaptation constantly recalibrates the response properties of the face perception system to its current diet of faces (for a review, see Webster & MacLeod, 2011). This neural fine-tuning continuously adjusts which facial properties are perceived as normal and enables the system to distinguish even subtle deviations from this norm (Leopold, O'Toole, Vetter, & Blanz, 2001; Rhodes & Jeffery, 2006). Adaptive recalibration has been found to affect the perception of all key facial attributes, including face identity (Leopold et al., 2001; Rhodes & Jeffery, 2006), sex (Bestelmeyer et al., 2008; Kovacs et al., 2006; Pond et al., 2013; Webster, Kaping, Mizokami, & Duhamel, 2004), age (Schweinberger et al., 2010), eye gaze direction (Jenkins, Beaver, & Calder, 2006; Schweinberger, Kloth, & Jenkins, 2007; Seyama & Nagayama, 2006), viewpoint (Fang & He, 2005; Fang, Ijichi, & He, 2007), ethnicity and emotional expression (Webster et al., 2004), and even attractiveness (Hayn-Leichsenring, Kloth, Schweinberger, & Redies, 2013; Rhodes, Jeffery, Watson, Clifford, & Nakayama, 2003). 
Several lines of evidence point towards a functional role of adaptation in face perception. Face adaptation is reduced in populations with impaired face recognition (Fiorentini, Gray, Rhodes, Jeffery, & Pellicano, 2012; Palermo, Rivolta, Wilson, & Jeffery, 2011; Pellicano, Jeffery, Burr, & Rhodes, 2007; Pellicano, Rhodes, & Calder, 2013). In typical populations, individual differences in face recognition ability are positively correlated with the magnitude of aftereffects induced by face adaptation (Dennett, McKone, Edwards, & Susilo, 2012; Rhodes, Jeffery, Taylor, Hayward, & Ewing, 2014). Finally, face discrimination can be more accurate around an average than a nonaverage face (Armann, Jeffery, Calder, & Rhodes, 2011; Wilson, Loffler, & Wilkinson, 2002), suggesting that calibrating one's perceptual norm to the population average facilitates discrimination (Rhodes, Watson, Jeffery, & Clifford, 2010). 
A recent study has indicated that face adaptation is modulated by attention. Rhodes et al. (2011) showed that face identity aftereffects and face-distortion aftereffects were enhanced when task demands required increased levels of attention to the adapting faces. Specifically, they compared the size of aftereffects when participants had to respond to subtle changes in luminance levels within the eye and mouth regions of the adaptors (attention condition) to aftereffects induced in a standard paradigm with passive viewing of the adaptors. They found that both face identity aftereffects and face-distortion aftereffects were significantly larger in the attention condition than in the passive viewing condition, indicating that enhanced attention to the adaptors amplifies face aftereffects. This finding raises the intriguing possibility that factors which affect attention to faces in everyday situations may also influence our perceptual recalibration processes, so that not all faces have equal impact. 
Here we ask whether face adaptation can be modulated by naturally occurring social signals within the face that are known to affect participants' attention. Gaze direction may be a likely candidate for such a social cue, considering that it is very powerful in attracting and guiding the observer's attention (for a review, see Langton, Watt, & Bruce, 2000). Direct gaze signals another person's interest and approach-motivation (Adams & Kleck, 2003), and faces with direct gaze attract and hold our attention more strongly than those with averted gaze (Senju & Hasegawa, 2005; vonGrunau & Anston, 1995; Yokoyama, Ishibashi, Hongoh, & Kita, 2011). Other facial signals are also more easily processed when presented together with (task-irrelevant) direct than averted gaze. For instance, participants are faster to categorize faces according to sex (Macrae, Hood, Milne, Rowe, & Mason, 2002; Pellicano & Macrae, 2009; but see Vuilleumier, George, Lister, Armony, & Driver, 2005) and remember them better when they display direct rather than averted gaze (Adams, Pauker, & Weisbuch, 2010). 
Considering its ability to engage our attention and enhance face processing, we predicted that gaze direction would also modulate adaptation to a face. Specifically, we expected that faces with direct gaze would induce larger face identity aftereffects than faces with averted gaze. We tested this prediction in two experiments in which we varied the gaze direction of adapting faces and compared the size of the resulting face identity aftereffects. We also sought to rule out a possible alternative explanation for reduced aftereffects with averted gaze, namely that participants actually look less at the averted-gaze than the direct-gaze adaptors. Just as averted gaze can trigger reflexive attentional shifts away from the face (Driver et al., 1999; Friesen & Kingstone, 1998), it might also shift fixations away from the face. Any such shifts would reduce the time spent looking at the adaptors, which would also reduce the size of the aftereffects (Leopold, Rhodes, Muller, & Jeffery, 2005; Rhodes, Jeffery, Clifford, & Leopold, 2007). 
In Experiment 1, adapting faces were presented for 5 s, a duration commonly used to induce identity aftereffects (Leopold et al., 2001; Rhodes & Jeffery, 2006). In this experiment, we found no effect of gaze direction on the size of the face identity aftereffect. In Experiment 2, we reduced the presentation duration of the adapting faces to 1 s. We reasoned that attentional processes might be more likely to influence adaptation under conditions of limited processing resources (Carrasco, 2011), such as those induced by brief exposures. This second experiment revealed a significant effect of gaze direction on the magnitude of face aftereffects, but in the direction opposite to that predicted. Faces with averted gaze induced larger aftereffects than faces with direct gaze. 
Inspection of the stimuli (see Figure 1) suggested that faces with averted gaze actually looked more expressive and interesting than faces with direct gaze. To help interpret the differences in aftereffects between the two gaze directions observed in Experiment 2, we tested this observation by asking participants to rate the adapting faces for how expressive, interesting, and animate they appeared (Experiment 3). These ratings showed that faces with averted gaze were indeed perceived as more expressive and more interesting than faces with direct gaze. 
Figure 1. Stimulus examples for one of the four target identities. (a) Original 100% identity-strength face in the three gaze direction conditions. (b) 15% identity-strength versions of this identity used as test faces. (c) Antifaces of this identity used as adaptors.
Experiment 1
Methods
Participants
Twenty-six White students from the University of Western Australia (17 female participants, mean age = 21 ± 8.4 years, range = 17–46 years) participated. In all three experiments described in this article, all participants reported normal or corrected-to-normal vision and were naïve to the purposes of the experiment. All participants gave written informed consent prior to the study and received course credit or payment for participation. Experiments were performed in adherence to the Declaration of Helsinki and were approved by the human research ethics committee of the University of Western Australia. 
Stimuli
We selected four easily discriminable target faces with direct gaze from a set of grayscale pictures of 20 young male faces that had been used in earlier studies (e.g., Rhodes & Jeffery, 2006; Rhodes et al., 2011). The face images were standardized for size by readjusting them to an interocular distance of 80 pixels. Using Adobe Photoshop CS3, the eye regions of the four target identities were then edited to produce two additional versions with obviously averted gaze (left and right direction, Figure 1a). 
Abrosoft FantaMorph 5 was used to create an average face based on the 20 identities from the original set. Two additional versions of this average face, one with left and one with right gaze direction, were created. An antiface was created for each target identity and each eye-gaze condition by caricaturing the structure of the average face away from the target face by 100%, keeping the texture of the average face (Figure 1c). In addition to the antifaces, low-identity-strength versions were created for each of the four target identities by morphing them towards the average face, to produce continua of identity strength ranging from the average face (0% identity strength) to each original target face (100% identity strength) at each gaze deviation. All face images were displayed behind a gray oval mask that was flattened over the forehead to occlude the ears and hairline (Figure 1). 
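For readers who want the geometry of this manipulation made explicit, the sketch below treats a face as a simple vector of shape coordinates (a simplification; the actual stimuli were produced with FantaMorph from landmark-based morphs). Identity strength is then linear interpolation between the average and the target, and the antiface is an extrapolation of equal magnitude in the opposite direction. The function and variable names are ours, not the authors'.

```python
import numpy as np

def identity_strength_face(average: np.ndarray, target: np.ndarray, strength: float) -> np.ndarray:
    """Move along the identity trajectory that runs from the average face
    (strength = 0) through the target face (strength = 1).

    strength = 0.15 -> 15% identity-strength test face
    strength = -1.0 -> 100% antiface (structure caricatured away from the target)
    """
    return average + strength * (target - average)

# Toy 2-D "shape" vectors standing in for full sets of landmark coordinates.
average_face = np.array([0.0, 0.0])
dan = np.array([1.0, 2.0])

weak_dan = identity_strength_face(average_face, dan, 0.15)   # low-strength test face
anti_dan = identity_strength_face(average_face, dan, -1.0)   # adaptor (anti-Dan)
```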
The twelve antifaces (4 target identities × 3 gaze directions) were used as adaptor stimuli. The 15% and 80% identity-strength versions of the four target identities with left, direct, and right eye gaze were used as test stimuli, resulting in 24 different test images (Figure 1b). Additionally, the 100% and 30% identity-strength versions of each target identity in each gaze direction condition were used during the identity training that preceded the adapting phase. 
To minimize potential contributions of low-level aftereffects to the high-level identity adaptation effects investigated here, test stimuli were presented at 75% the size of adaptor stimuli. Adaptor stimuli measured 300 × 400 pixels, and test stimuli measured 225 × 300 pixels (at a resolution of 72 pixels/in.), corresponding to approximately 11.4° × 15.1° and 8.6° × 11.3°, respectively, at a viewing distance of about 52 cm, which was ensured using a headrest. 
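As a rough check on the stated stimulus sizes, the following sketch converts pixel dimensions to visual angle using the reported 72 pixels/in. and the 52-cm viewing distance. The small discrepancies from the values reported above presumably reflect rounding and the monitor's actual pixel pitch; the formula itself is standard.

```python
import math

def visual_angle_deg(size_px: float, px_per_inch: float, distance_cm: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of size_px pixels."""
    size_cm = size_px / px_per_inch * 2.54
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Adaptors (300 x 400 px) and test faces (225 x 300 px) viewed from ~52 cm:
for w, h in [(300, 400), (225, 300)]:
    print(f"{w} x {h} px -> "
          f"{visual_angle_deg(w, 72, 52):.1f} x {visual_angle_deg(h, 72, 52):.1f} deg")
# Prints roughly 11.6 x 15.5 deg and 8.7 x 11.6 deg, close to the reported values.
```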
Apparatus
Stimuli were presented on a matte 27-in. iMac LED screen with a resolution of 1920 × 1080. The experimental procedure was programmed using SR Research Experiment Builder. Participants' responses were recorded on a standard computer keyboard. During the adaptation experiment, participants' eye movements were measured using an SR Research EyeLink 1000 eye-tracking system with a headrest setup and a sampling rate of 1000 Hz. 
Procedure
Training:
Participants were first trained to recognize the four target faces that they would be required to identify in the adaptation task. They subsequently received practice in identifying low-strength versions of these faces to ensure they would know how to respond in the adaptation task that required identification of primarily low-strength faces. The training consisted of five phases that were similar to previous studies (cf. Rhodes & Jeffery, 2006; Rhodes et al., 2011), slightly modified to ensure that participants were exposed to all gaze directions. In the initial phase, participants were familiarized with the four target identities and their names. Identities were presented separately and in random order, with pictures of the three different gaze directions presented simultaneously side by side. After this introduction, participants were asked to identify the four faces in a series of four recognition phases of increasing difficulty. In the first recognition phase, each of the four identities was presented six times in random order for an unlimited presentation duration. Participants were asked to indicate the identity of the face by pressing one of four labeled keys and received feedback after their response. The second recognition phase was similar to the first one, but faces were presented for 500 ms only rather than for an unlimited time. 
After the second recognition phase, 30% identity-strength versions of the original identities were introduced as the “brothers” of the original identities. In the third recognition phase, each of the four 30% identity-strength versions was presented six times in random order for an unlimited duration. Participants were asked to indicate the identity of the face per button press and received feedback after their response. The fourth recognition phase was similar to the third one, but faces were presented for 500 ms only rather than an unlimited time. Whenever necessary, training was repeated until participants reached a criterion of no more than eight errors out of the 24 trials in each recognition phase before they could move on to the main experiment. Out of the 26 tested participants, four failed to reach this criterion in one of the recognition phases and had to repeat it, and one additional participant had to repeat two phases. 
Adaptation:
At the beginning of each adaptation trial, an antiface was presented as an adaptor for 5000 ms. Participants were instructed to look at the adaptor faces for the whole time they were presented, to avoid missing the presentation of the following face. After an interstimulus interval of 150 ms, a 15% identity-strength test face was presented for 200 ms, followed by a blank screen during which participants indicated the identity of the test face by pressing one of four buttons that were labeled with the names of the target identities. After the response, the next trial was initiated automatically. 
Following earlier research on face-identity adaptation, our experimental design included two kinds of adaptation trials that allowed us to calculate face-identity aftereffects: match trials and mismatch trials (e.g., Fiorentini et al., 2012; Rhodes & Jeffery, 2006; Rhodes et al., 2011). In match trials, the adaptor face and the test face lie on the same identity trajectory. For instance, anti-Dan is presented as the adaptor and a 15% identity-strength version of Dan is presented as the test face. In mismatch trials, adaptor and test faces lie on different identity trajectories. For instance, anti-Ted is presented as the adaptor and the 15% identity-strength version of Dan is presented as the test face. Because adaptation biases participants to perceive the opposite of what they have been exposed to, adapting to anti-Dan should bias participants to perceive Dan in a subsequently presented face, whereas adapting to anti-Ted should bias participants to perceive Ted. Consequently, they should be more likely to correctly identify the 15% identity-strength version of Dan when it is preceded by anti-Dan rather than anti-Ted. Following this rationale, we quantified the face identity aftereffect by subtracting the proportion of correct identifications (e.g., of Dan) in mismatch trials from the proportion of correct identifications in match trials (cf. Fiorentini et al., 2012; Rhodes & Jeffery, 2006; Rhodes et al., 2011). We did not include an additional no-adaptation baseline, in which test faces are presented without any preceding adaptor faces. Therefore, our aftereffects do not reflect the absolute size of the shift in perception relative to an unadapted state. However, the magnitude of identity aftereffects established relative to a no-adaptation baseline correlates highly with aftereffects derived from subtracting performance in match trials and mismatch trials (Rhodes, Evangelista, & Jeffery, 2009). Importantly, by measuring aftereffects as the difference in performance between two adaptation conditions, in which perception is biased in different directions, we ensure that the resulting aftereffects are uncontaminated by differences in attentional load, participants' motivation, or the amount of generic face adaptation that occurs when adaptation is measured relative to a no-adaptation baseline (cf. Kloth & Schweinberger, 2010; Kloth, Schweinberger, & Kovacs, 2010). 
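In code, this aftereffect measure amounts to a simple difference of conditional accuracies. The sketch below assumes a long-format trial table with hypothetical column names; it is not the authors' analysis script.

```python
import pandas as pd

def identity_aftereffect(trials: pd.DataFrame) -> pd.Series:
    """Proportion correct on match trials minus proportion correct on mismatch
    trials, computed separately for each adaptor gaze condition.

    Assumes one row per 15% identity-strength test trial with columns
    'gaze' ('direct'/'averted'), 'trial_type' ('match'/'mismatch'), and
    'correct' (0/1). Column names are illustrative, not the authors'.
    """
    accuracy = (trials.groupby(['gaze', 'trial_type'])['correct']
                      .mean()
                      .unstack('trial_type'))
    return accuracy['match'] - accuracy['mismatch']
```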
We also varied the gaze direction of the adaptor stimuli. In half of the trials, adaptor faces displayed direct gaze; in the other half of the trials, adaptor faces displayed averted gaze (half left, half right). The gaze direction of the test face was always identical to that of the adaptor face. 
We presented 128 experimental trials, resulting from the combination of the experimental factors trial type (match, mismatch) and gaze direction (direct, averted) for the four test face identities, with eight repetitions per condition. An additional 64 trials with test faces that were easy to identify (80% identity strength) were included to help maintain participants' motivation and were not used to calculate aftereffects. There were four repetitions of each of the factor combinations for each test identity with 80% identity strength. Participants performed well on these trials (accuracy: M = 0.81, SEM = 0.02), confirming that they had successfully learned the test identities and remained capable of identifying them throughout the experiment. 
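A balanced trial list matching the numbers given above can be generated as follows. Identity labels and factor names are placeholders; the split of averted-gaze trials into equal left and right halves follows the description above.

```python
import itertools
import random

IDENTITIES = ['Dan', 'Ted', 'Jim', 'Rob']     # hypothetical labels for the four targets
TRIAL_TYPES = ['match', 'mismatch']

def build_trials(identity_strength: float, reps_direct: int) -> list[dict]:
    """Direct-gaze trials get reps_direct repetitions per cell; averted-gaze
    trials are split evenly between left and right gaze."""
    trials = []
    for identity, trial_type in itertools.product(IDENTITIES, TRIAL_TYPES):
        trials += [dict(identity=identity, trial_type=trial_type,
                        gaze='direct', strength=identity_strength)] * reps_direct
        for side in ('left', 'right'):
            trials += [dict(identity=identity, trial_type=trial_type,
                            gaze=side, strength=identity_strength)] * (reps_direct // 2)
    return trials

trial_list = build_trials(0.15, 8) + build_trials(0.80, 4)   # 128 + 64 trials
random.shuffle(trial_list)
assert len(trial_list) == 192
```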
Eye movements were recorded during the adaptation phase. To ensure a high accuracy of these recordings, the experiment was preceded by a nine-point calibration and validation procedure of the eye tracker. Calibration was repeated if the maximum error was larger than 1°. An additional drift check was performed at the center of the screen after every eight trials. If this drift check indicated a loss in tracking accuracy, the full nine-point calibration and validation procedure was repeated before the next block of eight trials was presented. 
Results and discussion
The face identity aftereffect was calculated by subtracting identification accuracy in mismatch trials from identification accuracy in match trials (cf. Fiorentini et al., 2012; Rhodes & Jeffery, 2006; Rhodes et al., 2011). A paired-samples t test showed no significant difference between the sizes of the face-identity aftereffects induced by adaptors with direct gaze and averted gaze, t(25) = 0.83, p = 0.41, d = 0.16 (Figure 2a). Additional one-sample t tests confirmed that adaptors with direct and averted gaze both induced significant aftereffects, t(25) = 10.03, p < 0.001, d = 3.93, and t(25) = 10.40, p < 0.001, d = 4.08, respectively. To conclude, our prediction that adaptors with direct gaze would evoke larger aftereffects than adaptors with averted gaze was not supported. 
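The statistics reported here can be reproduced from per-participant aftereffect scores with standard routines; a minimal sketch using SciPy and hypothetical array names follows.

```python
import numpy as np
from scipy import stats

def analyse_aftereffects(direct: np.ndarray, averted: np.ndarray) -> None:
    """direct/averted: one aftereffect score per participant (equal length)."""
    n = len(direct)
    t, p = stats.ttest_rel(direct, averted)            # paired comparison of gaze conditions
    diff = direct - averted
    d = diff.mean() / diff.std(ddof=1)                 # Cohen's d for paired data
    print(f"direct vs. averted: t({n - 1}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
    for label, scores in (('direct', direct), ('averted', averted)):
        t1, p1 = stats.ttest_1samp(scores, 0.0)        # is the aftereffect reliably above zero?
        print(f"{label} vs. zero: t({n - 1}) = {t1:.2f}, p = {p1:.3f}")
```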
Figure 2. (a) Size of the face identity aftereffect for adaptors with direct and averted gaze as observed in Experiment 1. (b) Size of the face identity aftereffect for adaptors with direct and averted gaze as observed in Experiment 2.
Eye-tracking data were analyzed to determine whether the gaze direction of the adaptors affected the amount of time participants spent looking at them. To this end, the area of the adaptor face was defined as a single region of interest (ROI), allowing us to determine whether fixations were on or off the face. Participants' dwell-time data were extracted and compared for faces with direct and averted gaze. Planned comparisons revealed that participants spent significantly longer looking at adaptors with direct gaze (M = 4207 ms, SEM = 79) than those with averted gaze (M = 4168 ms, SEM = 84), t(25) = 3.12, p = 0.005, d = 0.61. However, this difference in looking time did not appear to result in larger aftereffects for adaptors with direct than averted gaze. 
An additional, more detailed, analysis was run to compare dwell times to three different regions of interest within the adaptor faces—the eye region, the nose, and the mouth region—between direct- and averted-gaze adaptation conditions. The analysis confirmed that there were no significant differences in dwell time depending on adaptor gaze direction for the eye region, t(25) = 0.72, p = 0.48, d = 0.14, the nose, t(25) = 1.68, p = 0.11, d = 0.33, or the mouth region, t(25) = 0.70, p = 0.44, d = 0.16. 
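The ROI analysis boils down to classifying each fixation by region and summing fixation durations per trial. A sketch of that bookkeeping is given below; the rectangle coordinates and column names are invented for illustration and are not taken from the actual stimulus layout or eye-tracker output.

```python
import pandas as pd

# Illustrative (left, top, right, bottom) boxes in screen pixels.
ROIS = {
    'eyes':  (810, 380, 1110, 470),
    'nose':  (900, 470, 1020, 570),
    'mouth': (860, 570, 1060, 650),
}

def roi_of(x: float, y: float) -> str | None:
    """Return the name of the ROI containing the fixation, if any."""
    for name, (left, top, right, bottom) in ROIS.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

def mean_dwell_times(fixations: pd.DataFrame) -> pd.DataFrame:
    """Mean dwell time (ms) per ROI and gaze condition, per participant.

    Expects one row per fixation with columns 'participant', 'trial', 'gaze',
    'x', 'y', and 'duration' (assumed names).
    """
    labelled = fixations.assign(
        roi=[roi_of(x, y) for x, y in zip(fixations['x'], fixations['y'])]
    ).dropna(subset=['roi'])
    per_trial = labelled.groupby(['participant', 'gaze', 'trial', 'roi'])['duration'].sum()
    return per_trial.groupby(level=['participant', 'gaze', 'roi']).mean().unstack('roi')
```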
Experiment 2
Our hypothesis that faces with direct gaze would induce larger aftereffects than faces with averted gaze was not supported in Experiment 1. Considering that attention is typically conceptualized as being related to limited resources (Carrasco, 2011), it is possible that adaptor gaze direction is more likely to affect adaptation when processing time is limited. We tested this possibility in Experiment 2, in which we reduced the presentation duration of the adapting faces to 1 s to determine whether gaze direction would have an impact on adaptation induced by more briefly presented adaptors. 
Methods
Participants
Twenty-six White students from the University of Western Australia (11 female participants, mean age = 19 ± 2.6 years, range = 17–30 years) participated. None of the participants had taken part in Experiment 1. 
Stimuli
The stimulus set was identical to the one used in Experiment 1. 
Procedure
The experimental design and procedure was identical to that used in Experiment 1 except that adaptors were presented for only 1 s. During the initial identity-learning stage, five out of the 26 participants had to repeat one training stage before they could move on to the next one. 
Results and discussion
A paired-samples t test showed that aftereffects were significantly larger when adaptors had averted gaze than when they had direct gaze, t(25) = 4.58, p < 0.001, d = 0.90 (Figure 2b). Additional one-sample t tests confirmed that adaptors with direct and averted gaze both induced significant aftereffects, t(25) = 4.63, p < 0.001, d = 1.82, and t(25) = 8.48, p < 0.001, d = 3.33, respectively. Our prediction that adaptors with direct gaze would evoke larger aftereffects than adaptors with averted gaze was not supported. Instead, adaptors with averted gaze elicited significantly larger face identity aftereffects than adaptors with direct gaze. The effect size indicates that this is a large effect (Cohen, 1988). 
Eye-tracking data were analyzed to determine whether adaptors with averted gaze were also looked at for longer than adaptors with direct gaze. Unlike in Experiment 1, we found no difference in dwell times for faces with direct (M = 885 ms, SEM = 12) versus averted gaze (M = 886 ms, SEM = 13), t(25) = 0.37, p = 0.72, d = 0.07. 
Separate exploratory ROI analyses comparing dwell times towards the eye, nose, and mouth regions for adaptors with direct and averted gaze revealed no significant differences for the eye region, t(25) = 1.61, p = 0.12, d = 0.32, or mouth region, t(25) = 0.02, p = 0.99, d < 0.01. However, participants spent about 15 ms longer looking at the nose ROI when adaptors had direct gaze (M = 175.5 ms, SEM = 39.5) compared to when they had averted gaze (M = 159.9 ms, SEM = 36.6). This difference was significant, t(25) = 2.35, p = 0.03, d = 0.46, but did not survive Bonferroni correction for the three comparisons, which yields a corrected critical p value of 0.017. 
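For completeness, the corrected threshold is simply the nominal alpha divided by the number of comparisons, as in this small arithmetic check:

```python
alpha = 0.05
n_comparisons = 3                  # eye, nose, and mouth ROIs
p_crit = alpha / n_comparisons     # ~0.017, so the nose effect (p = 0.03) is not significant
print(round(p_crit, 3))
```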
Combined analysis of Experiments 1 and 2
We conducted a combined analysis of the aftereffect data collected in Experiments 1 and 2 to directly test whether the effect of gaze direction on the magnitude of face identity aftereffects differed significantly between adaptation durations. Aftereffect measures were entered into a two-way ANOVA with adaptor gaze direction (direct, averted) as a within-participants factor and adaptation duration (5 s [Experiment 1], 1 s [Experiment 2]) as a between-participants factor. There was a significant main effect of adaptor gaze direction, F(1, 50) = 11.12, p = 0.002, ηp² = 0.18, indicating larger aftereffects for adaptors with averted gaze (M = 0.30, SEM = 0.02) relative to those with direct gaze (M = 0.24, SEM = 0.02). A main effect of adaptation duration, F(1, 50) = 20.74, p < 0.001, ηp² = 0.29, reflected larger aftereffects in the 5-s adaptation condition (M = 0.36, SEM = 0.03) than the 1-s adaptation condition (M = 0.18, SEM = 0.03), a finding that is consistent with earlier research (Leopold et al., 2005; Rhodes et al., 2007). The Gaze direction × Adaptation duration interaction approached significance, F(1, 50) = 3.90, p = 0.054, ηp² = 0.07, consistent with a tendency for gaze direction to affect face adaptation more strongly at the shorter adaptation duration (Figure 2). 
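One way to run the corresponding 2 × 2 mixed ANOVA in Python is sketched below, assuming a long-format table with one row per participant and gaze condition; the column names are placeholders, and pingouin is just one of several packages that will perform this analysis.

```python
import pandas as pd
import pingouin as pg

def combined_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Mixed ANOVA with gaze direction (within) and adaptation duration (between).

    Expects columns 'participant', 'gaze' ('direct'/'averted'),
    'duration' ('1 s'/'5 s'), and 'aftereffect' (assumed names).
    """
    return pg.mixed_anova(data=df, dv='aftereffect', within='gaze',
                          subject='participant', between='duration')
```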
Experiment 3
Experiment 2 revealed that the gaze direction of adaptors affects the size of the aftereffect. However, the direction of influence was opposite to our prediction, with averted-gaze adaptors evoking larger identity aftereffects than adaptors with direct gaze. Surprised by this finding, we inspected the adaptor stimuli more closely, looking for any potential differences in the appearance of the direct- and averted-gaze versions that might help us understand this result. Our subjective impression was that the faces with averted gaze looked more expressive: They seemed to display a certain cheekiness or slyness that was absent in the faces with direct gaze (a phenomenon that readers might experience for themselves when inspecting Figure 1). 
If averted-gaze adaptors look more expressive than direct-gaze faces, then they could engage our face perception system more strongly, and so generate larger aftereffects, much as emotionally expressive faces do relative to neutral faces (cf. Fairhall & Ishai, 2007; E. Fox, Russo, & Dutton, 2002; Mogg & Bradley, 1999; Vuilleumier & Pourtois, 2007). We sought to confirm our observation that the averted-gaze adaptors look more expressive than the direct-gaze adaptors by obtaining ratings of the perceived expressiveness and interest of all the adaptors. For exploratory purposes, we also included ratings of the animacy of the adaptors, since this is another factor that can modulate the engagement of the face perception system (Shultz & McCarthy, 2014). 
Methods
Participants
Seventeen White students from the University of Western Australia (12 female participants, mean age = 24 ± 6.5 years, range = 17–37 years) participated. 
Procedure
In separate blocks, participants rated the 12 antifaces used in Experiments 1 and 2 (4 identities × 3 gaze directions) for how expressive they look, how interesting they look, and how real they look (i.e., animacy rating). Ratings were made in this order, so that the rating of primary interest, perceived expressiveness, could not be influenced by the other ratings. When rating expressiveness, participants were told that the faces varied only subtly in their expressions. The exact wording of the instructions was as follows: 
 

Please rate each face for how expressive you find it on a scale from 1 (not expressive at all) to 7 (very expressive). Before you start, please note that none of the faces will be displaying very obvious expressions. Rather, you could think of this rating as a “poker face task”: The faces vary only very subtly in their expressions and we are interested in how well you are able to pick up on the presence of these subtle expressions. Please go with your gut feeling, respond spontaneously and try to use a large range of the scale.

 
Responses were made using a 7-point Likert scale ranging from 1 (not expressive/interesting/real-looking at all) to 7 (very expressive/interesting/real-looking). Each antiface was presented four times within each block, twice with direct gaze and twice with averted gaze (one left, one right). Each face was presented for 1 s, corresponding to the adaptation duration used in Experiment 2. Within each block, the order in which the 16 trials were presented was randomized. 
Results and discussion
Paired-samples t tests showed that faces with averted gaze were perceived as significantly more expressive, t(16) = 4.61, p < 0.001, d = 1.12, and more interesting, t(16) = 4.41, p < 0.001, d = 1.07, than faces with direct gaze (see Table 1). Interrater reliability for expression and interest ratings was high, Cronbach's α = 0.72 and 0.79, respectively, suggesting that similar ratings could be expected from other observers, such as those who participated in Experiments 1 and 2. Ratings of animacy did not differ between gaze conditions, t(16) = 0.42, p = 0.68, d = 0.10 (Table 1). 
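Cronbach's α for interrater consistency can be computed directly from the stimulus-by-rater matrix of ratings; a small, self-contained sketch under that assumed data layout is given below.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (n_stimuli x n_raters) matrix, treating each
    rater as an 'item'; higher values indicate more consistent ratings."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                                  # number of raters
    item_variances = ratings.var(axis=0, ddof=1).sum()    # summed per-rater variances
    total_variance = ratings.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_variances / total_variance)
```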
Table 1. Mean (± standard error of the mean) ratings of expressiveness, interest, and animacy in response to faces with direct and averted gaze.
Dimension        Direct gaze    Averted gaze
Expressiveness   3.2 (± 0.2)    4.0 (± 0.2)
Interest         3.6 (± 0.2)    4.6 (± 0.2)
Animacy          3.9 (± 0.2)    4.0 (± 0.2)
General discussion
We provide the first evidence that the gaze direction of a face can affect the amount of adaptation it induces. In a face-identity adaptation paradigm, adaptors with averted gaze induced larger aftereffects than adaptors with direct gaze. This finding supports our main hypothesis that naturally occurring facial signals can modulate the amount of adaptive recalibration in the face perception system. However, the direction of the effect was unexpected. Contrary to our specific prediction, faces with averted gaze induced larger aftereffects than faces with direct gaze. Ratings obtained for the adaptor stimuli suggest that this pattern was likely driven by the fact that participants perceived adaptors to be more expressive and more interesting when they had averted gaze than when they had direct gaze, possibly enhancing processing of averted-gaze faces. The effect of gaze direction on face adaptation was particularly pronounced for a shorter adaptation duration (1 s, Experiment 2) and was not apparent when adapting faces were presented for longer (5 s, Experiment 1). 
Our study demonstrates that increased interest due to social signals conveyed by a face can have a substantial effect on the magnitude of adaptation it induces. This finding extends previous work showing that aftereffects are increased by enhanced levels of attention via task demands (Rhodes et al., 2011) and suggests that natural face cues that modulate attentional engagement may also modulate adaptive face coding. We know that adaptation has a functional role in face perception (Dennett et al., 2012; Fiorentini et al., 2012; Rhodes et al., 2014) and that the continuous updating of a perceptual norm helps us to mentally represent and discriminate between large numbers of different faces (Armann et al., 2011; Rhodes et al., 2010; Valentine, 1991). The results of the present study suggest that the faces that surround us do not contribute to this important neural recalibration to the same extent. Rather, faces that we are most interested in appear to engage the perceptual mechanisms of our face perception system particularly strongly. 
Our findings are in line with the results of one earlier study that examined the effect of social signals on face adaptation. Presenting two adaptors simultaneously side by side in a distortion-aftereffect paradigm, Jones, DeBruine, and Little (2008) found that participants selectively adapted to the distortions of the more attractive of the two faces. The pattern disappeared when participants were instructed to attend to the location of the less attractive face, suggesting that the effect was driven by selective attention that systematically biased participants to look more at the attractive than the less attractive face (for related findings, see Leder, Tinio, Fuchs, & Bohrn, 2010). 
Unlike the findings of Jones et al. (2008), the eye-tracking data obtained in Experiments 1 and 2 suggest that the present effect of gaze direction on adaptation was not due to differences in the time participants spent looking at the faces. In Experiment 2, averted-gaze adaptors clearly induced stronger aftereffects than adaptors with direct gaze, but there were no differences in looking time for adaptors with averted and direct gaze. This pattern suggests that the effect of gaze direction on adaptation results from enhanced processing of the averted-gaze adaptors (cf. Fairhall & Ishai, 2007; Vuilleumier & Pourtois, 2007) rather than a mere difference in looking time between adaptors with direct and averted gaze. Conversely, in Experiment 1, adaptor gaze direction significantly affected the time that participants spent looking at adaptor faces, with direct-gaze adaptors being looked at slightly, but significantly, longer than averted-gaze adaptors. In this experiment, however, the gaze direction of the adaptor had no significant effect on the magnitude of the aftereffects it induced. Overall, the aftereffect and eye-tracking data obtained in Experiments 1 and 2 therefore suggest a dissociation between the effects of gaze direction on looking time and on aftereffect sizes between the two experiments. 
The specific effect of gaze direction on face adaptation observed here was unexpected. We initially hypothesized that faces with direct gaze would induce larger aftereffects than faces with averted gaze. This prediction was based on evidence that faces with direct gaze capture our attention (vonGrunau & Anston, 1995) and generally seem to engage the face processing system more deeply than faces with averted gaze (Hood, Macrae, Cole-Davies, & Dias, 2003; Macrae et al., 2002; Mason, Hood, & Macrae, 2004; Senju & Johnson, 2009; Young, Slepian, Wilson, & Hugenberg, 2014). 
In hindsight, however, averted gaze can also be a salient, attention-grabbing signal that plays an important role in several contexts. These include language acquisition, where averted gaze is used to refer to the object that goes along with a certain word (Hirotani, Stets, Striano, & Friederici, 2009; Houston-Price, Plunkett, & Duffy, 2006; Paulus & Fikkert, 2014), and mind reading, where averted gaze is used to infer a person's desires and interests (Langton et al., 2000; Lee, Eskritt, Symons, & Muir, 1998). When a person is directly facing us, as was the case with our stimuli, averted gaze signals an interruption of eye contact and the diversion of attention away from us. In this context, averted gaze might be highly salient and arousing because it can signal disinterest, social rudeness, or the presence of an attention-grabbing object in the environment. 
Importantly, follow-up ratings of the adaptors indicated that faces with averted gaze were perceived as more expressive and interesting than faces with direct gaze. This greater perceived expressiveness might have led to enhanced levels of attention and, consequently, to the increase in aftereffect size relative to adaptors with direct gaze observed in this study. In the case of emotional expressions, there is some intriguing evidence that fearful faces induce stronger face identity aftereffects than angry faces, providing further evidence that the communicative signals conveyed by a face can modulate identity adaptation (C. J. Fox, Oruc, & Barton, 2008). 
The present study clearly demonstrates that variations in gaze direction have the potential to amplify aftereffects. Nevertheless, it is important to consider that the meaning and evaluation of variations in facial signals largely depend on situational circumstances. We therefore consider it unlikely that faces with averted gaze will always induce stronger aftereffects than faces with direct gaze. Instead, it is plausible that under different circumstances, direct gaze direction might be perceived as particularly engaging and therefore amplify the amount of face adaptation. 
We note that left-averted (25% of the trials) and right-averted gaze (25% of the trials) were presented less frequently than direct gaze (50% of the trials), both in the rating task and in the adaptation tasks. It is therefore possible that the greater novelty of left- and right-averted-gaze faces contributed to their being perceived as more interesting than faces with direct gaze. However, averted-gaze faces were also perceived as significantly more expressive, a finding that is unlikely to be explained by differences in presentation frequency; if it were, the difference should not be apparent on simple inspection of Figure 1, where presentation frequency plays no role. 
The effect of gaze direction on the magnitude of face aftereffects was strong for the shorter adaptation duration of 1 s (Experiment 2) but was absent when adaptor faces were presented for longer (5 s, Experiment 1). This finding is in line with the general conceptualization of attention as a process that enhances processing particularly under conditions when resources are limited (Carrasco, 2011). It is plausible that there are no real processing limitations when adaptor faces are presented for an extended period of 5 s, therefore leaving little room for an enhancement of face adaptation induced by variations in social cues. When adaptors are only shown for 1 s, however, processing might be more limited and might profit from a more engaged processing due to enhanced attention. Similarly, aftereffects induced by a 1-s adaptation are known to be much smaller compared to longer adaptation durations (Rhodes et al., 2007), leaving more room for variations in gaze direction to enhance face adaptation. 
Overall, we have shown that a naturally occurring social signal in a face, gaze direction, can affect the amount of adaptation induced by the face—at least for shorter adaptation durations. Faces with averted gaze induced larger face identity aftereffects than faces with direct gaze. Ratings of the adaptor faces showed that faces with averted gaze were also perceived as significantly more expressive and interesting than faces with direct gaze. Eye tracking revealed no differences in the amount of looking time devoted to adaptors with averted and direct gaze. In sum, the present data suggest that the larger aftereffects induced by averted-gaze than direct-gaze faces cannot be explained by simple differences in looking time, but rather result from an enhanced processing of the more expressive and interesting faces with averted gaze. 
Acknowledgments
This research was supported by the Australian Research Council Centre of Excellence in Cognition and its Disorders (CE110001021), an ARC Professorial Fellowship to GR (DP0877379), and an ARC Discovery Outstanding Researcher Award to GR (DP130102300). We are grateful for Andy Calder's helpful advice during the planning stage of this project. We further thank Eleni Avard for her help with stimulus editing and Stephen Pond for his assistance with data collection. 
Commercial relationships: none. 
Corresponding author: Nadine Kloth. 
Address: School of Psychology, University of Western Australia, Crawley, Western Australia, Australia. 
References
Adams, R. B., & Kleck, R. E. (2003). Perceived gaze direction and the processing of facial displays of emotion. Psychological Science, 14(6), 644–647.
Adams, R. B., Pauker, K., & Weisbuch, M. (2010). Looking the other way: The role of gaze direction in the cross-race memory effect. Journal of Experimental Social Psychology, 46(2), 478–481, doi:10.1016/j.jesp.2009.12.016.
Armann, R., Jeffery, L., Calder, A. J., & Rhodes, G. (2011). Race-specific norms for coding face identity and a functional role for norms. Journal of Vision, 11(13):9, 1–14, http://www.journalofvision.org/content/11/13/9, doi:10.1167/11.13.9.
Bestelmeyer, P. E. G., Jones, B. C., DeBruine, L. M., Little, A. C., Perrett, D. I., Schneider, A., & Conway, C. A. (2008). Sex-contingent face aftereffects depend on perceptual category rather than structural encoding. Cognition, 107(1), 353–365, doi:10.1016/j.cognition.2007.07.018.
Bindemann, M., Burton, A. M., Hooge, I. T. C., Jenkins, R., & De Haan, E. H. F. (2005). Faces retain attention. Psychonomic Bulletin & Review, 12(6), 1048–1053.
Calder, A. J., Rhodes, G., Johnson, M., & Haxby, J. V. (Eds.). (2011). The Oxford handbook of face perception. New York: Oxford University Press.
Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research, 51(13), 1484–1525, doi:10.1016/J.Visres.2011.04.012.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.
Dennett, H. W., McKone, E., Edwards, M., & Susilo, T. (2012). Face aftereffects predict individual differences in face recognition ability. Psychological Science, 23(11), 1279–1287, doi:10.1177/0956797612446350.
Driver, J., Davis, G., Ricciardelli, P., Kidd, P., Maxwell, E., & Baron-Cohen, S. (1999). Gaze perception triggers reflexive visuospatial orienting. Visual Cognition, 6(5), 509–540.
Fairhall, S. L., & Ishai, A. (2007). Effective connectivity within the distributed cortical network for face perception. Cerebral Cortex, 17(10), 2400–2406, doi:10.1093/Cercor/Bhl148.
Fang, F., & He, S. (2005). Viewer-centered object representation in the human visual system revealed by viewpoint aftereffects. Neuron, 45(5), 793–800, doi:10.1016/J.Neuron.2005.01.037.
Fang, F., Ijichi, K., & He, S. (2007). Transfer of the face viewpoint aftereffect from adaptation to different and inverted faces. Journal of Vision, 7(13):16, 1–9, http://www.journalofvision.org/content/7/13/16, doi:10.1167/7.13.6.
Fiorentini, C., Gray, L., Rhodes, G., Jeffery, L., & Pellicano, E. (2012). Reduced face identity aftereffects in relatives of children with autism. Neuropsychologia, 50(12), 2926–2932, doi:10.1016/j.neuropsychologia.2012.08.019.
Fox, C. J., Oruc, I., & Barton, J. J. S. (2008). It doesn't matter how you feel: The facial identity aftereffect is invariant to changes in facial expression. Journal of Vision, 8(3):11, 1–13, http://www.journalofvision.org/content/8/3/11, doi:10.1167/8.3.11.
Fox, E., Russo, R., & Dutton, K. (2002). Attentional bias for threat: Evidence for delayed disengagement from emotional faces. Cognition & Emotion, 16(3), 355–379.
Friesen, C. K., & Kingstone, A. (1998). The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychonomic Bulletin & Review, 5(3), 490–495.
Hayn-Leichsenring, G. U., Kloth, N., Schweinberger, S. R., & Redies, C. (2013). Adaptation effects to attractiveness of face photographs and art portraits are domain-specific. I-Perception, 4(5), 303–316, doi:10.1068/i0583.
Hirotani, M., Stets, M., Striano, T., & Friederici, A. D. (2009). Joint attention helps infants learn new words: Event-related potential evidence. NeuroReport, 20(6), 600–605.
Hood, B. M., Macrae, C. N., Cole-Davies, V., & Dias, M. (2003). Eye remember you: The effects of gaze direction on face recognition in children and adults. Developmental Science, 6(1), 67–71.
Houston-Price, C., Plunkett, K., & Duffy, H. (2006). The use of social and salience cues in early word learning. Journal of Experimental Child Psychology, 95(1), 27–55.
Jenkins, R., Beaver, J. D., & Calder, A. J. (2006). I thought you were looking at me: Direction-specific aftereffects in gaze perception. Psychological Science, 17(6), 506–513.
Jones, B. C., DeBruine, L. M., & Little, A. C. (2008). Adaptation reinforces preferences for correlates of attractive facial cues. Visual Cognition, 16(7), 849–858, doi:10.1080/13506280701760811.
Kloth, N., & Schweinberger, S. R. (2010). Electrophysiological correlates of eye gaze adaptation. Journal of Vision, 10(12):17, 1–13, http://www.journalofvision.org/content/10/12/17, doi:10.1167/10.12.17.
Kloth, N., Schweinberger, S. R., & Kovacs, G. (2010). Neural correlates of generic versus gender-specific face adaptation. Journal of Cognitive Neuroscience, 22(10), 2345–2356.
Kovacs, G., Zimmer, M., Banko, E., Harza, I., Antal, A., & Vidnyanszky, Z. (2006). Electrophysiological correlates of visual adaptation to faces and body parts in humans. Cerebral Cortex, 16(5), 742–753, doi:10.1093/cercor/bhj020.
Langton, S. R. H., Law, A. S., Burton, A. M., & Schweinberger, S. R. (2008). Attention capture by faces. Cognition, 107(1), 330–342, doi:10.1016/j.cognition.2007.07.012.
Langton, S. R. H., Watt, R. J., & Bruce, V. (2000). Do the eyes have it? Cues to the direction of social attention. Trends in Cognitive Sciences, 4(2), 50–59.
Leder, H., Tinio, P. P. L., Fuchs, I. M., & Bohrn, I. (2010). When attractiveness demands longer looks: The effects of situation and gender. Quarterly Journal of Experimental Psychology, 63(9), 1858–1871, doi:10.1080/17470211003605142.
Lee, K., Eskritt, M., Symons, L. A., & Muir, D. (1998). Children's use of triadic eye gaze information for "mind reading." Developmental Psychology, 34(3), 525–539.
Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level after effects. Nature Neuroscience, 4(1), 89–94.
Leopold, D. A., Rhodes, G., Muller, K. M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society B: Biological Sciences, 272(1566), 897–904, doi:10.1098/rspb.2004.3022.
Macrae, C. N., Hood, B. M., Milne, A. B., Rowe, A. C., & Mason, M. F. (2002). Are you looking at me? Eye gaze and person perception. Psychological Science, 13(5), 460–464.
Mason, M. F., Hood, B. M., & Macrae, C. N. (2004). Look into my eyes: Gaze direction and person memory. Memory, 12(5), 637–643, doi:10.1080/09658210344000152.
Mogg, K., & Bradley, B. P. (1999). Orienting of attention to threatening facial expressions presented under conditions of restricted awareness. Cognition & Emotion, 13(6), 713–740.
Palermo, R., & Rhodes, G. (2007). Are you always on my mind? A review of how face perception and attention interact. Neuropsychologia, 45(1), 75–92, doi:10.1016/j.neuropsychologia.2006.04.025.
Palermo, R., Rivolta, D., Wilson, C. E., & Jeffery, L. (2011). Adaptive face space coding in congenital prosopagnosia: Typical figural aftereffects but abnormal identity aftereffects. Neuropsychologia, 49(14), 3801–3812, doi:10.1016/j.neuropsychologia.2011.02.021.
Paulus, M., & Fikkert, P. (2014). Conflicting social cues: Fourteen- and 24-month-old infants' reliance on gaze and pointing cues in word learning. Journal of Cognition and Development, 15(1), 43–59.
Pellicano, E., Jeffery, L., Burr, D., & Rhodes, G. (2007). Abnormal adaptive face-coding mechanisms in children with autism spectrum disorder. Current Biology, 17(17), 1508–1512, doi:10.1016/j.cub.2007.07.065.
Pellicano, E., & Macrae, C. N. (2009). Mutual eye gaze facilitates person categorization for typically developing children, but not for children with autism. Psychonomic Bulletin & Review, 16(6), 1094–1099.
Pellicano, E., Rhodes, G., & Calder, A. J. (2013). Reduced gaze aftereffects are related to difficulties categorising gaze direction in children with autism. Neuropsychologia, 51(8), 1504–1509.
Pond, S., Kloth, N., McKone, E., Jeffery, L., Irons, J., & Rhodes, G. (2013). Aftereffects support opponent coding of face gender. Journal of Vision, 13(14):16, 1–19, http://www.journalofvision.org/content/13/14/16, doi:10.1167/13.14.16.
Rhodes, G., Evangelista, E., & Jeffery, L. (2009). Orientation-sensitivity of face identity aftereffects. Vision Research, 49(19), 2379–2385.
Rhodes, G., & Jeffery, L. (2006). Adaptive norm-based coding of facial identity. Vision Research, 46(18), 2977–2987, doi:10.1016/j.visres.2006.03.002.
Rhodes, G., Jeffery, L., Clifford, C. W. G., & Leopold, D. A. (2007). The timecourse of higher-level face aftereffects. Vision Research, 47(17), 2291–2296, doi:10.1016/j.visres.2007.05.012.
Rhodes, G., Jeffery, L., Evangelista, E., Ewing, L., Peters, M., & Taylor, L. (2011). Enhanced attention amplifies face adaptation. Vision Research, 51(16), 1811–1819.
Rhodes, G., Jeffery, L., Taylor, L., Hayward, W. G., & Ewing, L. (2014). Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability. Journal of Experimental Psychology: Human Perception and Performance, 40(3), 897–903, doi:10.1037/a0035939.
Rhodes, G., Jeffery, L., Watson, T. L., Clifford, C. W. G., & Nakayama, K. (2003). Fitting the mind to the world: Face adaptation and attractiveness aftereffects. Psychological Science, 14(6), 558–566.
Rhodes, G., Watson, T. L., Jeffery, L., & Clifford, C. W. G. (2010). Perceptual adaptation helps us identify faces. Vision Research, 50(10), 963–968, doi:10.1016/j.visres.2010.03.003.
Schweinberger, S. R., Kloth, N., & Jenkins, R. (2007). Are you looking at me? Neural correlates of gaze adaptation. NeuroReport, 18(7), 693–696.
Schweinberger, S. R., Zaske, R., Walther, C., Golle, J., Kovacs, G., & Wiese, H. (2010). Young without plastic surgery: Perceptual adaptation to the age of female and male faces. Vision Research, 50(23), 2570–2576, doi:10.1016/j.visres.2010.08.017.
Senju, A., & Hasegawa, T. (2005). Direct gaze captures visuospatial attention. Visual Cognition, 12(1), 127–144, doi:10.1080/13506280444000157.
Senju, A., & Johnson, M. H. (2009). The eye contact effect: Mechanisms and development. Trends in Cognitive Sciences, 13(3), 127–134.
Seyama, J., & Nagayama, R. S. (2006). Eye direction aftereffect. Psychological Research–Psychologische Forschung, 70(1), 59–67, doi:10.1007/s00426-004-0188-3.
Shultz, S., & McCarthy, G. (2014). Perceived animacy influences the processing of human-like surface features in the fusiform gyrus. Neuropsychologia, 60, 115–120, doi:10.1016/j.neuropsychologia.2014.05.019.
Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. Quarterly Journal of Experimental Psychology, Section A: Human Experimental Psychology, 43(2), 161–204.
vonGrunau, M., & Anston, C. (1995). The detection of gaze direction: A stare-in-the-crowd effect. Perception, 24(11), 1297–1313.
Vuilleumier, P., George, N., Lister, V., Armony, J., & Driver, J. (2005). Effects of perceived mutual gaze and gender on face processing and recognition memory. Visual Cognition, 12(1), 85–101, doi:10.1080/13506280444000120.
Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45(1), 174–194, doi:10.1016/j.neuropsychologia.2006.06.003.
Webster, M. A. (2011). Adaptation and visual coding. Journal of Vision, 11(5):3, 1–23, http://www.journalofvision.org/content/11/5/3, doi:10.1167/11.5.3.
Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004). Adaptation to natural facial categories. Nature, 428(6982), 557–561, doi:10.1038/nature02420.
Webster, M. A., & MacLeod, D. I. A. (2011). Visual adaptation and face perception. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1571), 1702–1725, doi:10.1098/rstb.2010.0360.
Wilson, H. R., Loffler, G., & Wilkinson, F. (2002). Synthetic faces, face cubes, and the geometry of face space. Vision Research, 42(27), 2909–2923, doi:10.1016/S0042-6989(02)00362-0.
Yokoyama, T., Ishibashi, K., Hongoh, Y., & Kita, S. (2011). Attentional capture by change in direct gaze. Perception, 40(7), 785–797, doi:10.1068/p7003.
Young, S. G., Slepian, M. L., Wilson, J. P., & Hugenberg, K. (2014). Averted eye-gaze disrupts configural face encoding. Journal of Experimental Social Psychology, 53, 94–99, doi:10.1016/j.jesp.2014.03.002.