Research Article  |   March 2010
Looking away from faces: Influence of high-level visual processes on saccade programming
Author Affiliations
  • Stéphanie M. Morand
    Centre for Cognitive Neuroimaging, Department of Psychology, University of Glasgow, Glasgow, UK. s.morand@psy.gla.ac.uk
  • Marie-Hélène Grosbras
    Centre for Cognitive Neuroimaging, Department of Psychology, University of Glasgow, Glasgow, UK. m.grosbras@psy.gla.ac.uk
  • Roberto Caldara
    Centre for Cognitive Neuroimaging, Department of Psychology, University of Glasgow, Glasgow, UK. r.caldara@psy.gla.ac.uk
  • Monika Harvey
    Centre for Cognitive Neuroimaging, Department of Psychology, University of Glasgow, Glasgow, UK. m.harvey@psy.gla.ac.uk
Journal of Vision March 2010, Vol. 10, 16. doi: https://doi.org/10.1167/10.3.16
Abstract

Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

Introduction
A face is a salient stimulus conveying crucial information for social interactions, and several lines of evidence suggest that human faces capture attention much more than other stimulus categories (see Palermo & Rhodes, 2007, for a review). Developmental studies have shown that newborns and infants track faces preferentially over scrambled and inverted faces (Johnson, Dziurawiec, Ellis, & Morton, 1991; Morton & Johnson, 1991; but see Simion, Leo, Turati, Valenza, & Dalla Barba, 2007). Moreover, patients with hemispatial visual neglect are more sensitive to faces in their neglected hemifield than to other stimuli (Vuilleumier, 2000). Numerous behavioral studies further point to a special capacity of faces to recruit attention. Changes to faces are detected both more rapidly and more accurately than changes to objects when competing for attentional resources (Palermo & Rhodes, 2003; Ro, Russell, & Lavie, 2001). It has also been reported that faces "hold" attention more than other objects in spatial cueing tasks, i.e., it is harder to move attention away from faces (Bindemann, Burton, & Jenkins, 2005), although this effect can be modulated by manipulating stimulus configuration, observer expectation, and cue predictiveness (Bindemann, Burton, Langton, Schweinberger, & Doherty, 2007). 
More recent visual search studies have shown that faces can be searched for efficiently (i.e., pop out) when presented among different types of non-face objects (Hershler & Hochstein, 2005, 2006; VanRullen, 2006), contradicting earlier research that failed to show pop-out, albeit under different conditions (Brown, Huey, & Findlay, 1997; Kuehn & Jolicoeur, 1994; Nothdurft, 1993; Purcell, Stewart, & Skov, 1996). Using realistic human faces and a variety of photographs of objects, Hershler and Hochstein (2005) found a visual search asymmetry favoring faces over cars and houses (although not animal faces), suggesting that this pop-out effect might be face specific. When they replaced the original photographs with scrambled images in which the facial configuration was disrupted, the pop-out effect disappeared, leading the authors to conclude that it reflects high-level, holistic parallel processing of faces. Yet, while replicating the original finding by Hershler and Hochstein, VanRullen (2006) reported that impairing holistic processing by inverting the faces had only a minor effect on search performance; he argued that the face pop-out effect may instead be explained by low-level differences between faces and other categories, such as the Fourier amplitude spectrum (see also Honey, Kirchner, & VanRullen, 2008). It thus remains controversial whether the face pop-out effect is driven by low-level visual factors or depends on high-level mechanisms. 
Here, we used the anti-saccade paradigm (Hallett, 1978) to investigate whether biases toward faces rely on automatic (involuntary, stimulus-driven) or voluntary (task-driven) responses. In this task, participants are required to saccade away from a visual object, that is, to generate an eye movement in the opposite direction of the stimulus, to its mirror position. To correctly perform an anti-saccade, participants have to inhibit the stimulus-driven response to the target and instead generate a voluntary orienting response in the opposite direction (Connolly, Goodale, Desouza, Menon, & Villis, 2000; Hallett, 1978). The anti-saccade task is therefore a classic paradigm for assessing voluntary control over stimulus input (Everling & Fisher, 1998; Munoz & Everling, 2004). Anti-saccade error rates, i.e., the proportion of involuntary saccades toward the stimulus, reflect the orienting response that is beyond the control of the participant. Pro-saccade and anti-saccade trials can be randomly interleaved within the same block, with a symbolic cue at the fixation point indicating which type of eye movement to generate. Although this paradigm is widely used in studies of visual attention and clinical neuroscience, it has (as far as we are aware) only once been applied to investigate face processing, despite being a most sensitive means of dissociating voluntary from automatic face responses. 
Gilchrist and Proske (2006) investigated how the visual properties of a face stimulus influence saccade programming. In a paradigm similar to the one described above, participants were instructed to perform saccades away from upright and inverted face stimuli. The authors found an increase in anti-saccade error rate for upright compared to inverted faces. As the overall low-level visual properties of the upright and inverted face stimuli were identical and only their high-level processing differed, the authors concluded that the involuntary saccadic orienting response was influenced by the high-level visual properties of the stimulus. However, their use of face stimuli only precluded any definite conclusion about an advantage of involuntary orienting for faces compared to other (non-face) high-level visual objects. 
In the present study, we elaborated on the findings of Gilchrist and Proske (2006) by assessing whether the increased error rates are indeed specific to faces or whether they extend to other high-level objects. To this end, we compared face images with front-view car images to maintain visual homogeneity between high-level categories. Importantly, to exclude the possibility of low-level confounds driving our results, we used well-controlled stimuli (faces, cars, and noise patterns) with identical amplitude spectra and contrast but different phase content (e.g., Rousselet, Husk, Bennett, & Sekuler, 2008; Vizioli, Foreman, Rousselet, & Caldara, 2010). As form information is carried mainly by phase rather than amplitude (Oppenheim & Lim, 1981; Sekuler & Bennett, 1996), the face and car stimuli remained discriminable and semantically meaningful after this normalization (Figure 1). This method ensured that any differences in saccadic performance could not be driven by differences in luminance, contrast, orientation, or spatial frequency content across stimuli. 
Figure 1
 
Schematic representation of the anti-saccade task. (A) Examples of the images used; see the Methods section for image processing details. (B) Within a block, subjects were asked to generate either a pro-saccade (PS) or an anti-saccade (AS) depending on the color of the cue: a green dot instructed the participant to perform a saccade toward the stimulus (PS), whereas a red dot instructed a saccade in the opposite direction (AS). Stimuli were faces, cars, and phase-scrambled noise patterns as shown in (A), presented in random order.
We therefore asked subjects to generate either pro- or anti-saccades to complex high-level stimuli while keeping their global low-level visual properties constant. We hypothesized that if face processing is beyond the control of the observer, faces should elicit higher anti-saccade error rates than other stimuli. Indeed, we report here a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, faster pro-saccades to faces as well as shorter fixation durations for pro-saccades to faces. These results indicate that human faces generate stronger involuntary responses than other visual objects. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors. 
Methods
Subjects
Twenty-one naive subjects (11 females and 10 males) participated in this experiment. Twenty were right handed according to the Oldfield–Edinburgh questionnaire (Oldfield, 1971). All participants were between 20 and 36 years old (mean age = 28, SD = 4.8) and had normal or corrected-to-normal visual acuity. Subjects were tested on two different days, for a total of 12 blocks. They were recruited from the University of Glasgow, gave their written informed consent, and were paid for participation. The project was approved by the local Ethics Committee. 
Stimuli
We used a set of 12 neutral Western Caucasian faces (as used in Michel, Rossion, Han, Chung, & Caldara, 2006), 12 cars (database from Schweinberger, Kaufmann, Moratti, Keil, & Burton, 2007), and 12 phase-scrambled noise patterns as control stimuli. The faces (6 male, 6 female) were cropped within a common oval frame. Faces and cars (image size: 128 × 128 pixels, 8 bits/pixel) were front-view grayscale photographs pasted onto a uniform gray background. Sample displays are shown in Figure 1A. The photographs subtended a visual angle of approximately 4.20° × 3.8° and appeared on each trial at one of two possible peripheral locations on the horizontal meridian, 10° either to the left or right of the center of the screen. A central black cross, 0.8° in size, on a white background served as the fixation point. 
To control for low-level visual properties across stimuli, face and car images were equated for spatial frequency, luminance, and contrast. To this end, the average amplitude spectrum of all images in the data set was calculated first, and the phase of each image was then combined with this average amplitude spectrum. As a result, face and car stimuli were normalized for their amplitude spectra. Noise patterns were generated by randomizing the phase of the normalized face and car images. Finally, the RMS contrast was normalized across all images (see Figure 1A), resulting in stimuli matched for their global low-level visual properties. 
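For illustration, the following minimal sketch (not the authors' code; a plain NumPy reimplementation of the procedure described above, assuming grayscale images loaded as 2-D arrays) shows how images can be matched to a common amplitude spectrum, phase scrambled to create noise patterns, and equated for RMS contrast.

```python
import numpy as np

def equate_amplitude_spectra(images):
    """Give every image the mean amplitude spectrum of the set while
    keeping each image's own phase (which carries the form information)."""
    spectra = [np.fft.fft2(img) for img in images]
    mean_amp = np.mean([np.abs(s) for s in spectra], axis=0)
    return [np.real(np.fft.ifft2(mean_amp * np.exp(1j * np.angle(s))))
            for s in spectra]

def phase_scramble(image, rng=None):
    """Create a noise pattern: keep the amplitude spectrum, randomize the phase."""
    if rng is None:
        rng = np.random.default_rng()
    amp = np.abs(np.fft.fft2(image))
    random_phase = rng.uniform(-np.pi, np.pi, size=image.shape)
    return np.real(np.fft.ifft2(amp * np.exp(1j * random_phase)))

def set_rms_contrast(image, target_rms):
    """Rescale pixel intensities so their standard deviation (RMS contrast)
    equals target_rms, preserving the mean luminance."""
    return (image - image.mean()) * (target_rms / image.std()) + image.mean()
```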
Apparatus
Displays were presented on a gamma-corrected 21″ SONY GDMF520 CRT monitor with 1024 × 768 pixel resolution and 85-Hz refresh rate using E-Prime 1.1. The monitor was located 68 cm from the chinrest. A second PC was used to record eye position data online. Eye movements were monitored with a video-based infrared eye tracker (EyeLink 2K, SR Research, Mississauga, ON, Canada; spatial resolution of 0.01°), which uses the center of the pupil and the corneal reflection to define pupil position. Eye movements were recorded at 1000 Hz. At the beginning of each trial, the experimenter monitored the subject's eye position and initiated the stimulus presentation as soon as the eyes were stable on the fixation point. Trials in which the initial fixation was more than 0.5° away from the fixation cross were excluded offline from the analysis. 
Experimental paradigm
Subjects had to generate a saccade either in the direction of the stimulus appearing on the screen (pro-saccade) or in the opposite direction away from the stimulus (anti-saccade). A schematic representation of the anti-saccade paradigm is given in Figure 1B. Each trial was initiated by the presentation of a fixation cross. Then, a cue (0.8° in size) whose color instructed the participant to generate either a pro-saccade (green dot) or an anti-saccade (red dot) was presented for 200 ms in the center of the screen followed by a gap interval of 100 ms (blank screen). This was done as the removal of the fixation point before target onset reduces reaction times for both pro- and anti-saccades and further increases the difficulty of anti-saccades (Munoz & Everling, 2004). The stimulus then appeared for 1000 ms. Subjects were asked to perform the correct eye movement as quickly as possible. 
Participants completed 12 blocks of 60 trials (720 trials in total). Within a block, face, car, and noise stimuli were each presented 5 times in every combination of task (pro-saccade, anti-saccade) and side (left, right of the central fixation cross), in randomized order. 
Each of the 12 blocks started with a nine-point grid calibration and validation procedure to ensure accurate eye tracking. Participants were asked to saccade to a gray, circular disk that appeared sequentially (but unpredictably) in a 3 × 3 grid. After a satisfactory validation had been obtained, a block of trials was run. Prior to the first block, participants were shown a 20-trial demonstration of the task. 
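As an illustration only (a schematic sketch under our own assumptions, not the authors' presentation code), the trial structure and randomization described above amount to 3 stimulus categories x 2 tasks x 2 sides x 5 repetitions = 60 trials per block, with each trial running cue (200 ms), gap (100 ms), and stimulus (1000 ms).

```python
import itertools
import random

CUE_MS, GAP_MS, STIM_MS = 200, 100, 1000  # cue, blank gap, and stimulus durations

def build_block(rng=random):
    """Return one randomized block of 60 trials:
    3 categories x 2 tasks x 2 sides x 5 repetitions."""
    cells = itertools.product(["face", "car", "noise"],   # stimulus category
                              ["pro", "anti"],            # saccade instruction (green/red cue)
                              ["left", "right"])          # stimulus side (10 deg eccentricity)
    trials = [{"category": c, "task": t, "side": s} for c, t, s in cells] * 5
    rng.shuffle(trials)
    return trials

for trial in build_block()[:3]:  # show the first few trials of a block
    print(trial, f"-> cue {CUE_MS} ms, gap {GAP_MS} ms, stimulus {STIM_MS} ms")
```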
Data analysis
Data were analyzed offline with the Data Viewer software (SR Research, Mississauga, ON, Canada). Saccades were detected using velocity and acceleration criteria of 30°/s and 8000°/s², respectively. Only the first saccade after stimulus onset was analyzed. Trials were discarded if (1) the saccade latency was shorter than 80 ms or (2) the amplitude of the first saccade was less than 2°. On average, 5.14% of all trials contained a first saccade with a latency shorter than 80 ms and 4.18% a first saccade with an amplitude below 2°. These criteria led to the exclusion of an average of 6.9% of trials per subject. Discarded trials were equally distributed across conditions and stimuli. 
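A minimal sketch of these exclusion rules (with an assumed per-trial data layout; not the authors' analysis code), applied to records holding the latency and amplitude of the first saccade:

```python
MIN_LATENCY_MS = 80      # first saccades faster than this are treated as anticipatory
MIN_AMPLITUDE_DEG = 2.0  # first saccades smaller than this are discarded

def keep_trial(trial):
    """Return True if the trial passes both criteria described above."""
    return (trial["first_saccade_latency_ms"] >= MIN_LATENCY_MS
            and trial["first_saccade_amplitude_deg"] >= MIN_AMPLITUDE_DEG)

def clean_trials(trials):
    kept = [t for t in trials if keep_trial(t)]
    print(f"excluded {len(trials) - len(kept)} of {len(trials)} trials")
    return kept
```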
A 2 × 2 × 3 repeated measures ANOVA with task (pro-, anti-saccade instruction), side (left, right), and stimulus type (faces, cars, noise patterns) as factors was carried out on the following dependent variables: error rates, saccade latency, fixation duration after the first saccade, and saccade amplitude. 
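For example, such an analysis could be run on per-subject cell means with statsmodels' AnovaRM; this is a sketch only, and the column names below are hypothetical, with the data frame assumed to contain one row per subject x task x side x stimulus cell.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def repeated_measures_anova(df: pd.DataFrame, dependent_variable: str):
    """2 (task) x 2 (side) x 3 (stimulus) repeated-measures ANOVA.
    dependent_variable: e.g. 'error_rate', 'latency_ms',
    'fixation_duration_ms', or 'amplitude_deg' (hypothetical column names)."""
    result = AnovaRM(df, depvar=dependent_variable, subject="subject",
                     within=["task", "side", "stimulus"]).fit()
    return result.anova_table

# Usage (assuming df holds one row per subject and condition cell):
# print(repeated_measures_anova(df, "error_rate"))
```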
Results
Error rates (incorrect pro-saccades)
We first assessed the error rates for both pro- and anti-saccades for the three stimulus types and the two sides. A three-way analysis of variance (ANOVA) revealed significant main effects of task (F(1,20) = 60.7, p < 0.0001), with more errors for anti-saccades, and of stimulus type (F(2,40) = 5.20, p < 0.01), with more errors for faces than for the other stimuli, but no effect of side. As no effects of side were found on any of the dependent variables, left and right stimuli of the same type and task condition were merged in all the following analyses. 
The error rate results for both pro- and anti-saccades are presented in Figures 2A and 2B. As expected, the two-way ANOVA revealed a significant main effect of task (F(1,20) = 60.04, p < 0.001), with greater error rates for anti-saccades (24.07%) than for pro-saccades (3.08%). Importantly, there was an interaction between stimulus type and task (F(2,40) = 4.08, p = 0.02). Pairwise comparisons showed that in the anti-saccade task, participants performed worse for faces than for cars or scrambled patterns (Figure 2B): they made significantly more errors to faces (26%) compared to cars (23%, F(1,20) = 18.6, p < 0.001) and scrambled patterns (23%, F(1,20) = 7.4, p = 0.01). There was no difference in pro-saccade error rates across the three stimulus types. 
Figure 2
 
Error rates, saccadic reaction times, fixation durations, and saccadic amplitudes obtained for (left) pro-saccades and (right) anti-saccades as a function of stimulus type (faces, cars, and noise scrambled patterns). Error bars indicate within-participant SEM. Asterisks represent the least significant p-value of the tests performed (***p < 0.001).
We also performed an item analysis on the anti-saccade error rate, carrying out two ANOVAs with items and participants as random factors to assess whether specific stimuli or subjects might be driving this face-specific effect. 
The two-way ANOVA with items as the random factor revealed a main effect of stimulus type (F(2,40) = 6.4, p = 0.004), with more errors for faces than for cars and noise, but importantly no main effect of items (F(11,220) = 1.7, p = 0.07) and no significant interaction between stimulus type and items (F(22,440) = 1.1, p = 0.28, ns). This analysis shows that the increase in anti-saccade error rate for faces relative to cars and noise stimuli, albeit small (<5%), was not driven by one or a few particular items in the stimulus set. Nor can it be attributed to the image manipulation we used to control for the low-level visual properties. 
The two-way ANOVA with participants as the random factor revealed a significant main effect of stimulus type (F(2,22) = 4.5, p = 0.02), with more errors for faces than for cars and noise, as well as a significant main effect of subjects (F(20,220) = 27.1, p < 0.001). No significant interaction between stimulus type and subjects was found. The significant main effect of subjects was unsurprising, as the anti-saccade error rate was highly variable across the 21 subjects. To further clarify whether the error rate differed consistently across the visual categories, we carried out separate two-tailed paired t-tests on the three stimulus types across subjects. Subjects made significantly more errors for faces compared to cars (p = 0.002) and for faces compared to noise patterns (p = 0.004), but showed no significant difference between cars and noise patterns (p = 0.69, ns). Given these results, car and noise pattern data were collapsed to compute a "face effect" measure across subjects, defined as the error rate for faces minus the average error rate for cars and noise. A large majority of subjects (17/21) showed this face-specific effect, that is, a higher error rate for faces compared to noise patterns and cars (positive values), while only 4 subjects showed the reverse effect (negative values; see Figure 3). 
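A tiny sketch (with hypothetical variable names) of how this per-subject "face effect" index is computed:

```python
def face_effect(face_score, car_score, noise_score):
    """Face-specific bias: score for faces minus the mean score for the
    two non-face categories (positive = larger value for faces)."""
    return face_score - (car_score + noise_score) / 2.0

# Example for one subject's anti-saccade error rates (in %):
# face_effect(26.0, 23.0, 23.0)  -> 3.0 percentage points
```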
Figure 3
 
Error rate bias across subjects: a face-specific measure calculated as the error rate for faces minus the average of the error rate for cars and noise (non-face objects) is represented individually for the 21 subjects. Seventeen subjects showed a face-specific effect (positive values), while only 4 subjects showed the reverse effect (negative values).
Saccadic reaction times
We analyzed the saccadic reaction times of the correct pro- and anti-saccades (Figures 2C and 2D) as well as of the incorrect pro-saccades in a 3 × 3 repeated measures ANOVA. There was a significant main effect of task (F(2,40) = 63.04, p < 0.001): as expected, anti-saccades were on average 41 ms slower than pro-saccades. Incorrect pro-saccades were faster than both anti-saccades (by 65 ms) and correct pro-saccades (by 24 ms). More interestingly, we again found an interaction between stimulus type and task (F(4,80) = 3.89, p = 0.006). Pairwise comparisons showed that correct pro-saccades to faces (p < 0.001) and to cars (p < 0.001) were significantly faster than those to noise scrambled patterns (189 ms), with no difference between faces and cars (Figure 2C): these two high-level stimulus categories elicited similar pro-saccadic reaction times (both 183 ms). No stimulus effects were found for the anti-saccades or the incorrect pro-saccades. 
Fixation duration after the first saccade
Analysis of the fixation duration after the first saccade, that is, the time spent by the participant on the image before returning to fixation or making further saccades to explore the image, revealed a significant effect of stimulus type for pro-saccades only (F(2,40) = 36.4, p < 0.001), as illustrated in Figure 2E. The fixation duration was much shorter for faces (334 ms) compared to cars (486 ms, p < 0.001) and scrambled patterns (480 ms, p < 0.001), with no difference between cars and scrambled patterns. A two-way ANOVA with items as the random factor revealed a main effect of stimulus type (F(2,40) = 35.51, p < 0.001), but no main effect of items (F(11,220) = 0.7, p = 0.7) and no significant interaction between stimulus type and items (F(22,440) = 1.3, p = 0.7, ns). Therefore, the shorter fixation duration found for pro-saccades to faces in comparison to cars and noise stimuli was not driven by one or a few particular items in the stimulus set. Nor can it be attributed to the image manipulation we used to control for the low-level visual properties. 
A two-way ANOVA with participants as the random factor revealed a significant interaction between stimulus type and subjects (F(40,440) = 8.5, p < 0.001), with significant main effects of stimulus type (F(2,22) = 223.1.5, p < 0.001) and of subjects (F(20,220) = 60.4, p < 0.001). Again, the significant main effect of subjects was unsurprising, as fixation duration was highly variable across the 21 subjects. To clarify whether fixation duration differed consistently across the visual categories, we used the same approach as for the anti-saccade error rates (see Error rates above): separate two-tailed paired t-tests were carried out on the three stimulus types across subjects. Subjects showed significantly shorter fixation durations for pro-saccades to faces compared to cars (p < 0.001) and to faces compared to noise patterns (p < 0.001), but no significant difference between cars and noise patterns (p = 0.51, ns). Car and noise pattern data were then collapsed to compute a "face effect" measure across subjects, defined as the fixation duration for faces minus the average fixation duration for cars and noise. All subjects except one showed this face-specific effect, that is, shorter fixation durations for faces compared to noise patterns and cars (see Figure 4). 
Figure 4
 
Fixation duration bias across subjects: a face-specific measure calculated as the fixation duration for faces minus the average of the fixation duration for cars and noise (non-face objects) is represented individually for the 21 subjects. Twenty subjects showed a face-specific effect (negative values), while only 1 subject showed a marginal reverse effect (positive values).
To investigate whether this face-specific effect might be related to any learning, repetition, or familiarity effect of the faces, we analyzed the fixation duration for the pro-saccades in the first and last runs. The results confirmed that there were no differences between the runs. Thus, familiarity and learning cannot account for the fact that faces were processed much faster compared to cars and scrambled patterns. Note that no such fixation duration effects were found after the anti-saccades ( Figure 2F), nor after the incorrect pro-saccades. 
Saccade amplitudes
The two-way ANOVA performed on the saccade amplitudes of both pro-saccades (Figure 2G) and anti-saccades (Figure 2H) for the three stimulus types revealed main effects of stimulus (F(2,40) = 5.58, p = 0.007) and task (F(1,20) = 6.12, p = 0.022). Anti-saccade amplitudes were larger than pro-saccade amplitudes, and faces and cars elicited larger amplitudes than the noise scrambled patterns. There was, however, no significant interaction between stimulus type and task. 
Discussion
Our aim was to investigate whether preferential processing of faces relies more on an automatic (involuntary) or a voluntary response, excluding the possibility that low-level visual properties might be driving these effects. We addressed this question by using an anti-saccade paradigm in which the high-level visual properties of the stimuli (i.e., phase) were manipulated while keeping their global low-level visual features constant (i.e., luminance, amplitude spectra, and contrast). 
The significant increase in anti-saccade error rates found for faces but not for cars and noise patterns indicates that human faces elicit stronger involuntary, stimulus-driven orienting responses than other visual objects. Moreover, this automatic processing cannot be attributed to differences in low-level visual properties across visual categories, as all stimuli (faces, cars, and noise patterns) were normalized and visually homogeneous. We thus argue that the high-level visual features defining face shapes trigger a significantly greater proportion of saccades beyond the control of the observer than other high-level stimuli. 
To the best of our knowledge, only one previous study has assessed how the visual properties of a stimulus affect error rates in an anti-saccade task. Gilchrist and Proske (2006) reported higher error rates for upright compared to inverted faces, suggesting that upright and inverted faces are processed differently, in line with previous studies (Haxby et al., 1999; Perrett et al., 1988; Valentine, 1988; but see also Butler & Harvey, 2005). Since upright and inverted faces share the same low-level visual properties and differ only in their high-level properties, Gilchrist and Proske (2006) argued that this difference impacts on the saccadic system. Importantly, the higher anti-saccade error rates for faces over cars and noise patterns reported here demonstrate that not all high-level stimuli influence saccade programming in the way faces do. The results further illustrate that eye movement paradigms are a powerful tool for investigating face processing (see also the work by Thorpe and colleagues cited below). 
We also found that saccades toward faces were faster than those toward noise patterns, yet did not differ in latency from saccades toward cars. Subjects were therefore faster to saccade to recognizable, high-level stimuli than to unrecognizable stimuli, whatever the object category. This result is somewhat puzzling given the specific capacity of faces to capture attention described above. Moreover, a fast bias toward faces has been reported recently (Crouzet, Thorpe, & Kirchner, 2007; Honey et al., 2008). In a saccadic choice paradigm, Crouzet et al. (2007) showed that early selective saccades could be directed toward faces but not toward other categories (animals and means of transport). Yet, in the present study, faces and cars did not differ in saccadic latencies. This divergent result might be explained by the use of different contexts (complex visual scenes in their study) and by a floor effect arising from our particular paradigm (mixing pro- and anti-saccades within a block and including a fixation offset). It may be that pro-saccades simply could not be generated any faster and thus, unlike the error rate, failed to reveal a differential effect; indeed, even the incorrect pro-saccades were only 24 ms faster on average. The duration of the fixation after the first saccade was also significantly shorter for faces compared to both cars and noise patterns. This observation further suggests faster, possibly more efficient, processing of faces (although it must be granted that, since there was no specific task for the subjects to perform on these images, this observation is open to other interpretations). 
What is the visual information content in a face that makes it so favorable to the human visual system? Recent neuroimaging findings have shown that brain regions dedicated to face processing are tuned to intrinsic visual regularities present in human faces (i.e., a top-heavy vertical bias, with the eyes, eyebrows, and hair providing more high-contrast elements than the mouth region). Neurons in the right middle fusiform gyrus respond to non-face curvilinear shapes containing more high-contrast elements in the upper compared to the lower part (Caldara & Seghier, 2009; Caldara et al., 2006), indicating that such low-level global properties might be used by these neurons to automatically categorize visual shapes as human faces. 
At the behavioral level, in a recent study by Honey et al. (2008), subjects were asked to saccade to the image containing the greater contrast in image pairs. The authors additionally manipulated the low-level visual content by scrambling either the local orientation or the position (i.e., the phase) of the spatial frequency components of the images. These manipulations had different impacts on the face bias: disrupting the orientation content of a scene abolished the bias toward faces, whereas disrupting the phase of the Fourier components (while preserving their orientation) did not. This suggests that the fast face bias depends in part on local low-level information, in particular the 2-D amplitude spectrum across orientations. In the current study, we addressed the question differently: we kept the global low-level features constant across object categories to test whether high-level properties of the stimuli could account for the face bias. We used the same phase scrambling procedure as described in Honey et al. (2008) but matched each image to the mean amplitude spectrum of all images before scrambling and kept the RMS contrast constant. Thus, faces, cars, and noise patterns differed only in their phase content at the global level. Our results clearly show that the automatic attentional capture by faces is driven by high-level visual properties (see also Hershler & Hochstein, 2005, 2006). Future studies are necessary to clarify the extent to which local low-level information intrinsic to faces (e.g., the high contrast present in the eye region) contributes to the effect we report. 
Which neural circuits could control this involuntary orienting bias toward faces? The generation and/or suppression of saccadic eye movements involves several frontal areas, namely the frontal eye fields (FEF), the supplementary eye field (SEF), and the dorsolateral prefrontal cortex (DLPFC), as well as the lateral intraparietal area (LIP) in the posterior parietal cortex and the superior colliculus (SC), a subcortical structure. The DLPFC plays a crucial role in suppressing automatic, reflexive responses and the FEF in executing voluntary saccades: patients with focal lesions of the DLPFC, but not those with lesions of the FEF, show a specific increase in error rates in anti-saccade paradigms, whereas lesions of the FEF are associated with increased latencies of correct anti-saccades (Pierrot-Deseilligny et al., 2003; Pierrot-Deseilligny, Rivaud, Gaymard, & Agid, 1991). It is also well established that complex high-level visual processing, including the processing of faces, involves the inferior temporal lobe of the ventral visual pathway (e.g., Grill-Spector, 2003; Haxby et al., 1999; Ishai, Ungerleider, Martin, Schouten, & Haxby, 1999). The current results suggest an interaction between these two systems, which could occur through the FEF and LIP, as both structures receive substantial projections from different areas of the ventral visual stream (Schall, Morel, King, & Bullier, 1995; summarized in Kirchner & Thorpe, 2006). The fast saccadic latencies found for faces (and cars) in comparison to noise stimuli indicate that this interaction between high-level visual processing in the temporal lobe and saccade programming occurs rapidly and most probably at an early stage of processing. 
Conclusions
The significant increase in anti-saccade error rates found for faces but not for other visual categories indicates that human faces generate stronger involuntary, stimulus-driven orienting responses, in line with an automatic capture of attention by faces. Moreover, this automatic processing cannot be attributed to global low-level visual properties as all stimuli (faces, cars, and noise patterns) were normalized for this factor. 
Acknowledgments
This work was supported by a grant from the Economic and Social Research Council and the Medical Research Council (RES-060-25-0010) awarded to all authors. 
Commercial relationships: none. 
Corresponding author: Stéphanie Morand. 
Email: s.morand@psy.gla.ac.uk. 
Address: Centre for Cognitive Neuroimaging (CCNi), Department of Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK. 
References
Bindemann M. Burton A. M. Jenkins R. (2005). Capacity limits for face processing. Cognition, 98, 177–197.
Bindemann M. Burton A. M. Langton S. R. Schweinberger S. R. Doherty M. J. (2007). The control of attention to faces. Journal of Vision, 7(10):15, 1–8, http://journalofvision.org/7/10/15/, doi:10.1167/7.10.15.
Brown V. Huey D. Findlay J. M. (1997). Face detection in peripheral vision: Do faces pop out? Perception, 26, 1555–1570.
Butler S. H. Harvey M. (2005). Does inversion abolish the left chimeric face processing advantage? Neuroreport, 16, 1991–1993.
Caldara R. Seghier M. L. (2009). The fusiform face area responds automatically to statistical regularities optimal for face categorization. Human Brain Mapping, 30, 1615–1625.
Caldara R. Seghier M. L. Rossion B. Lazeyras F. Michel C. Hauert C. A. (2006). The fusiform face area is tuned for curvilinear patterns with more high-contrasted elements in the upper part. Neuroimage, 31, 313–319.
Connolly J. D. Goodale M. A. Desouza J. F. Menon R. S. Villis T. (2000). A comparison of frontoparietal fMRI activation during anti-saccades and anti-pointing. Journal of Neurophysiology, 84, 1645–1655.
Crouzet S. Thorpe S. J. Kirchner H. (2007). Category-dependent variations in visual processing time [Abstract]. Journal of Vision, 7(9):922, 922a, http://journalofvision.org/7/9/922/, doi:10.1167/7.9.922.
Everling S. Fisher B. (1998). The antisaccade: A review of basic research and clinical studies. Neuropsychologia, 36, 885–899.
Gilchrist I. D. Proske H. (2006). Anti-saccades away from faces: Evidence for an influence of high-level visual processes on saccade programming. Experimental Brain Research, 173, 708–712.
Grill-Spector K. (2003). The neural basis of object perception. Current Opinion in Neurobiology, 13, 159–166.
Hallett P. E. (1978). Primary and secondary saccades to goals defined by instructions. Vision Research, 18, 1279–1296.
Haxby J. V. Ungerleider L. G. Schouten J. L. Hoffman E. A. Martin A. (1999). The effect of face inversion on activity in human neural systems for face and object perception. Neuron, 22, 189–199.
Hershler O. Hochstein S. (2005). At first sight: A high-level pop out effect for faces. Vision Research, 45, 1707–1724.
Hershler O. Hochstein S. (2006). With a careful look: Still no low-level confound to face pop-out. Vision Research, 46, 3028–3035.
Honey C. Kirchner H. VanRullen R. (2008). Faces in the cloud: Fourier power spectrum biases ultrarapid face detection. Journal of Vision, 8(12):9, 1–13, http://journalofvision.org/8/12/9/, doi:10.1167/8.12.9.
Ishai A. Ungerleider L. G. Martin A. Schouten J. L. Haxby J. V. (1999). Distributed representation of objects in the human ventral visual pathway. Proceedings of the National Academy of Sciences of the United States of America, 96, 9379–9384.
Johnson M. H. Dziurawiec S. Ellis H. D. Morton J. (1991). Newborns' preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40, 1–19.
Kirchner H. Thorpe S. J. (2006). Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46, 1762–1776.
Kuehn S. M. Jolicoeur P. (1994). Impact of quality of the image, orientation, and similarity of the stimuli on visual search for faces. Perception, 23, 95–122.
Michel C. Rossion B. Han J. Chung C. S. Caldara R. (2006). Holistic processing is finely tuned for faces of one's own race. Psychological Science, 17, 608–615.
Morton J. Johnson M. H. (1991). CONSPEC and CONLERN: A two-process theory of infant face recognition. Psychological Review, 98, 164–181.
Munoz D. P. Everling S. (2004). Look away: The anti-saccade task and the voluntary control of eye movement. Nature Reviews Neuroscience, 5, 218–228.
Nothdurft H. C. (1993). Faces and facial expressions do not pop out. Perception, 22, 1287–1298.
Oldfield R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113.
Oppenheim A. V. Lim J. S. (1981). The importance of phase in signals. Proceedings of the IEEE, 69, 529–541.
Palermo R. Rhodes G. (2003). Change detection in the flicker paradigm: Do faces have an advantage? Visual Cognition, 10, 683–713.
Palermo R. Rhodes G. (2007). Are you always on my mind? A review of how face perception and attention interact. Neuropsychologia, 45, 75–92.
Perrett D. I. Mistlin A. J. Chitty A. J. Smith P. A. J. Potter D. D. Broennimann R. (1988). Specialized face processing and hemispheric asymmetry in man and monkey: Evidence from single unit and reaction time studies. Behavioural Brain Research, 29, 245–258.
Pierrot-Deseilligny C. Muri R. M. Ploner C. J. Gaymard B. Demeret S. Rivaud-Pechoux S. (2003). Decisional role of the dorsolateral prefrontal cortex in ocular motor behaviour. Brain, 126, 1460–1473.
Pierrot-Deseilligny C. Rivaud S. Gaymard B. Agid Y. (1991). Cortical control of reflexive visually guided saccades. Brain, 114, 1473–1485.
Purcell D. G. Stewart A. L. Skov R. B. (1996). It takes a confounded face to pop out of a crowd. Perception, 25, 1091–1108.
Ro T. Russell C. Lavie N. (2001). Changing faces: A detection advantage in the flicker paradigm. Psychological Science, 12, 94–99.
Rousselet G. A. Husk J. S. Bennett P. J. Sekuler A. B. (2008). Time course and robustness of ERP object and face differences. Journal of Vision, 8(12):3, 1–18, http://journalofvision.org/8/12/3/, doi:10.1167/8.12.3.
Schall J. D. Morel A. King D. J. Bullier J. (1995). Topography of visual cortex connections with frontal eye field in macaque: Convergence and segregation of processing streams. Journal of Neuroscience, 15, 4464–4487.
Schweinberger S. R. Kaufmann J. M. Moratti S. Keil A. Burton A. M. (2007). Brain responses to repetitions of human and animal faces, inverted faces, and objects: An MEG study. Brain Research, 1184, 226–233.
Sekuler A. B. Bennett P. J. (1996). Spatial phase differences can drive apparent motion. Perception & Psychophysics, 58, 174–190.
Simion F. Leo I. Turati C. Valenza E. Dalla Barba B. (2007). How face specialization emerges in the first months of life. Progress in Brain Research, 164, 169–185.
Valentine T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79, 471–491.
VanRullen R. (2006). On second glance: Still no high-level pop-out effect for faces. Vision Research, 46, 3017–3027.
Vizioli L. Foreman K. Rousselet G. A. Caldara R. (2010). Inverting faces elicits sensitivity to race on the N170 component: A cross-cultural study. Journal of Vision, 10(1):15, 1–23, http://journalofvision.org/10/1/15/, doi:10.1167/10.1.15.
Vuilleumier P. (2000). Faces call for attention: Evidence from patients with visual extinction. Neuropsychologia, 38, 693–700.