Open Access
Article | July 2016
Finding faces, animals, and vehicles in far peripheral vision
Muriel Boucart, Quentin Lenoble, Justine Quettelart, Sebastien Szaffarczyk, Pascal Despretz, Simon J. Thorpe
Journal of Vision, July 2016, Vol. 16(2), 10. doi:10.1167/16.2.10
Abstract

Neuroimaging studies have shown that faces exhibit a central visual field bias, as compared to buildings and scenes. With a saccadic choice task, Crouzet, Kirchner, and Thorpe (2010) demonstrated a speed advantage for the detection of faces with stimuli located 8° from fixation. We used the same paradigm to examine whether the face advantage, relative to other categories (animals and vehicles), extends across the whole visual field (from 10° to 80° eccentricity) or whether it is limited to the central visual field. Pairs of photographs of natural scenes (a target and a distractor) were displayed simultaneously left and right of central fixation for 1 s on a panoramic screen. Participants were asked to saccade to a target stimulus (faces, animals, or vehicles). The distractors were images corresponding to the two other categories. Eye movements were recorded with a head-mounted eye tracker. Only the first saccade was measured. Experiment 1 showed that (a) in terms of speed of categorization, faces maintain their advantage over animals and vehicles across the whole visual field, up to 80°, and (b) even in crowded conditions (an object embedded in a scene), performance was above chance for the three categories of stimuli at 80° eccentricity. Experiment 2 showed that, when compared to another category with a high degree of within-category structural similarity (cars), faces keep their advantage at all eccentricities. These results suggest that the bias for faces is not limited to the central visual field, at least in a categorization task.

Introduction
A large body of research has been devoted to the study of object, face, word, and scene recognition in the central visual field due to its high spatial resolution. In contrast, few studies have investigated the perception of stimuli in the peripheral visual field, particularly beyond 20° eccentricity. This is unfortunate because the detection of relevant stimuli (e.g., a pedestrian crossing the street, a moving car, a fearful animal, a facial expression) at peripheral locations is important in everyday life, and can be critical for survival. Indeed, despite its low spatial resolution, peripheral vision provides critical information about the environment. For instance, it has been shown that the relatively coarse information provided by peripheral preview can benefit object identification (Henderson & Anes, 1994), improve reading speed (Rayner, Slattery, Drieghe, & Liversedge, 2011), and help guide eye movements to targets in visual search tasks (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012). In people forced to rely on peripheral vision because of a central scotoma (e.g., in macular degeneration), implicit processing of contextual information (the background) in peripheral vision facilitates central object categorization (Boucart, Moroni, Szaffarczyk, & Tran, 2013a). Previous studies of peripheral vision at very large eccentricities (beyond 50°) have shown that normally sighted young observers are above chance at detecting animals in photographs of natural scenes at 75° eccentricity (Thorpe, Gegenfurtner, Fabre-Thorpe, & Bulthoff, 2001). They can also categorize photographs of scenes at both superordinate (e.g., natural/urban) and basic levels (e.g., forest, mountain, highway) well above chance (70% correct in a forced-choice task) at 70° eccentricity (Boucart, Moroni, Thibault, Szaffarczyk, & Greene, 2013b). Such data indicate that perception at large eccentricities remains robust for categories of stimuli and for tasks that can be accomplished on the basis of coarse-scale information. These may include the implicit, though not explicit, recognition of isolated objects (Boucart, Naili, Despretz, Defoort, & Fabre-Thorpe, 2010), the detection of animals within a scene (Thorpe et al., 2001), the categorization of scenes (Boucart et al., 2013b; Larson & Loschky, 2009; Loschky et al., 2015), and the detection of some facial expressions, including fear in humans (Bayle, Schoendorff, Hénaff, & Krolak-Salmon, 2011) and threats in monkeys (Landman, Sharma, Sur, & Desimone, 2014). Other behavioral studies have demonstrated declining performance in the peripheral visual field, but much of this reduction may simply depend on the contrast and size of the stimuli (Mäkelä, Nasanen, Rovamo, & Melmoth, 2001) and on the spatial scale required by the task (e.g., detection versus identification; Jebara, Pins, Despretz, & Boucart, 2009; McKone, 2004). 
There is ongoing debate about whether faces have a central visual field bias. Evidence from neuroimaging studies in humans indicates that faces preferentially activate visual cortical areas corresponding to the central visual field, whereas photographs of scenes or buildings produce stronger activations in regions representing the periphery (Hasson, Harel, Levy, & Malach, 2003; Kanwisher, 2001; Levy, Hasson, Avidan, Hendler, & Malach, 2001; Wang et al., 2013). For instance, in monkeys, Verhoef, Bohon, and Conway (2015) used stimuli (faces and places) that comprised a mixture of near and far disparities to identify regions that responded more to near or far stimuli. They found that regions of the brain involved in scene perception responded preferentially to both near and far stimuli, compared with stimuli without disparity, and showed a peripheral visual field bias. Face-selective regions, however, showed a marked preference for "near" stimuli and a bias toward the central visual field. In contrast, Rousselet, Husk, Bennett, and Sekuler (2005) reported that the central visual field bias for faces and the peripheral field bias for houses can be eliminated when amplitude spectra are equated across stimuli and when the size of faces and houses is rescaled according to V1 cortical magnification. Finally, studies in animals show that the selectivity of infero-temporal neurons to stimuli such as faces is largely independent of the stimulus location within their large receptive fields (Ito, Tamura, Fujita, & Tanaka, 1995; Logothetis, Pauls, & Poggio, 1995; Tovee, Rolls, & Azzopardi, 1994). Overall, these studies suggest that there might be an advantage for faces in the center of the visual field but that it might depend on whether the task requires high or low spatial resolution. 
Face recognition and face discrimination are both impaired in peripheral vision (Mäkelä et al., 2001). This may result from multiple factors, including a reduced efficiency in the use of contrast information (Strasburger, Rentschler, & Harvey, 1994), increased sensitivity to crowding (Martelli, Majaj, & Pelli, 2005), and the fact that low spatial frequency information dominates face processing in the periphery (Awasthi, Friedman, & Williams, 2011). However, it has also been demonstrated that faces are more easily detected than other objects. For instance, Hershler, Golan, Bentin, and Hochstein (2010) reported that human faces were detected more successfully than other categories of objects and that this benefit increased with eccentricity (from 4° to 20° in their study). The authors suggested that, in peripheral vision, the face advantage could result from some preattentive initial distinction of faces from other stimuli. Other evidence suggesting that faces may have a special advantage over other object categories prior to fixation comes from a study by Crouzet et al. (2010). They used a saccadic choice task in normally sighted young participants to measure the speed of object categorization. Participants were presented with two lateral (left/right) photographs of natural scenes displayed for 400 ms at 8.6° eccentricity, and asked to move their eyes as quickly as possible to the scene containing a prespecified target. They found that the fastest saccades to animal targets were triggered as early as 120–130 ms after stimulus onset. However, saccades toward human faces were even faster, with the earliest reliable saccades occurring just 100–110 ms after stimulus onset. 
In the present study we used a saccadic forced-choice paradigm to explore the speed of processing of different categories of stimuli as a function of eccentricity in Experiment 1. Specifically, we investigated whether human faces keep their advantage over other categories (animals and vehicles) across the whole visual field (from 10° to 80° eccentricity) or whether the advantage is limited to the central visual field (the macular region), as suggested by neuroimaging studies. In Experiment 2 we assessed whether the face advantage results from the high within-category structural similarity of faces, which allows detection on the basis of coarse-scale information (e.g., roundness), by comparing performance for faces with another category that has a high degree of within-category structural similarity (cars). 
General method
Unless otherwise mentioned, the same methods were used for all experiments. 
Stimuli
The stimuli consisted of 275 colored photographs of human faces from various ethnic groups, 216 colored photographs of sedan-like cars, 211 colored photographs of complete animals (mammals, birds, reptiles, fish, amphibians, insects), and 183 photographs of various vehicles (cars, motorcycles, bikes, trucks, buses, boats, planes) selected from a large photo library (Corel) or downloaded from the Internet (for cars). Examples of stimuli are presented in Figure 1. For all photographs the resolution was 512 × 512 pixels, covering 18 × 18° of visual angle at a viewing distance of 2.04 m. For all categories the scenes contained small (covering less than one third of the scene), medium (covering half of the scene), and large (covering more than half of the scene) objects in equivalent proportions. As contrast sensitivity decreases rapidly at large eccentricities, especially for high spatial frequencies (Cannon, 1985), the images were presented at full contrast. We did not rescale the images to compensate for acuity loss in the periphery, as the images would have become too large for the screen at very large eccentricities had we used the equation proposed by Rovamo and Virsu (1979). 
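As a check on the display geometry, the physical size of an 18° stimulus at the 2.04-m viewing distance follows from simple trigonometry. A small illustrative computation (treating the image as flat and ignoring the screen curvature, which is an approximation):

```python
import math

# Physical width of a stimulus subtending 18 deg at a 2.04 m viewing
# distance, treating the image as flat (the hemispheric screen's
# curvature is ignored in this approximation).
viewing_distance_m = 2.04
angle_deg = 18.0

width_m = 2 * viewing_distance_m * math.tan(math.radians(angle_deg / 2))
pixels_per_deg = 512 / angle_deg

print(f"image width: {width_m:.2f} m")             # ~0.65 m
print(f"resolution: {pixels_per_deg:.1f} px/deg")  # ~28.4 px/deg
```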
Figure 1. Examples of colored faces, animals, and vehicles used in Experiment 1 (three top rows) and examples of colored and gray-scale cars, faces, and animals used in Experiment 2 (two bottom rows).
Apparatus
The stimuli were displayed by means of three projectors (Optoma HD83) fixed to the ceiling and connected to a PC (Dell). Participants were seated 2.04 m from a hemispheric rigid light gray (68 cd/m2) screen covering 90° eccentricity on each side of central fixation (see Movie 1). The presentation software (Vision 180) was written by the laboratory engineer in C#. Eye movements were recorded by means of the iViewX HED eye tracker from SensoMotoric Instruments (Teltow, Germany) with a scene camera. This video-based eye tracker is head-mounted and uses infrared reflection to provide an eye-in-head signal at a sampling rate of 50 Hz with an accuracy of about 1°. The scene camera mounted on the head was positioned so that its field of view coincided with the observer's line of sight. Before the experiment, participants were presented with a central white square (40° × 40°) containing five calibration points and were asked to fixate the black dots (center, top right, top left, bottom right, bottom left) while their eye positions were recorded by the system. Calibration was performed twice to verify its stability, and the recording trials were initiated only if the eye tracker classified the calibration as "good" (a green light). Following calibration, the eye tracker creates a cursor indicating eye-in-head position, which is merged with the video from the scene camera. Once calibration was completed, the calibration display was removed and the participant started the saccadic choice task. The video records were analyzed using the BeGaze software from SensoMotoric Instruments (Teltow, Germany). We recorded the latency of the first saccade (from the onset of the photographs). As the scene camera covers 40° whereas the hemispheric screen covers 180°, we could only record the direction of the saccade (left/right). As the head-mounted eye tracker records the movements of one eye, half of the participants in each group were recorded on the left eye and the other half on the right eye. 
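For illustration, the latency and direction of the first saccade can be recovered from a 50-Hz horizontal eye-position trace with a simple velocity-threshold rule. This is a minimal sketch, not the actual BeGaze pipeline; the function name and the 30°/s threshold are assumptions:

```python
import numpy as np

def first_saccade(x_deg, onset_idx, fs=50.0, vel_thresh=30.0):
    """Latency (ms) and direction of the first saccade after stimulus onset.

    x_deg: horizontal eye-in-head position in degrees, one sample per 1/fs s.
    vel_thresh: velocity criterion in deg/s (an assumed value). At 50 Hz,
    latency is only resolvable in 20-ms steps.
    """
    velocity = np.diff(x_deg) * fs                 # deg/s between samples
    post = velocity[onset_idx:]                    # samples after onset
    above = np.flatnonzero(np.abs(post) > vel_thresh)
    if above.size == 0:
        return None                                # no saccade detected
    k = above[0]
    latency_ms = k * 1000.0 / fs
    direction = "right" if post[k] > 0 else "left"
    return latency_ms, direction
```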
 
Figure 2. An example of a pair of stimuli (target face and distractor animal) displayed at 20° eccentricity on the panoramic screen.
Experiment 1
Method
Subjects
Forty-eight participants (28 females), ranging in age from 19 to 30 years (mean = 23.4), took part in the experiment. Participants were students in psychology, medicine, or physiology, and were paid for their participation. All participants had normal or corrected-to-normal vision. Written consent was obtained from all participants. The study was approved by the Nord-Ouest IV ethics committee. 
Procedure
The sequence of a trial was as follows: A black fixation cross (5 × 5°) was displayed centrally for 1 s, followed by a gap (blank screen) lasting 200 ms. Following the gap, two photographs (a target and a distractor) were presented simultaneously left and right of fixation, at the same eccentricity, for 1 s. Two black crosses centered at the middle of the two photographs were then displayed for 200 ms. The intertrial interval was fixed at 1400 ms. One group of 16 participants was given human faces as targets (with various vehicles and animals as distractors), a second group of 16 participants was given animals as targets (with human faces and vehicles as distractors), and a third group of 16 participants was given vehicles as targets (with human faces and complete animals as distractors). Each block comprised 400 trials: five eccentricities (10°, 20°, 40°, 60°, and 80°) × two spatial locations of the target (left/right of fixation) × 40 images randomly selected by the software from the relevant folders (e.g., from the folder of 275 photographs of human faces). Each image was presented only once. Eccentricities were based on the center of the photograph. The five eccentricities and the two spatial locations of the target were randomly and equally represented. 
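For concreteness, the factorial structure of a block (5 eccentricities × 2 target sides × 40 trials = 400) can be written as a randomized trial list. This is an illustrative sketch only; the paper does not specify the software's exact image-sampling scheme, so image assignment is left out:

```python
import random

ECCENTRICITIES = (10, 20, 40, 60, 80)   # deg, measured to the photograph's center
SIDES = ("left", "right")               # spatial location of the target
N_PER_CELL = 40                         # trials per eccentricity x side cell

def build_block(seed=None):
    """One 400-trial block: 5 eccentricities x 2 sides x 40 trials,
    shuffled so that conditions are randomly interleaved."""
    rng = random.Random(seed)
    trials = [(ecc, side)
              for ecc in ECCENTRICITIES
              for side in SIDES
              for _ in range(N_PER_CELL)]
    rng.shuffle(trials)
    return trials                        # len(trials) == 400
```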
Results
ANOVAs (Systat 8.0) were conducted on accuracy and on the latency of the first saccade (for correct saccades). One participant in the human face target group was discarded because accuracy was at chance level at all eccentricities. The within-subject factors were eccentricity (10°, 20°, 40°, 60°, and 80°) and the distractor (e.g., vehicles vs. animals for faces as targets). The between-subject factors were the eye recorded (left/right) and the target (face, animal, vehicle). Outliers were eliminated using Tukey's outlier filter (Tukey, 1977). This method considers observation X to be an outlier if X < (Q1 − 1.5 IQR) or X > (Q3 + 1.5 IQR), where Q1 is the lower quartile, Q3 the upper quartile, and IQR = (Q3 − Q1) is the interquartile range. Accuracy is presented in Figure 3a for faces, Figure 3b for animals, and Figure 3c for vehicles. Mean correct saccade latencies (mean SRT) are presented in Figure 4a for faces, Figure 4b for animals, and Figure 4c for vehicles. To determine a value for the minimum saccadic response time (Min SRT), we divided the saccade latency distribution of each type of target and each eccentricity into 20-ms time bins and searched for bins containing significantly more correct than erroneous responses using a χ² test with a criterion of p < 0.05. If five consecutive bins reached this criterion, the first was considered to correspond to the minimum SRT. The Min SRT of each category of target is presented in Figure 5.
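Both analysis steps are easy to make concrete. The sketch below reimplements Tukey's fences and the bin-wise χ² scan for the Min SRT under the stated criteria; SciPy's goodness-of-fit test stands in for whatever routine the authors actually used:

```python
import numpy as np
from scipy.stats import chisquare

def tukey_keep(latencies):
    """Mask keeping latencies inside Tukey's fences
    [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (Tukey, 1977)."""
    q1, q3 = np.percentile(latencies, [25, 75])
    iqr = q3 - q1
    return (latencies >= q1 - 1.5 * iqr) & (latencies <= q3 + 1.5 * iqr)

def min_srt(latencies_ms, correct, bin_ms=20, n_consecutive=5, alpha=0.05):
    """First of five consecutive 20-ms bins in which correct saccades
    significantly outnumber erroneous ones (chi-square, p < .05)."""
    latencies_ms = np.asarray(latencies_ms)
    correct = np.asarray(correct, dtype=bool)
    edges = np.arange(0, latencies_ms.max() + bin_ms, bin_ms)
    significant = []
    for lo in edges:
        in_bin = (latencies_ms >= lo) & (latencies_ms < lo + bin_ms)
        n_corr = int(np.sum(correct & in_bin))
        n_err = int(np.sum(in_bin)) - n_corr
        sig = (n_corr > n_err
               and chisquare([n_corr, n_err]).pvalue < alpha)
        significant.append((lo, sig))
    for i in range(len(significant) - n_consecutive + 1):
        if all(s for _, s in significant[i:i + n_consecutive]):
            return significant[i][0]   # left edge of the first such bin
    return None
```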
Figure 3. Percentage of correct saccades (and standard errors) for faces (a), vehicles (b), and animals (c), used as targets as a function of eccentricity and their specific distractors in Experiment 1.
Figure 4. Mean correct saccadic latencies (mean SRT and standard errors) for faces (a), vehicles (b), and animals (c), used as targets as a function of eccentricity and their specific distractors in Experiment 1.
Figure 5. Minimum saccadic response time (Min SRT) for each target (averaged over distractors) as a function of eccentricity in Experiment 1.
Accuracy and mean SRT
Using Tukey's method, 8.25% of the data was eliminated in the group with faces as targets, 7.26% in the group with animals as targets, and 5.4% in the group with vehicles as targets. There was no main effect of the eye recorded. Consistent with Crouzet et al. (2010), mean saccade latencies for faces (273 ms) were much shorter than those for animals (348 ms) and vehicles (335 ms), F(2, 41) = 7.24, p < 0.002, ηp² = 0.683, which did not differ significantly from each other, but there was no effect of the category of stimulus on accuracy (faces: 71.6%, animals: 69.2%, vehicles: 72.4%), F(2, 41) = 0.4, p = 0.66. As expected, latencies increased, F(4, 164) = 108.2, p < 0.001, ηp² = 0.898, and accuracy decreased, F(4, 164) = 54.05, p < 0.001, ηp² = 0.804, with increasing eccentricity. 
No main effect of the type of distractor was observed, but distractor interacted significantly with target both for accuracy, F(2, 41) = 16.13, p < 0.001, ηp² = 0.661, and for saccade latency, F(2, 41) = 9.86, p < 0.001, ηp² = 0.417. Distractor type also interacted with eccentricity for accuracy, F(4, 164) = 11.6, p < 0.001, ηp² = 0.581, but not for latency, whereas target type interacted with eccentricity only for saccade latencies, F(8, 164) = 2.06, p < 0.042, ηp² = 0.304. A separate analysis was conducted for each target. 
When the targets were faces, saccade latencies were not affected by the category of distractors, F(1, 13) = 0.93, p = 0.45, but accuracy was higher with vehicles than with animals as distractors, F(1, 13) = 21.94, p < 0.001, ηp² = 0.843, suggesting that interference was stronger within natural categories (faces/animals) than between natural and man-made categories (faces/vehicles). There was no significant interaction between eccentricity and distractors. Accuracy was significantly above chance at 80° eccentricity, 58.6%, t(14) = 4.1, p < 0.001, Cohen's d = 0.741. 
As with faces as targets, the interference within natural categories was stronger than between natural and artifactual categories. Indeed, when the targets were animals, performance was lower with faces as distractors than with vehicles as distractors: for accuracy, F(1, 14) = 8.7, p < 0.010, ηp² = 0.511; for saccade latencies, F(1, 14) = 8.4, p < 0.011, ηp² = 0.604. The category of distractor interacted significantly with eccentricity for accuracy, F(4, 56) = 6.13, p < 0.001, ηp² = 0.343, but not for saccade latency, F(4, 56) = 0.8, p = 0.49. Faces as distractors interfered more than vehicles with target selection, but the face interference effect decreased with increasing eccentricity. Accuracy was significantly above chance at 80° eccentricity, 56.8%, t(15) = 2.8, p < 0.012, Cohen's d = 0.412. 
When the targets were vehicles, the category of distractor interacted significantly with eccentricity for accuracy, F(4, 56) = 5.8, p < 0.001, ηp² = 0.452, but not for saccade latencies, F(4, 56) = 1.39, p = 0.24. Accuracy was better when distractors were faces than when they were animals at larger eccentricities (20° and above), suggesting that the within-category structural similarity of faces made them easier to reject as distractors. Accuracy was significantly above chance at 80° eccentricity (62.5%), t(15) = 5.2, p < 0.001, Cohen's d = 0.583. 
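The above-chance comparisons are one-sample t tests of per-participant accuracy at 80° against the 0.5 guessing level. A minimal sketch with SciPy, assuming Cohen's d was computed as the mean difference from chance over the sample standard deviation (the paper does not state the formula):

```python
import numpy as np
from scipy.stats import ttest_1samp

def above_chance(acc_80deg, chance=0.5):
    """One-sample t test of per-participant accuracy at 80 deg vs. chance."""
    acc = np.asarray(acc_80deg, dtype=float)
    result = ttest_1samp(acc, popmean=chance)
    d = (acc.mean() - chance) / acc.std(ddof=1)   # assumed Cohen's d formula
    return result.statistic, result.pvalue, d
```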
Discussion
The latency of the first saccade was shorter for human faces as targets than for animals and vehicles as targets. This result replicates the data of Crouzet et al. (2010) but extends the finding by showing that shorter latencies for faces occur across the whole visual field (from 10° to 80°). The results of Experiment 1 also show that accuracy was above chance for the three categories of targets at 80° eccentricity. This result is in agreement with Boucart et al. (2013b), who reported above-chance performance at 70° eccentricity in a scene categorization task. However, in contrast to scene categorization, which can be based on global information (Greene & Oliva, 2009), we show that observers are able to categorize a local object (face, animal, vehicle) within a scene at very large eccentricities (see also Thorpe et al., 2001, for the categorization of an animal within a scene at 75° eccentricity), in spite of peripheral vision being very sensitive to crowding (Levi, 2008; Pelli, 2008; Strasburger, Rentschler, & Jüttner, 2011). It could be argued that the shorter saccade latencies for faces are explained by the basic-level category (faces) having faster access to semantic representations than the two superordinate categories (vehicles and animals). Indeed, one prominent view on object recognition is that the basic level is accessed before the superordinate level of categorization (Palmeri & Gauthier, 2004; Richler, Gauthier, & Palmeri, 2011; Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). However, this effect has mostly been reported in naming tasks and with relatively long exposure times. Studies using a rapid categorization task, as was the case here, have reported shorter response times for superordinate (e.g., animals) than for basic (e.g., dogs, birds) levels of categorization (Mace, Joubert, Nespoulous, & Fabre-Thorpe, 2009; Poncet & Fabre-Thorpe, 2014; Wu, Crouzet, Thorpe, & Fabre-Thorpe, 2015). The present advantage for faces in terms of saccade latencies thus cannot be accounted for by the level of categorization. On average, we observed no significant difference in accuracy between the three categories of targets, whereas Crouzet et al. (2010) reported higher accuracy for faces than for animals and vehicles. However, as can be seen from Figure 3a through c, accuracy was better for faces and for animals than for vehicles at 10° eccentricity, a result consistent with Crouzet et al. (2010), who used an eccentricity of 8.6°. The type of distractor affected target selection in the three target categories. With faces as targets, animals were more detrimental than vehicles to the selection of the target, suggesting higher interference within biological categories than between biological and man-made categories. This may be because two biological categories share more physical features (e.g., human and animal facial features) than a biological and a man-made category. Indeed, the distributed representation model of brain organization proposes that the visual system is organized to extract generic visual features necessary for object recognition, and that objects are represented by combinations of these features (Haxby et al., 2001; Serre, Wolf, Bileschi, Riesenhuber, & Poggio, 2007; Tsunoda, Yamane, Nishizaki, & Tanifuji, 2001). 
With animals as targets, human faces as distractors interfered more with target selection than vehicles as distractors, but this interference decreased with increasing eccentricity, suggesting that human faces capture attention automatically only when the spatial resolution allows the stimulus to be recognized as a face. With vehicles as targets, accuracy was better when distractors were faces than when they were animals at 20° and above, suggesting that the side of the screen containing diagnostic face information (a round shape) was easier to reject as a nontarget. 
The results of Experiment 1 showed a clear advantage for human faces as stimuli when the first saccade is taken into account. Experiment 2 was designed to better understand whether this advantage results from the structural homogeneity of faces (roundness) or from a preattentive initial distinction of faces from other stimuli due to their social relevance (Hershler et al., 2010). We compared performance for two categories having a high degree of structural similarity (faces and cars). Additionally, as bright colors in cars might capture attention, we compared performance for both colored and gray-scale photographs. If the shorter saccade latencies for faces resulted from faces being structurally similar, and therefore easier to categorize in Experiment 1, then this advantage should disappear when faces are compared to another category with a high degree of structural similarity. 
Experiment 2
Method
Subjects
Forty-three new participants took part in the experiment. Twenty-three participants (10 males), ranging in age from 19 to 27 years (mean = 22.9), were tested with colored photographs, and 20 participants (11 males), ranging in age from 19 to 30 years (mean = 23.6), were tested with gray-scale photographs. All were students in psychology, medicine, or physiology and were paid for their participation. All participants had normal or corrected-to-normal vision. Written consent was obtained from all participants. 
Stimuli
The stimuli were exactly the same human faces and animals as those used in Experiment 1. For the achromatic version, the colored photographs were converted into gray-scale images using the freely available software XnView. We also selected a set of 216 photographs of homogeneous sedan-like cars, downloaded from the Internet and presented in colored and gray-scale versions. Examples of stimuli are presented in Figure 1. The angular size of each scene was fixed at 18 × 18°. The stimuli (faces, animals, and cars) had various sizes (small, medium, large) within the scene. 
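XnView's exact conversion settings were not recorded; as an illustration, a standard luminance-weighted conversion such as Pillow's ITU-R 601 "L" mode produces a comparable achromatic version:

```python
from PIL import Image

def to_grayscale(path_in, path_out):
    """Convert a colored photograph to gray-scale with the ITU-R 601
    luma transform (L = 0.299 R + 0.587 G + 0.114 B), which is what
    Pillow's "L" mode applies; a stand-in for XnView's conversion."""
    Image.open(path_in).convert("L").save(path_out)
```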
Procedure
Each participant was tested in two blocks of 400 trials each: one block with faces as targets (cars and animals as distractors) and one block with cars as targets (faces and animals as distractors), in the colored version for 23 participants and in the gray-scale version for the other 20 participants. The order of the two blocks was counterbalanced across participants in each version. The 400 trials of each block tested performance at five eccentricities (10°, 20°, 40°, 60°, and 80°) and two spatial locations of the target (left/right of fixation), using 40 images randomly selected by the software from the folders. A recalibration procedure was performed after each block of trials. As in Experiment 1, half of the participants were recorded on the right eye and the other half on the left eye. 
Results
Data analyses
ANOVAs (Systat 8.0) were conducted on accuracy and on the latency of the first saccade (mean SRT). The within-subject factors were the target (cars, faces), eccentricity (10°, 20°, 40°, 60°, and 80°), and the distractor (e.g., cars vs. animals for faces as targets). The between-subject factors were the eye recorded (left/right) and the version of the photographs (colored/gray-scale). Outliers were eliminated using Tukey's method. Accuracy is presented in Figure 6. Mean correct saccade latencies are presented in Figure 7. The Min SRT is presented in Figure 8 for each eccentricity. 
Figure 6. Percentage of correct saccades (and standard errors) for colored and gray-scale faces and cars used as targets as a function of eccentricity and their specific distractors in Experiment 2. C = colored, G = gray-scale.
Figure 7. Mean correct saccade latencies (mean SRT and standard errors) for colored and gray-scale faces and cars used as targets as a function of eccentricity and their specific distractors in Experiment 2. C = colored, G = gray-scale.
Figure 8. Minimum saccadic response time (Min SRT) for colored (C) and gray-scale (G) target faces and cars (averaged over distractors) as a function of eccentricity in Experiment 2.
Accuracy and mean SRT
The percentage of data eliminated as outliers was 5.81% for gray faces, 5.77% for gray cars, 9.05% for colored faces, and 5.33% for colored cars. There was no effect of the eye tested. In agreement with previous studies using objects as stimuli (Naili, Despretz, & Boucart, 2006), color facilitated categorization even at very large eccentricities: Colored pictures were categorized more accurately (by 6.9%), F(1, 39) = 6.17, p < 0.017, ηp² = 0.393, but not significantly faster (by 14 ms), F(1, 39) = 0.55, p = 0.46, than gray-scale pictures. 
As in Experiment 1, faces were categorized faster (by 18 ms), F(1, 39) = 23.4, p < 0.001, ηp² = 0.917, and more accurately (by 2.3%), F(1, 39) = 5.07, p < 0.03, ηp² = 0.435, than cars, indicating that human faces capture attention more than other categories of objects even when structural similarity is controlled. As expected, saccade latencies increased and accuracy decreased with increasing eccentricity, F(4, 156) = 153.2, p < 0.001, ηp² = 0.971, for latencies and F(4, 156) = 51.8, p < 0.001, ηp² = 0.819, for accuracy. 
There was a significant main effect of distractors on both saccade latency, F(1, 39) = 12.2, p < 0.001, ηp² = 0.604, and accuracy, F(1, 39) = 80, p < 0.001, ηp² = 0.796. The target interacted significantly with distractor type both for saccade latencies, F(1, 39) = 9.25, p < 0.004, ηp² = 0.541, and for accuracy, F(1, 39) = 9.09, p < 0.005, ηp² = 0.498; and target interacted with eccentricity, F(4, 156) = 2.51, p < 0.044, ηp² = 0.411, for saccade latencies and F(4, 156) = 3.2, p < 0.015, ηp² = 0.387, for accuracy. No interaction involving the version of the photographs (colored/gray-scale) reached statistical significance. A separate analysis was conducted on each target. 
As in Experiment 1, animals as distractors interfered more with the selection of target faces than cars as distractors, both for accuracy, F(1, 39) = 24, p < 0.001, ηp² = 0.832, and for saccade latencies, F(1, 39) = 101.6, p < 0.001, ηp² = 0.951. Even though different ethnic groups were used, colored faces were categorized more accurately than gray-scale faces (83.2% vs. 74%), F(1, 39) = 10.4, p < 0.003, ηp² = 0.728. There was also a significant interaction between eccentricity and distractor, for latencies only, F(4, 156) = 2.56, p < 0.04, ηp² = 0.417. 
Color had no effect on target selection when the targets were cars, F(1, 39) = 0.7, p = 0.4, for latencies and F(1, 39) = 1.63, p = 0.21, for accuracy, suggesting that there was no attentional capture by color. As in Experiment 1, the within-category structural similarity of faces facilitated their rejection as distractors. Accuracy for the selection of cars as targets was higher when distractors were faces (by 2.4%), F(1, 39) = 7.7, p < 0.008, ηp² = 0.621, than when distractors were animals. No difference was observed for latencies, F(1, 39) = 0.2, p = 0.6. No significant interaction was observed. 
Accuracy was significantly above chance for colored faces, t(22) = 9.4, p < 0.001, Cohen's d = 0.68; for colored cars, t(22) = 6.1, p < 0.007, Cohen's d = 0.459; for gray faces, t(19) = 6.8, p < 0.001, Cohen's d = 0.617; and for gray cars, t(19) = 5.03, p < 0.001, Cohen's d = 0.513, at 80° eccentricity. 
Discussion
The main result of Experiment 2 is that, when both categories of target have a high within-category structural similarity, human faces keep their advantage as targets, both in terms of shorter latency of the first saccade and in terms of accuracy. This suggests that the advantage observed for faces as targets in Experiment 1 did not result only from a bias due to the structural homogeneity of faces (roundness) compared to the two other categories of objects (vehicles and animals). Another interesting result is that colored faces were categorized better than achromatic faces at all visual eccentricities. Though we were careful to use faces from a range of ethnic groups, color might have been used as a cue to detect faces, in addition to their configural homogeneity. We expected an effect of color information with cars as targets, since bright colors in cars (e.g., red, yellow) might capture attention, but this was not the case: No difference in accuracy or in saccade latency was observed between colored and gray-scale cars as targets. A possible explanation is that, in the colored version, many of the cars had subdued colors like gray, brown, black, or beige. Another factor could be that attentional capture by color is known to be less efficient than attentional capture by, for instance, motion (Theeuwes, 1995; Yantis & Hillstrom, 1994), especially at large visual eccentricities where cones are sparse. The better performance with cars as distractors when faces were the target, and with faces as distractors when the targets were cars, relative to animals as distractors, suggests that structural homogeneity did play a role, perhaps by facilitating the rejection as distractors of structurally similar stimuli. 
General discussion
Some categories of stimuli, like faces and words, seem to exhibit a central field bias. Indeed, several studies have shown that face recognition and reading are more impaired than object and scene perception in patients with central vision loss (Bullimore, Bailey, & Wacker, 1991; Calabrèse et al., 2010; Legge, Rubin, Pelli, & Schleske, 1985; Tejeria, Harper, Artes, & Dickinson, 2002). Moreover, neuroimaging studies in humans and in animals (Levy et al., 2001; Verhoef et al., 2015; however, see Rousselet et al., 2005) have reported that stimuli (letter strings, words, faces) whose recognition largely depends on central vision are mapped mainly within the center-biased representation, whereas stimuli whose recognition can be achieved on the basis of coarse information (e.g., buildings) show a peripheral bias. With a saccadic choice task, Crouzet et al. (2010) demonstrated an advantage for the categorization of human faces, which elicited shorter saccade latencies than two other categories of objects (animals and vehicles; see also Fletcher-Watson, Findlay, Leekam, & Benson, 2008, for similar results with human beings as targets). The present study used the same paradigm with the aim of examining whether the advantage of faces over other categories like animals and vehicles is limited to the central visual field. Indeed, in the Crouzet et al. (2010) study photographs were displayed in the macular region, at 8.6° (center to center) from fixation. The key finding of Experiment 1 is that, in terms of speed of categorization, human faces maintain their advantage over animals and vehicles across the whole visual field, up to 80° eccentricity. As only coarse visual information, conveyed by low spatial frequencies, is available in peripheral vision, Experiment 2 was designed to test whether the advantage for faces resulted from their structural homogeneity, compared to the various shapes of animals and vehicles. To this aim we compared performance for two categories of stimuli with a high degree of within-category structural similarity (faces and cars). We observed that faces were categorized faster and more accurately than cars. This result is consistent with Hershler et al. (2010), who found that human faces are detected more successfully than clocks, dog faces, or cars, and that this benefit grows with eccentricity (from 4° to 20° in their study). The face advantage did not grow with eccentricity in the present study and was relatively stable across eccentricities. This difference between the two studies is likely due to the mode of response: Whereas Hershler et al. used manual RTs, we used saccades, which are more automatic, faster, and based on low-level oculomotor mechanisms (van der Linden, Mathôt, & Vitu, 2015). 
Hershler et al. (2010) suggested that, in peripheral vision, the face advantage results from some preattentive initial distinction of faces from other stimuli. Automatic attentional capture by human faces, possibly due to their social relevance, has also been reported by Fletcher-Watson et al. (2008), Landman et al. (2014), and Lavie, Ro, and Russell (2003). However, the structural homogeneity of faces could also contribute to this advantage. Indeed, faces were always presented in the same orientation (frontal view) with all facial features visible, whereas cars were presented in various orientations. Moreover, the distractor effect suggests that the advantage for faces might result in part from their within-category structural similarity. Indeed, when the distractors were categories with structurally heterogeneous shapes (various animals and vehicles in Experiment 1), the diagnostic roundness of faces might have facilitated the selection of faces as targets, or their rejection as distractors, particularly at large eccentricities. 
Target categorization was more accurate with colored than with gray-scale photographs. There is still a debate about the deterioration of color vision in the periphery. For some authors color vision is absent above 40° eccentricity (Moreland, 1972), while for others color vision exists out to at least 45°, and even 90° under specific spatial and temporal conditions (Noorlander, Koenderink, den Ouden, & Edens, 1983). Hansen, Pracejus, and Gegenfurtner (2009) reported that chromatic discrimination along the M-L axis of color space is possible even at large eccentricities, up to 50°, for stimuli of 8°. Naili et al. (2006) reported that categorizing objects as edible/nonedible was better with colored than with gray-scale photographs of objects at large eccentricities (30° and 60°). 
The Min SRT was, on average, longer in the present experiment than in the Crouzet et al. (2010) study at an equivalent eccentricity (8.6° in their study vs. 10° in the present study). They reported that the shortest latencies for faces were around 100–110 ms, whereas the Min SRT was 140 ms at 10° eccentricity in Experiment 1, and 160 ms at 10° (even 200 ms in the gray-scale version) in Experiment 2. Several methodological differences can account for this discrepancy: (a) The sampling rate of the eye tracker was 240 Hz in the Crouzet et al. study but 50 Hz here; (b) the stimuli always occurred at the same eccentricity in the Crouzet et al. study whereas, in the present study, participants were required to make saccades at a full range of eccentricities, and thus could not preprogram the amplitude of the saccade; and (c) we eliminated outliers whereas Crouzet et al. kept all the data. 
Crouzet et al. (2010, experiment 2) showed that the bias toward saccading to human faces was difficult to suppress when faces were distractors: When asked to saccade toward a vehicle, participants still exhibited a clear tendency (29% of the data) for the fastest saccades to be directed toward faces. We replicated this automatic capture of attention by faces only in part. The interference from faces as distractors decreased with increasing eccentricity, and it occurred only when the target was another biological category (animals). With vehicles or cars as targets, on the contrary, selection of the target was facilitated more when the distractor was a face than when it was an animal, particularly at large eccentricities. As only coarse information is available at large eccentricities, this result suggests that low-level features specific to faces facilitated their elimination as nontargets there. Another potential explanation is that, in the Crouzet et al. (2010) study, the interference of faces as distractors was only an issue when the saccades were very fast. With slower saccades, participants may control interference better; and since, in the current experiments, saccade latencies were globally slower than in the Crouzet et al. (2010) study, this leaves more time to overcome the tendency to make errors toward face distractors. 
For Hershler and colleagues (Hershler et al., 2010; Hershler & Hochstein, 2005), the strong advantage for faces compared to other categories of objects sharing a high degree of within-category structural similarity (e.g., dog faces, clock faces, and cars) under spread-attention conditions indicates some preattentive initial distinction for human faces. They suggested that the general face advantage in peripheral vision may influence the size of the attentional window within which objects can be perceived when searching through an array. For Hershler and Hochstein (2005), the generalization across many instances of the high-level face concept and across many different distractors suggests that this rapid, parallel search mechanism reflects properties of high-level, rather than low-level, visual cortical areas. They suggest that the result is compatible with the Reverse Hierarchy Theory (Hochstein & Ahissar, 2002), which claims that feature search is not limited to basic, low-level features but instead reflects high-level cortical activity, based on large, spread-attention receptive fields. 
Conclusion
Many visual functions degrade with increasing distance from the fovea (for a review, see Strasburger et al., 2011). It has been suggested that visual processes requiring higher brain functions (beyond V1) decline more rapidly with retinal eccentricity, so that the rate of decline reflects the complexity of visual information processing (Levi, Klein, & Aitsebaomo, 1985). The key findings of the present study are (a) that the advantage for human faces over other categories of objects persists at very large eccentricities, up to 80°, and (b) that various objects (animals, vehicles, faces, and cars) embedded in scenes can be categorized above chance, and in less than 450 ms, at 80° eccentricity. The advantage for faces was seen all over the visual field, even when faces were compared to another category with a high degree of within-category structural similarity (cars). This suggests, in agreement with Rousselet et al. (2005), that the bias for faces is not limited to central vision, at least in this sort of categorization task. A central bias might occur in tasks requiring finer discrimination, like identification or discrimination of facial expression. 
Acknowledgments
The study was funded by a grant from the French National Research Agency (ANR Lowvision) to the first author and the ANR-NSF Program for Collaborative Research in Computational Neuroscience (CRCNS) to the last author. 
Commercial relationships: none. 
Corresponding author: Muriel V. Boucart. 
Email: Muriel.Boucart@chru-lille.fr. 
Address: SCALab—Sciences Cognitives et Sciences Affectives, Université Lille, Lille, France. 
References
Awasthi B., Friedman J., Williams M. A. (2011). Processing of low spatial frequency faces at periphery in choice reaching tasks. Neuropsychologia, 49 (7), 2136–2141.
Bayle D. J., Schoendorff B., Hénaff M. A., Krolak-Salmon P. (2011). Emotional facial expression detection in the peripheral visual field. PLoS One, 6 (6), e21584.
Boucart M., Moroni C., Szaffarczyk S., Tran T. H. C. (2013a). Implicit processing of scene context in macular degeneration. Investigative Ophthalmology and Visual Science, 54 (3), 1950–1957. [PubMed] [Article]
Boucart M., Moroni C., Thibault M., Szaffarczyk S., Greene M. (2013b). Scene categorization at large visual eccentricity. Vision Research, 86, 35–42.
Boucart M., Naili F., Despretz P., Defoort S., Fabre-Thorpe M. (2010). Implicit and explicit object recognition at very large visual eccentricities: No improvement after loss of central vision. Visual Cognition, 18 (6), 839–858.
Bullimore M. A., Bailey I. L., Wacker R. T. (1991). Face recognition in age-related maculopathy. Investigative Ophthalmology and Visual Science, 32, 2020–2029. [PubMed] [Article]
Calabrèse A., Bernard J. B., Hoffart L., Faure G., Barouch F., Conrath J., Castet E. (2010). Small effect of interline spacing on maximal reading speed in low-vision patients with central field loss irrespective of scotoma size. Investigative Ophthalmology and Visual Science, 51 (2), 1247–1254. [PubMed] [Article]
Cannon M. W. (1985). Perceived contrast in the fovea and periphery. Journal of the Optical Society of America, A2, 1760–1768.
Crouzet S. M., Kirchner H., Thorpe S. J. (2010). Fast saccades toward faces: Face detection in just 100 ms. Journal of Vision, 10 (4): 16, 1–17, doi:10.1167/10.4.16. [PubMed] [Article]
Fletcher-Watson S., Findlay J. M., Leekam S. R., Benson V. (2008). Rapid detection of person information in a naturalistic scene. Perception, 37 (4), 571–583.
Greene M. R., Oliva A. (2009). Recognition of natural scenes from global properties: Seeing the forest without representing the trees. Cognitive Psychology, 58 (2), 137–179.
Hansen T., Pracejus L., Gegenfurtner K. R. (2009). Color perception in the intermediate periphery of the visual field. Journal of Vision, 9 (4): 26, 1–12, doi:10.1167/9.4.26. [PubMed] [Article]
Hasson U., Harel M., Levy I., Malach R. (2003). Large-scale mirror-symmetry organization of human occipito-temporal object areas. Neuron, 37 (6), 1027–1041.
Haxby J. V., Gobbini M. I., Furey M. L., Ishai A., Schouten J. L., Pietrini P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293 (5539), 2425–2430.
Henderson J. M., Anes M. D. (1994). Roles of object-file review and type priming in visual identification within and across eye fixations. Journal of Experimental Psychology: Human Perception & Performance, 20 (4), 826–839.
Hershler O., Golan T., Bentin S., Hochstein S. (2010). The wide window of face detection. Journal of Vision, 10 (10): 21, 1–14, doi:10.1167/10.10.21. [PubMed] [Article]
Hershler O., Hochstein S. (2005). At first sight: A high-level pop out effect for faces. Vision Research, 45 (13), 1707–1724.
Hochstein S., Ahissar M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36 (5), 791–804.
Ito M., Tamura H., Fujita I., Tanaka K. (1995). Size and position invariance of neuronal responses in monkey inferotemporal cortex. Journal of Neurophysiology, 73 (1), 218–226.
Jebara N., Pins D., Despretz P., Boucart M. (2009). Face or building superiority in peripheral vision reversed by task requirements. Advances in Cognitive Psychology, 5, 42–53.
Kanwisher N. (2001). Faces and places: Of central (and peripheral) interest. Nature Neuroscience, 4 (5), 455–456.
Landman R., Sharma J., Sur M., Desimone R. (2014). Effect of distracting faces on visual selective attention in the monkey. Proceedings of the National Academy of Sciences, USA, 111 (50), 18037–18042.
Larson A. M., Loschky L. C. (2009). The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9 (10): 6, 1–16, doi:10.1167/9.10.6. [PubMed] [Article]
Lavie N., Ro T., Russell C. (2003). The role of perceptual load in processing distractor faces. Psychological Science, 14 (5), 510–515.
Legge G. E., Rubin G. S., Pelli D. G., Schleske M. M. (1985). Psychophysics of reading. II. Low vision. Vision Research, 25 (2), 253–265.
Levi D. M. (2008). Crowding—an essential bottleneck for object recognition: A mini review. Vision Research, 48 (5), 635–654.
Levi D. M., Klein S. A., Aitsebaomo A. P. (1985). Vernier acuity, crowding and cortical magnification. Vision Research, 25 (7), 963–977.
Levy I., Hasson U., Avidan G., Hendler T., Malach R. (2001). Center-periphery organization of human object areas. Nature Neuroscience, 4, 533–539.
Logothetis N. K., Pauls J., Poggio T. (1995). Shape representation in the inferior temporal cortex of monkeys. Current Biology, 5 (5), 552–563.
Loschky L. C., Boucart M., Szaffarczyk S., Beugnet C., Johnson A., Tang J. L. (2015). The contributions of central and peripheral vision to scene gist recognition with a 180° visual field. Journal of Vision, 15 (12): 570, doi:10.1167/15.12.570. [Abstract]
Mace M. J., Joubert O. R., Nespoulous J. L., Fabre-Thorpe M. (2009). The time-course of visual categorizations: You spot the animal faster than the bird. PLoS ONE, 4 (6), e5927.
Mäkelä P., Nasanen R., Rovamo J., Melmoth D. (2001). Identification of facial images in peripheral vision. Vision Research, 41, 599–610.
Martelli M., Majaj N. J., Pelli D. G. (2005). Are faces processed like words? A diagnostic test for recognition by parts. Journal of Vision, 5 (1): 6, 58–70, doi:10.1167/5.1.6. [PubMed] [Article]
McKone E. (2004). Isolating the special component of face recognition: Peripheral identification and a Mooney face. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30 (1), 181–197.
Moreland J. D. (1972). Peripheral colour vision. In Jameson D. Hurvich L. M. (Eds.) Handbook of sensory physiology: Vol. VII/4. Visual psychophysics (pp. 517–536). New York: Springer.
Naili F., Despretz P., Boucart M. (2006). Colour recognition at large visual eccentricities in normal observers and patients with low vision. Neuroreport, 17 (15), 1571–1574.
Noorlander C., Koenderink J. J., den Ouden R. J., Edens B. W. (1983). Sensitivity to spatiotemporal colour contrast in the peripheral visual field. Vision Research, 23 (1), 1–11.
Palmeri T. J., Gauthier I. (2004). Visual object understanding. Nature Reviews Neuroscience, 5 (4), 291–303.
Pelli D. G. (2008). Crowding: A cortical constraint on object recognition. Current Opinion in Neurobiology, 18 (4), 445–451.
Poncet M., Fabre-Thorpe M. (2014). Stimulus duration and diversity do not reverse the advantage for superordinate-level representations: The animal is seen before the bird. European Journal of Neuroscience, 39 (9), 1508–1516.
Rayner K., Slattery T. J., Drieghe D., Liversedge S. P. (2011). Eye movements and word skipping during reading: Effects of word length and predictability. Journal of Experimental Psychology: Human Perception & Performance, 37 (2), 514–528.
Richler J. J., Gauthier I., Palmeri T. J. (2011). Automaticity of basic-level categorization accounts for labeling effects in visual recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37 (6), 1579–1587.
Rosch E., Mervis C. B., Gray W. D., Johnson D., Boyes-Braem P. (1976). Basic objects in natural categories. Cognitive Psychology, 8 (3), 382–439.
Rosenholtz R., Huang J., Raj A., Balas B. J., Ilie L. (2012). A summary statistic representation in peripheral vision explains visual search. Journal of Vision, 12 (4): 14, 1–17, doi:10.1167/12.4.14. [PubMed] [Article]
Rousselet G. A., Husk J. S., Bennett P. J., Sekuler A. B. (2005). Spatial scaling factors explain eccentricity effects on face ERPs. Journal of Vision, 5 (10): 1, 755–763, doi:10.1167/5.10.1. [PubMed] [Article]
Rovamo J., Virsu V. (1979). An estimation and application of the human cortical magnification factor. Experimental Brain Research, 37 (3), 495–510.
Serre T., Wolf L., Bileschi S., Riesenhuber M., Poggio T. (2007). Robust object recognition with cortex-like mechanisms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29 (3), 411–426.
Strasburger H., Rentschler I., Harvey L. O., Jr. (1994). Cortical magnification theory fails to predict visual recognition. European Journal of Neuroscience, 6 (10), 1583–1587.
Strasburger H., Rentschler I., Jüttner M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11 (5): 13, 1–82, doi:10.1167/11.5.13. [PubMed] [Article]
Tejeria L., Harper R. A., Artes P. H., Dickinson C. M. (2002). Face recognition in age related macular degeneration: Perceived disability, measured disability, and performance with a bioptic device. British Journal of Ophthalmology, 86 (9), 1019–1026.
Theeuwes J. (1995). Abrupt luminance change pops out; abrupt color change does not. Perception & Psychophysics, 57 (5), 637–644.
Thorpe S. J., Gegenfurtner K. R., Fabre-Thorpe M., Bulthoff H. H. (2001). Detection of animals in natural images using far peripheral vision. European Journal of Neuroscience, 14, 869–876.
Tovee M. J., Rolls E. T., Azzopardi P. (1994). Translation invariance in the responses to faces of single neurons in the temporal visual cortical areas of the alert macaque. Journal of Neurophysiology, 72 (3), 1049–1060.
Tsunoda K., Yamane Y., Nishizaki M., Tanifuji M. (2001). Complex objects are represented in macaque inferotemporal cortex by the combination of feature columns. Nature Neuroscience, 4 (8), 832–838.
Tukey J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley Publishing Company.
van der Linden L., Mathôt S., Vitu F. (2015). The role of object affordances and center of gravity in eye movements toward isolated daily-life objects. Journal of Vision, 15 (5): 8, 1–18, doi:10.1167/15.5.8. [PubMed] [Article]
Verhoef B. E., Bohon K. S., Conway B. R. (2015). Functional architecture for disparity in macaque inferior temporal cortex and its relationship to the architecture for faces, color, scenes, and visual field. The Journal of Neuroscience, 35 (17), 6952–6968.
Wang B., Yan T., Wu J., Chen K., Imajyo S., Ohno S., Kanazawa S. (2013). Regional neural response differences in the determination of faces or houses positioned in a wide visual field. PLoS One, 8 (8), e72728.
Wu C. T., Crouzet S. M., Thorpe S. J., Fabre-Thorpe M. (2015). At 120 msec you can spot the animal but you don't yet know it's a dog. Journal of Cognitive Neuroscience, 27 (1), 141–149.
Yantis S., Hillstrom A. P. (1994). Stimulus-driven attentional capture: Evidence from equiluminant visual objects. Journal of Experimental Psychology: Human Perception & Performance, 20 (1), 95–107.