The latency of the first saccade was shorter for human faces as targets than for animals and vehicles as targets. This result replicates the data of Crouzet et al. (2010) but extends the finding by showing that shorter latencies for faces occur across the whole visual field (from 10° to 80°). The results of Experiment 1 also show that accuracy was above chance for all three target categories at 80° eccentricity. This result is in agreement with
Boucart et al. (2013b), who reported above-chance performance at 70° eccentricity in a scene-categorization task. However, in contrast to scene categorization, which can be based on global information (Greene & Oliva, 2009), we show that observers are able to categorize a local object (face, animal, or vehicle) within a scene at very large eccentricities (see also Thorpe et al., 2001, for the categorization of an animal within a scene at 75° eccentricity), despite peripheral vision being very sensitive to crowding (Levi, 2008; Pelli, 2008; Strasburger, Rentschler, & Jüttner, 2011). It could be argued that the shorter saccade latencies for faces are explained by the basic-level category (faces) having faster access to semantic representations than the two superordinate categories (vehicles and animals). Indeed, one prominent view on object recognition is that the basic level is accessed before the superordinate level of categorization (Palmeri & Gauthier, 2004; Richler, Gauthier, & Palmeri, 2011; Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). However, this effect has mostly been reported in naming tasks and with relatively long exposure times. Studies using a rapid categorization task, as is the case here, have reported shorter response times for superordinate (e.g., animals) than for basic-level (e.g., dogs, birds) categorization (Macé, Joubert, Nespoulous, & Fabre-Thorpe,
2009; Poncet & Fabre-Thorpe,
2014; Wu, Crouzet, Thorpe, & Fabre-Thorpe,
2015). The present advantage for faces in saccade latency thus cannot be accounted for by the level of categorization. On average, we observed no significant difference in accuracy among the three target categories, whereas Crouzet et al. (2010) reported higher accuracy for faces than for animals and vehicles. However, as can be seen in Figure 3a through c, accuracy was better for faces and animals than for vehicles at 10° eccentricity, a result consistent with Crouzet et al. (2010), who used an eccentricity of 8.6°. The type of distractor affected target selection in all three target categories. With faces as targets, animals interfered more than vehicles with the selection of the target, suggesting higher interference within biological categories than between biological and man-made categories. This may be because two biological categories share more physical features (e.g., human and animal facial features) than a biological and a man-made category do. Indeed, the distributed-representation model of brain organization proposes that the visual system is organized to extract generic visual features necessary for object recognition and that objects are represented by combinations of these features (Haxby et al.,
2001; Serre, Wolf, Bileschi, Riesenhuber, & Poggio,
2007; Tsunoda, Yamane, Nishizaki, & Tanifuji,
2001). With animals as targets, human faces as distractors interfered more than vehicles as distractors with target selection, but this interference decreased with increasing eccentricity, suggesting that human faces capture attention automatically only when the spatial resolution allows the object to be recognized as a face. With vehicles as targets, accuracy was better when distractors were faces than when they were animals at 20° eccentricity and beyond, suggesting that the side of the screen containing diagnostic face information (a round shape) was easier to reject as a nontarget.