Open Access
Article  |   August 2022
High-level visual search in children with autism
Author Affiliations
  • Safa'a Abassi Abu Rukab
    ELSC Edmond & Lily Safra Center for Brain Research and Silberman Institute for Life Sciences, Hebrew University, Jerusalem, Israel
    safa.abassi@gmail.com
  • Noam Khayat
    ELSC Edmond & Lily Safra Center for Brain Research and Silberman Institute for Life Sciences, Hebrew University, Jerusalem, Israel
    noamkhayat@gmail.com
  • Shaul Hochstein
    ELSC Edmond & Lily Safra Center for Brain Research and Silberman Institute for Life Sciences, Hebrew University, Jerusalem, Israel
    shaulhochstein@gmail.com
Journal of Vision August 2022, Vol. 22, 6. doi: https://doi.org/10.1167/jov.22.9.6
Abstract

Visual search has been classified as either easy feature search, with rapid target detection and little dependence on set size, or slower, difficult search requiring focused attention, with set size–dependent speed. Reverse hierarchy theory attributes these classes to rapid high cortical-level vision at a glance versus low-level vision with scrutiny, assigning easy search to high-level representations. Accordingly, faces “pop out” of heterogeneous object photographs. Individuals with autism have difficulties recognizing faces, and we now asked whether this disability disturbs their search for faces. We compared search times and set size slopes for children with autism spectrum disorders (ASDs) and children with neurotypical development (NT) searching for faces. Human face targets were found rapidly, with shallow set size slopes. The between-group difference in slopes (18.8 vs. 11.3 ms/item) is significant, suggesting that faces may not “pop out” as easily, but in our view it does not warrant classifying ASD face search as categorically different from that of NT children. We also tested search for other target categories, dog and lion faces, and for nonface basic categories, cars and houses. The ASD group was generally somewhat slower than the NT group, and their slopes were somewhat steeper. Nevertheless, the overall dependencies on target category were similar: human face search was fastest, nonface categories slowest, and dog and lion faces in between. We conclude that autism may spare vision at a glance, including face detection, despite its reported effects on face recognition, which may require vision with scrutiny. This dichotomy is consistent with the two perceptual modes suggested by reverse hierarchy theory.

Background and introduction
Visual search
One of the most studied cognitive behaviors is visual search. Anne Treisman introduced the classification of easy, “preattentive” feature search (e.g., search for an element that differs greatly from distractors in a simple feature), which leads to rapid target “pop-out” with little dependence on set size (the number of search display items), versus more difficult search with focused attention (e.g., search for a conjunction of features), where target detection speed is a function of set size (Treisman & Gelade, 1980; see Hochstein, 2020; Wolfe, 2018), although there may be a continuum between serial and parallel search (Wolfe, 2021; Wolfe, Cave, & Franzel, 1989). 
This search-type dichotomy has also been found in children. While both search types are substantially slower early in life, the slowing is more pronounced for difficult conjunction search than for easy feature search. Thus, children's conjunction searches are slower and depend more on set size than those of adults, perhaps related to immature top-down attentional control (Adler & Orprecio, 2006; Day, 1978; Donnelly et al., 2007; Gerhardstein & Rovee-Collier, 2002; Hommel, Li, & Li, 2004; Merrill & Conners, 2013; Michael, Lété, & Ducrot, 2013; Thompson & Massaro, 1989; Trick & Enns, 1998; Woods et al., 2013). Nevertheless, even feature search is slower in children. For example, Donnelly et al. (2007) found that feature search was two to three times slower in 6- to 7-year-olds and 9- to 10-year-olds than in adults, and that the conjunction search set size slope decreased with age, from 102 to 37 to 30 ms/item across the three participant groups, compared with the 6- to 10-ms/item set size slope often quoted for adult feature search (Hershler & Hochstein, 2005; Treisman & Gelade, 1980; Treisman & Souther, 1985; Wolfe, 1998). 
Reverse hierarchy theory
Finding that both perceptual learning (Ahissar & Hochstein, 1997, 2004) and conscious vision (Hochstein & Ahissar, 2002) are initiated through representations at higher cortical levels, these authors suggested reverse hierarchy theory (RHT), whereby generalized learning and initial “vision at a glance” depend on higher cortical representations, and these levels then guide later, hard-condition detailed learning and detailed “vision with scrutiny,” which relate to lower cortical representations. Thus, although implicit, unconscious visual information processing is hierarchical, conscious perception is dichotomous, with access first to high cortical-level representations, reflecting global attention, and only later, through top-down guidance and focused attention, to lower cortical-level details. A somewhat counterintuitive implication of RHT is that, since feature search pop-out is fast and easy, this search type is naturally associated with higher cortical-level representations, which are accessible to consciousness earlier (see also guided search, Wolfe & Horowitz, 2017). Consistent with this conclusion, visual search can be efficient when searching for a target category (e.g., numbers among letters; Egeth, Jonides, & Wall, 1972; see also Alexander & Zelinsky, 2011; Wu et al., 2013; Yang & Zelinsky, 2009), and a high-level pop-out effect was found for faces (Hershler & Hochstein, 2005, 2006; see also Simpson et al., 2019), as well as for other targets of individual expertise, indicating a preference for favored categories (Hershler & Hochstein, 2009). 
Autism and visual search
Autism spectrum disorders (ASDs) are characterized by deficits in social and communicative skills, such as imitation, pragmatic language, theory of mind, and empathy (DSM-5; American Psychiatric Association, 2013; Baron-Cohen, 1995; Frith, 2001; Stevenson et al., 2019; Weigelt, Koldewyn, & Kanwisher, 2012). 
It has been found that individuals with ASDs process sensory stimuli differently than neurotypically developed individuals (DSM-5; American Psychiatric Association, 2013; Burns et al., 2017; Dellapiazza et al., 2018; DuBois et al., 2017). There is considerable debate concerning the local versus global processing source of these differences (Gadgil et al., 2013), and a “weak central coherence” theory was developed (Booth & Happé, 2018; Frith & Happé, 1994; Happé & Booth, 2008; Happé & Frith, 2006; Lieder et al., 2019), with suggestions of a disinclination rather than a disability to process globally (Koldewyn et al., 2013). 
Another area of conflicting reports concerns visual search in children with autism spectrum disorders. Some studies found ASD search superiority deriving from anomalously enhanced perception of stimulus features, which was in turn positively associated with autism symptom severity (Gliga et al., 2015; Joseph et al., 2009; O'Riordan & Plaisted, 2001; O'Riordan, Plaisted, Driver, & Baron-Cohen, 2001; Plaisted, O'Riordan, & Baron-Cohen, 1998). This advantage may be better explained by attentional factors (i.e., overfocusing, restricted interests) than by perceptual ones (Kaldy et al., 2016). Other studies, however, found slower search in complex search conditions (Doherty et al., 2018; Keehn & Joseph, 2016) or no significant relation between autistic traits and visual search (Bott et al., 2006; Marciano et al., 2021; Pérez et al., 2019). Some of these discrepancies may be related to testing different groups of ASD individuals (e.g., Lindor, Rinehart, & Fielding, 2018). Another source of discrepancy may be the selective impact of specific perceptual deficits. We referred above to an important categorical difference between a feature search advantage and a conjunction search disadvantage (see the “Visual search” section above, describing Anne Treisman's differentiation between these two modes; see also Keehn & Joseph, 2016). This difference may be related to the differences in processing levels and mechanisms referred to above (Hochstein & Ahissar, 2002; Treisman & Gelade, 1980). We shall suggest that this difference in mechanisms between vision at a glance and vision with scrutiny (see the “Reverse hierarchy theory” section above) underlies spared versus impaired vision more generally in special individuals (e.g., Hochstein et al., 2015; Pavlovskaya et al., 2002, 2005). 
Faces and ASDs
Faces are perhaps the most important social stimulus and are essential for human communication and social interaction. Face recognition mechanisms obtain a continuous stream of information ranging from communicative gestures to emotional and attentive states (Leopold & Rhodes, 2010). A rapid glimpse of an individual's face informs us about identity, race, emotion, age, sex, and gaze direction. Human newborns without visual experience have a tendency to track moving stimuli and respond more strongly to a proper face than to scrambled versions of the same stimuli (Goren et al., 1975). Pictures of faces, presented together with images of other complex objects, capture and maintain the attention of both adults and 6-month-olds, although not of 3-month-olds (Di Giorgio, Turati, Altoè, & Simion, 2012; Gliga, Elsabbagh, Andravizou, & Johnson, 2009). Saccades to face stimuli can be as rapid as 100 ms (Crouzet, Kirchner, & Thorpe, 2010) and to familiar faces, when paired with unfamiliar ones, as fast as 180 ms (Visconti di Oleggio Castello & Gobbini, 2015). Thus, it is perhaps not surprising that, consistent with these findings, face photographs pop out from among photographs of other objects, as described above (Hershler & Hochstein, 2005, 2006). 
It has been widely reported that children with ASDs have difficulties with face recognition. Individuals with ASDs exhibit selective deficits in their ability to recognize facial identities and expressions, although the source of their face impairment is, as yet, undetermined. 
Deficits in face recognition have been implicated as being at the core of the social impairments of people with ASDs (Dawson et al., 2005; Schultz, 2005), although reports are mixed (Dawson et al., 2005; Golarai et al., 2006; Jemel et al., 2006; Marcus & Nelson, 2001; Minio-Paluello et al., 2020; Pierce & Courchesne, 2000; Sasson, 2006; Simmons et al., 2009). Many individuals with ASDs demonstrate impairments in facial emotion recognition (FER), and the DSM-5 diagnostic criteria for ASDs include items related to “deficits in nonverbal communicative behaviors used for social interaction, ranging, for example, from poorly integrated verbal and nonverbal communication; to abnormalities in eye contact and body language or deficits in understanding and use of gestures; to a total lack of facial expressions and nonverbal communication” and “deficits in social-emotional reciprocity” (American Psychiatric Association, 2013). However, findings on FER in ASDs are also inconsistent: Some studies found profound deficits, while others found intact FER, perhaps due to compensatory mechanisms. For example, some high-functioning individuals with ASDs might use explicit cognitive or verbally mediated processes to recognize emotions, in contrast to the more automatic emotion processing of typically developing individuals (Harms, Martin, & Wallace, 2010). 
It has been argued that atypical face perception in ASDs could be due to a lack of holistic processing or due to a local processing bias. Tanaka and Sung (2016) consider three possible accounts of the autism face deficit: (1) the holistic hypothesis, (2) the local perceptual bias hypothesis, and (3) the eye avoidance hypothesis. On the other hand, a recent review found “no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism, [though] quantitatively—i.e., how well facial identity is remembered or discriminated—people with autism perform worse than typical individuals.” (Weigelt, Koldewyn, & Kanwisher, 2012). 
There are fewer data regarding face detection in individuals with ASDs. Responses to familiar faces, newly familiar faces, and novel faces, as assessed by evoked response potentials (ERPs), are intact in adults with autism spectrum disorders (Webb et al., 2010), although some reports show that detection rates are lower and processing is slower (Naumann et al., 2018). Nevertheless, when presented with arrays of various items, infants at risk for autism have a greater tendency to select and sustain attention to faces (Elsabbagh et al., 2013). Similarly, it was found that children with ASDs attend to faces as do neurotypically developed children (Fischer et al., 2014). 
Face detection has been associated with the electroencephalogram N170 wave, a posterior-temporal component that peaks at 130–190 ms following presentation of face stimuli (Bentin et al., 1996), responding to faces on a categorical rather than an individual level (e.g., Eimer, 2000; Herzmann et al., 2004; Tanaka et al., 2006; but see Caharel et al., 2005; Jemel et al., 2010). Several studies found a slowed N170 response to novel faces in individuals with ASDs (McPartland et al., 2004; O'Connor et al., 2007), suggesting differences in the structural phase of face processing that may be correlated with face recognition skills (McPartland et al., 2004). Recent work suggested that latency differences are not observed when attention is directed to the eye region, although subtle atypicalities in holistic or configural processing may remain (Webb et al., 2010). 
Current study
In Experiment 1 of the current study, we tested 23 children with ASDs, as well as 23 neurotypically developed (NT) children, on a specialized face detection task. We presented arrays of 4 to 64 photographs on a computer touchscreen, each depicting a single object. Figure 1 (top) shows examples of the stimuli used. Participants were asked to touch the face photograph quickly. In Experiment 2, search was for another named target object, a lion or dog face, or another basic-level category, a car or house. Figure 1 (bottom) shows examples of these stimuli. In each block of trials, a single target category was to be touched. Participants were urged to be accurate and rapid. 
Figure 1. Examples of displays.
Our general conclusion is that children with ASDs are about as fast as NT children and, like them, are much faster at finding a human face than finding other categories. Nevertheless, they still show a small but significantly larger set size dependence in face detection than do the NT children. We will suggest that this differentiation is related to the RHT mechanism dichotomy (see Summary and Discussion). A preliminary report of this study was presented to the Vision Sciences Society, 2021 (Abassi Abu Rukab, Khayat, & Hochstein, 2021). 
Methods
Participants
We tested 23 NT children (age, average ± standard deviation: 14.6 ± 3.0 years) and 23 children with ASDs (14.9 ± 2.8 years), matched for age as well as IQ, as follows. Participants were administered a set of cognitive assessments that evaluated general reasoning skills with the standard Block Design task (block design, WISC-III or WAIS-IV; Wechsler, 1997, 2003), yielding ASD scores of 27.2 ± 10.6 (7.7 ± 3.1 scaled) versus NT scores of 32.5 ± 8.5 (9.0 ± 1.9 scaled); t tests, p = 0.095 and p = 0.071, respectively. Average Autism Spectrum Quotient (AQ) scores, as assessed by school staff and caregivers under observation of the experimenter, differed significantly between the ASD and NT groups (27.2 ± 5.7 and 13.4 ± 3.4, respectively; t test, p < 0.001). The children with ASDs were students in a special-needs school in East Jerusalem, which admits only children diagnosed with autism, as determined by a national committee on special-education criteria. The NT children were from neighboring communities. Some special-needs students were higher functioning than others, as reflected also in their AQ scores. We excluded from participation any student with a motor deficiency, as judged by a school occupational therapist. Data of one student were excluded because his responses were very slow and out of the range of the other participants. 
Ethics permission was obtained from the Israel Ministry of Education, Chief Scientist, and from the school principal and parents. Individual data are tabulated in Table 1 (NT) and Table 2 (ASD). All data are stored without participant identification. 
Table 1. Data for neurotypical children. Averaged data include standard deviations.
Table 2. Data for children with ASD. Averaged data include standard deviations.
Stimuli
Displays with 4 (2 × 2), 16 (4 × 4), 36 (6 × 6), or 64 (8 × 8) images of different category objects were presented, including one target picture of a human face (Experiment 1) or of a car, house, dog face, or lion face (Experiment 2). Each image measured 1.80 × 1.80 cm (visual angle of ∼2.5° based on an estimated distance of 40 cm from the screen), and spacing between images was 2.5 mm. A target image was always present; response time (RT) was measured from display presentation to the time the participant touched the target item on the touchscreen. The following link is to a video of a child performing the task (https://youtube.com/shorts/PqCj-xbDgNk?). 
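For readers checking the display geometry, the quoted visual angle follows from the standard relation between image size and viewing distance. Below is a minimal sketch (Python, not part of the original study) assuming the 1.80-cm image width and the estimated 40-cm viewing distance given above.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a stimulus of a given size at a given viewing distance."""
    return math.degrees(2 * math.atan((size_cm / 2) / distance_cm))

# 1.80-cm images viewed from an estimated 40 cm subtend roughly 2.6 degrees,
# consistent with the ~2.5 deg value quoted in the text.
print(round(visual_angle_deg(1.80, 40.0), 2))
```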
Apparatus
Displays were presented to the children on an HP laptop with a 14-in. touchscreen (1,920 × 1,080 pixels). The laptop screen could be rotated so that the keyboard was hidden. All participants used the same touchscreen laptop computer for these experiments, so that conditions were equivalent for all. The surrounding environment was a quiet room, usually in the school, and was likewise kept similar for all participants. 
Data analysis
We performed mixed analysis of variance (ANOVA) tests on RTs for the different set sizes and the two participant groups, looking also for interaction terms indicating possible differences in set size slope. Additionally, post hoc Student t tests were performed between participant groups for RTs, human face search slopes, and AQ and IQ scores. 
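As an illustration only, the following sketch shows how a mixed ANOVA and a post hoc group comparison of this kind could be run in Python with the pingouin package; the long-format data frame, file name, and column names are hypothetical and are not the authors' actual analysis code.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x set size,
# columns: subject, group ('ASD'/'NT'), set_size, rt (mean RT in ms).
df = pd.read_csv("search_rt_long.csv")  # hypothetical file name

# Mixed ANOVA: group as the between-subject factor, set size as the within-subject factor.
aov = pg.mixed_anova(data=df, dv="rt", within="set_size",
                     subject="subject", between="group")
print(aov)  # main effects of group and set size, plus the Group x Set Size interaction

# Post hoc t test between groups on per-participant set size slopes (ms/item).
slopes = (df.groupby(["subject", "group"])
            .apply(lambda d: np.polyfit(d["set_size"], d["rt"], 1)[0])
            .rename("slope").reset_index())
asd = slopes.loc[slopes["group"] == "ASD", "slope"]
nt = slopes.loc[slopes["group"] == "NT", "slope"]
print(pg.ttest(asd, nt))  # output includes Cohen's d as an effect size
```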
Experiment 1
Task and experimental design
Participants were presented with an array of photographs and asked to find and quickly touch the picture of a human face while not touching other pictures. The experiment had 40 trials, going gradually from array size 2 × 2 to 4 × 4, 6 × 6, and, finally, 8 × 8 images, with 10 trials per array size. Before beginning the session, five training trials were administered, displaying a single centered image of a face, the target category, and the child was asked to touch the image. This ensured that the child understood the task and the nature of the target category. Images always had the same image size and the same interitem spacing, and they were always clustered around the center of the screen, where a fixation cross appeared before each trial. Targets were randomly placed in the search array. If the target was not found within 30 seconds, the trial was aborted and excluded from the analysis. If an incorrect picture was touched, we allowed another 20 seconds for a second touch on the correct picture; otherwise, the response was deemed incorrect. The occurrence of incorrect trials was less than one per thousand and, for double touches, less than 2%. There were no errors for human face detection trials. 
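The trial structure just described can be summarized schematically; the sketch below (illustrative Python with hypothetical names, not the authors' experiment code) encodes the fixed block order, the trials per set size, and the timing rules.

```python
import random

SET_SIZES = [4, 16, 36, 64]      # 2x2, 4x4, 6x6, 8x8 arrays, presented in this fixed order
TRIALS_PER_SIZE = 10             # 40 trials per block in total
TRIAL_TIMEOUT_S = 30             # trial aborted (and excluded) if no touch within 30 s
SECOND_TOUCH_WINDOW_S = 20       # after a wrong touch, 20 s allowed to touch the correct picture

def build_block(seed=None):
    """Trial list for one block: set sizes in increasing order, target position random."""
    rng = random.Random(seed)
    return [{"set_size": n, "target_index": rng.randrange(n)}
            for n in SET_SIZES for _ in range(TRIALS_PER_SIZE)]

print(len(build_block()))  # 40 trials
```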
All images were chosen from the Google image database. Figure 1 (top) shows examples of the stimulus displays, with different numbers of images and different targets. 
Results
We tested target detection in children with ASDs in a special-needs school, as well as age/IQ-matched NT controls. Tables 1 and 2 in the Methods section show gender, age, IQ, and AQ for each child in the two groups, respectively. 
The central goal of the current study is visual search for human faces. Does the reported deficit in face recognition of children with ASDs imply that they will have difficulty in face detection? Figure 2 compares reaction time versus set size plots for the NT (blue) and ASD (orange) participant groups. The left column of Table 3 presents average RTs for children with ASDs and those with NT development, as well as the intercept and slope of the Figure 2 plots. A 2 × 4 mixed ANOVA of RTs with group as the between-subject variable and set size in the human face search task as the within-subject variable revealed significant main effects of both group (F(1, 44) = 13.7, p < 0.001) and image set size (F(3, 132) = 180.3, p < 0.001). There is also a significant interaction effect of Group × Set Size (F(3, 132) = 13.3, p < 0.001), reflecting the larger dependence on set size for the ASD group or, equivalently, the larger between-group RT difference at larger set sizes (see Figure 2). The important result is that the set size slope for faces—the goal of the current study—is significantly different for the participant groups, being 11.3 ± 0.8 ms/item for the NT group and 18.8 ± 1.4 ms/item for the ASD group (t test, p < 0.001) with a large effect size (Cohen's d = 1.31). If we take into account only the three larger set size points, the slopes are reduced and more similar (NT: 9.9 ± 4.6 ms/item; ASD: 14.4 ± 6.5 ms/item), although still significantly different (t test, p < 0.005) with a large effect size (Cohen's d = 1.27). Thus, a major difference between the ASD and NT groups was in their detection of human faces, with a significantly greater slope for the ASD group, suggesting that for them, faces may not “pop out” as easily as they do for the NT group. 
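As a rough arithmetic check (ours, not the authors'), the reported effect size can be approximately recovered from the group means and standard errors above, assuming n = 23 per group and converting standard errors back to standard deviations; rounding of the reported values means the result only approximates the quoted d = 1.31.

```python
import math

n = 23  # participants per group (from Methods)

# Reported face-search set size slopes (mean +/- standard error, ms/item).
mean_nt, se_nt = 11.3, 0.8
mean_asd, se_asd = 18.8, 1.4

# Convert standard errors back to standard deviations: SD = SE * sqrt(n).
sd_nt = se_nt * math.sqrt(n)
sd_asd = se_asd * math.sqrt(n)

# Pooled SD for equal group sizes, then Cohen's d.
sd_pooled = math.sqrt((sd_nt ** 2 + sd_asd ** 2) / 2)
d = (mean_asd - mean_nt) / sd_pooled
print(round(d, 2))  # ~1.4, in line with the reported large effect (d = 1.31) given rounding
```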
Figure 2. Response times of participants as a function of image set size, showing group slopes in the human face visual search task for the ASD and NT children groups. Each circle represents the average response time (RT) of a single participant; horizontal lines indicate the group average RT; error bars represent standard error of the mean; solid trendlines represent slopes fit to all data; dashed trendlines represent slopes fit to data from set sizes 16 to 64 (but displayed from set size 4). Data are shifted horizontally to facilitate visualization.
Table 3. Response time (RT) and set size dependence for the ASD and NT groups, for different target categories. RT is fastest and set size slope shallowest for face targets, for both participant groups. All values are averages and standard errors.
This difference is especially noteworthy because it brings the NT group clearly into the domain of feature search “pop-out,” as found previously by Hershler and Hochstein (2005, 2006), with the slight increase in set size slope being expected for children (see Donnelly et al., 2007). The inclusion of the somewhat steeper set size slope for the ASD group into the “pop-out” category is more questionable and is analyzed in the Discussion. Note that the data for the two groups are identical for the test with 4 (2 × 2) display items, indicating that the ASD group does not have a difficulty with touchscreen responses and supporting the conclusion that differences for larger displays and eccentric targets are perceptual, not motor. 
Slope dependence on AQ
The difference in face detection is accentuated when looking at the dependence of face detection slope on AQ score, as shown in Figure 3 (top). The two groups of participants are neatly separated in the graph, as expected, since this is the criterion for their placement in the special-needs school and for their selection for the current study. For the two groups together, the dependence is quite strong and positive (0.26 [ms/item]/AQ score). Nevertheless, for the ASD group alone, there is a negative dependence on AQ (−0.47 [ms/item]/AQ score), and for the NT group, there is little dependence (0.048 [ms/item]/AQ score). 
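The AQ-dependence values quoted above are simple linear regression coefficients of set size slope on AQ score. A small sketch with made-up numbers (not the study data) shows the computation and its units.

```python
import numpy as np
from scipy import stats

# Made-up illustrative values, NOT the study data: AQ score and face-search
# set size slope (ms/item) for each child in a pooled sample.
aq     = np.array([10, 12, 14, 15, 16, 25, 27, 29, 31, 33])
slopes = np.array([10, 11, 12, 11, 13, 17, 18, 19, 20, 21])

# Ordinary least-squares fit: the fitted slope has units (ms/item) per AQ point,
# the quantity quoted in the text (e.g., 0.26 [ms/item]/AQ score for both groups pooled).
fit = stats.linregress(aq, slopes)
print(round(fit.slope, 2), round(fit.rvalue, 2))
```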
Figure 3. Dependence of face detection set size slope on AQ score (top) and on IQ score (middle) for the NT group (blue) and ASD group (orange). Bottom: relationship between IQ and AQ scale scores for all study participants.
Figure 3 (middle) plots the dependence of face detection slope on IQ for the NT (blue) and ASD (orange) groups. Note that the data points in Figure 3 (middle) for the two participant groups are scattered across the graph, reflecting their matched IQ scores. In contrast to the dependence on AQ (Figure 3, top), there is little dependence of search slope on IQ (Figure 3, middle; −0.5 [ms/item]/IQ score). The slight decline with IQ might reflect the slight negative relationship between IQ and AQ in our particular groups of participants (Figure 3, bottom; −0.05 IQ score/AQ score). 
Experiment 2
Task and experimental design
The experiment was divided into four blocks of 40 trials each, with different target images (car, house, lion face, or dog face). As in Experiment 1, array sizes were 2 × 2, 4 × 4, 6 × 6, and 8 × 8 images, always with the same image size and same interitem spacing, always clustered around the center of the screen, where a fixation cross appeared before each trial. Targets were randomly placed in the search array and images were chosen from the Google image database. Figure 1 (bottom) shows examples of the stimulus displays, with different numbers of images and different targets. 
The stimuli were again presented in increasing set size order, from 2 × 2 to 8 × 8, and again followed five training trials displaying a single centered image of the target category, with the child asked to touch the image. This ensured that the child understood the task and the nature of the target category. 
If the target was not found within 30 seconds, the trial was aborted and excluded from the analysis. If an incorrect picture was touched, we allowed another 20 seconds for a second touch on the correct picture; otherwise, the response was deemed incorrect. The occurrence of incorrect trials was less than one per thousand and, for double touches, less than 2%. 
Results
RTs for the two groups and for visual search for the different targets are plotted in Figure 4 as a function of set size—the number of pictures on the screen (from 4 to 64; Figure 4, top: NT; center: ASD). RTs increase with set size for all target types. 
Figure 4. Response time versus set size for each target category and for the two participant groups: top, NT; middle, ASD. Bottom: comparison of the two groups' slopes: NT, abscissa; ASD, ordinate. Note that all points are above the line of equality.
Table 3 (top two rows) presents RTs for the children with ASDs and those with NT. We found a significant difference between the two groups in their average detection speed, measured as the time between array presentation and their successfully touching the target. Children in the ASD group were consistently slower in detecting the target, averaging 2,532 ms versus 2,054 ms for the NT group (t test, p < 0.001; see Table 3, top two rows). 
A 2 × 4 mixed ANOVA of RTs with group as the between-subject variable and set size as the within-subject variable—run separately for dog face, lion face, house, and car—revealed significant main effects of group for dog, lion, and car targets (dog: F(1, 44) = 20.6, p < 0.001; lion: F(1, 44) = 11.9, p < 0.002; car: F(1, 44) = 5.9, p < 0.02), but not for house (F(1, 44) = 3.6, p = 0.064), and significant main effects of image set size for all targets (dog: F(3, 132) = 142.3, p < 0.001; lion: F(3, 132) = 171.7, p < 0.001; car: F(3, 132) = 158.4, p < 0.001; house: F(3, 132) = 141.3, p < 0.001). There is a significant interaction effect of Group × Set Size only for dog faces (F(3, 132) = 3.9, p < 0.02), as there was for human faces (Experiment 1), but not for the other targets (lion: F(3, 132) = 1.9, p = 0.13; car: F(3, 132) = 0.5, p = 0.67; house: F(3, 132) = 1.5, p = 0.19). The interaction for the two face targets (human, dog) reflects the significantly larger dependence on set size for the ASD group or, equivalently, the larger between-group RT difference at larger set sizes. The lack of interaction for nonface targets reflects the nonsignificant difference in set size slopes for these targets. 
A linear trendline is drawn for each data set in Figure 4 (top and middle), and its parameters (intercept and slope) are presented in Table 3 (middle two and bottom two rows, respectively). The intercepts are significantly different—again reflecting the slower responses of the ASD group—with averages of 851 ± 46 ms versus 672 ± 36 ms for the ASD and NT groups (t test, p < 0.01). In addition, the RT set size slopes depend on the search target, with the slopes for dog and lion faces being closer to that for human faces and those for cars, and especially houses, considerably larger. 
Figure 4 (bottom) displays the search size slopes for different targets for the ASD group as a function of the slopes for the NT group. The points are all above the line of equality, signifying that search slopes for the ASD group are greater than those for the NT group. 
A 2 × 5 mixed ANOVA of search slopes with group as the between-subject variable and target item category as the within-subject variable revealed significant main effects of both group (F(1, 44) = 7.3, p < 0.01) and target item category (F(5, 220) = 67.3, p < 0.001), but no significant Group × Target Item Category interaction (F(5, 220) = 1.82, p = 0.1). The significant effect of group reflects the finding that in all five categories, the set size slopes were larger for the ASD group than for the NT group, as shown in Figure 4 (bottom). The significant effect of target reflects the significant differences found between the slopes for different targets, perhaps reflecting different levels of expertise of the children with different object categories (Hershler & Hochstein, 2009). The slopes fall into four groups: human faces, shallowest slopes (<20 ms/item); lion and dog faces (20–35 ms/item); cars (65–80 ms/item); and houses (>100 ms/item). Presumably, children of both groups are more expert at recognizing dog and lion faces than cars and have more expertise with detecting cars than houses. Significantly, all children are best at detecting human faces. 
Taken over the full gamut of search targets, there is a consistent but only small difference in set size slope, with the average in the ASD group being 58 ± 5 ms/item (68 without faces) versus 48 ± 4 ms/item (57) for the NT group (t test, p < 0.01). The average slopes, without faces, of both groups suggest a general slow, serial-like search, similar to Treisman's focused attention or conjunction search. The first conclusion is thus that the ASD group is not faster at category search than the NT group, as had been suggested by some previous studies (see Introduction). 
There are, however, selective differences for specific targets, as follows. There are no significant differences between the slopes of the two groups for cars (77 ± 8 ms/item vs. 71 ± 6 ms/item; t test, p = 0.26), houses (109 ± 9 ms/item vs. 133 ± 15 ms/item; t test, p = 0.09), and lion faces (25 ± 2 ms/item vs. 31 ± 3 ms/item; t test, p = 0.06), and a small but significant effect was found for the search slopes of the two groups for dog faces (22 ± 2 ms/item vs. 29 ± 3 ms/item; t test, p < 0.02). 
Slope dependence on AQ
In Figure 5, we compare the set size dependencies on AQ for the different target categories (Figure 5, left, displays all results on the same scale for comparison; Figure 5, right, displays enlarged scales for better viewing of the AQ dependencies). The largest set size slope is found for the house category, where the AQ dependence is also largest. This is followed by the car category, and then by lion and dog faces, with quite low set size dependencies and little dependence on AQ. Finally, the smallest set size slopes are for human faces, where the dependence on AQ, which is evident in Figure 3 (top) and Figure 5 (right), cannot be seen on the scale of Figure 5 (left) (see also Table 3 and Figure 4). 
Figure 5. Set size slope dependence on AQ score for different targets, comparing the NT (blue) and ASD (orange) participants. All graphs on the right are on the same y-axis scale to allow direct comparison; those on the left are on larger scales to visualize the slopes.
We do not have an explanation for the seemingly negative slopes with AQ found in some cases, when looking at the two groups separately. The strong negative slopes for human face search in the ASD group are mainly due to the impact of three outlier individuals with large slopes and small AQ, and a smaller effect for lion faces is due to one such individual. 
Summary and discussion
We tested visual search for high-level photograph categories in children with ASDs and their NT developed peers. As discussed in the Introduction, deficits in face recognition have been implicated as being at the core of the social impairments of people with ASDs, and the DSM-5 diagnostic criteria for ASDs include items related to “deficits in nonverbal communicative behaviors used for social interaction, … from … eye contact … to … facial expressions” (American Psychiatric Association, 2013). Thus, we were especially interested in the abilities of children with ASDs in visual search for faces. 
In Experiment 1, we found that the face detection deficit, if any, was minor, as follows. Faces were detected faster than any other category by both participant groups. In addition, the dependence of search on the number of photographs to be searched, the set size dependence, was by far the shallowest for human face targets, for both participant groups, when comparing Experiments 1 and 2. There was a small, although significant, difference between the participant groups in their set size slopes for face search, and this difference would classically be accounted for by assuming a more feature-like, parallel search for NT children and a more difficult, serial-like search for ASD children. This account would be supported by a significant trend of increased set size slope with increased AQ score, although this trend depended largely on the above between-group difference rather than on a within-group dependence. 
An anonymous reviewer queried whether there is substantial perceptual learning during performance of the task. Learning would tend to improve performance for later trials, which in our design had larger set sizes, leading to reduced set size slopes. If such learning existed and was weaker for the ASD group, it could account for the difference in face detection slopes and would suggest an interesting, although different, between-group effect. We compared performance for each target category for the first three and last three trials of the same test array size. There was no difference in change between these sets of trials for the two groups (p = 0.24), and performance for the last trials was actually slower—the opposite of a learning effect—eliminating this alternative interpretation. 
Is the set size slope difference sufficient to exclude ASD face detection from the fast parallel search category, or does it only suggest more difficult pop-out? In fact, the various versions of Guided Search have suggested a continuum between serial and parallel search, so that we need not make a categorical judgment at all (Wolfe, 2021; Wolfe, Cave, & Franzel, 1989). There are two reasons to still judge face detection of the ASD group as closer to parallel search. First, there is a great difference between the slopes even of the ASD group for face targets versus other targets, as seen in Figure 4. Interestingly, lion and dog faces are in between human faces and the other categories. Furthermore, the search display images were always presented as close to fixation as allowed by the number of images, so that the 2 × 2 trials had all images with foveal presentation, whereas the other display sizes included images, and usually also the target image, in the periphery. Leaving out the 2 × 2 points, the slopes are considerably shallower (ASD: 14.2 ms/item; NT: 9.7 ms/item). The exceptional nature of the 2 × 2 point is supported by its lying below the trendline of the four data points and by the ASD fit without the 2 × 2 point (dashed lines in Figure 2) being statistically stronger, the Pearson R improving from 0.96 (p = 0.03) to 1.0 (p = 0.005), supporting its exclusion. Thus, considering this and the great difference between the slopes even of the ASD group for face targets versus other targets, it is more parsimonious to conclude that face detection for ASD children, too, is basically closer to a parallel visual search phenomenon. This conclusion is consistent with the finding that children with autism attend rapidly to faces (Fischer et al., 2014) and that adolescents with ASDs are able to perceive Mooney faces, suggesting that their holistic face detection mechanisms are intact (Naumann et al., 2018) and only perhaps quantitatively different from those of NT individuals. 
In summary, the fact that the difference between groups is marginal precludes any determination that children with ASDs detect faces in a categorically degraded fashion. Rather, we must conclude that autism-related deficits in face perception (Dawson et al., 2005; Tanaka & Sung, 2016; Weigelt, Koldewyn, & Kanwisher, 2012) are in recognition of face identity and/or emotion, rather than in face detection. This conclusion has important implications for the source and magnitude of face-related deficits in ASD children. 
We also tested visual search for other categories in Experiment 2, including dog and lion faces, cars, and houses. In all cases, children with ASDs were slower than NT children. The dependence of search set size slope on category was dramatic and similar in the two groups, going from steepest for houses (109–133 ms/item), followed by cars (70–77 ms/item), and then lion faces (25–30 ms/item), dog faces (21–29 ms/item), and finally, as mentioned, human faces (11.3–18.8 ms/item). Thus, children with ASDs are somewhat slower than NT children, but the general set size slope dependence on target is maintained and consistent in detail. We note that visual search was not faster for the ASD group, for any of these high-level categories, as had been suggested by some studies for conjunctive visual search (Plaisted, O'Riordan, & Baron-Cohen, 1998). 
In ongoing work, we are testing visual search where the target is a superordinate category (e.g., animals), with large within-category variability. There we find larger between-group differences, as will be discussed in a forthcoming paper, where we analyze ASDs and categorization (see Alderson-Day & McGonigle-Chambers, 2011; Gastgeb, Strauss, & Minshew, 2006; Naigles et al., 2013; Shulman, Yirmiya, & Greenbaum, 1995). 
RHT differentiates between rapid vision at a glance providing gist perception and slower vision with scrutiny allowing perception of scene details. It was found that visual perception deficits of neglect syndrome are related mainly to scene detail perception and that gist perception is largely spared (Hochstein et al., 2015; Pavlovskaya et al., 2002, 2005). It has also been found that rapid feature search is less affected by ASDs than conjunction search (Keehn & Joseph, 2016). Similarly, there are indications that ensemble perception, another high-level global perceptual mechanism, might be spared in ASDs (Corbett et al., 2016; Karaminis et al., 2017; Lowe et al., 2018; Maule et al., 2017; Rhodes et al., 2014; Sweeny et al., 2015; van der Hallen et al., 2017). RHT claims that rapid feature search is part of vision at a glance gist perception and, as such, is related to high cortical-level representations. Thus, rapid search includes search for a face—a built-in category (Hershler & Hochstein, 2005, 2006). The conclusion is that face detection is rapid and, consistent with RHT, less affected by ASD. 
Acknowledgments
The authors thank the participants of all ages and of the two groups, as well as their parents and teachers, for facilitating these tests. We thank the Ministry of Education Chief Scientist Office and the school principal for granting permission for this study. Thanks to Yuri Maximov for assistance with programming and data analysis. 
Supported by a grant from the Israel Science Foundation. 
Dedicated to the memory of Lily Safra, a great supporter of brain research. 
Commercial relationships: none. 
Corresponding author: Shaul Hochstein. 
Email: shaulhochstein@gmail.com. 
Address: Hebrew University, ELSC & Life Sciences Institute, Safra Campus, Jerusalem, Israel. 
References
Abassi Abu Rukab, S., Khayat, N., & Hochstein, S. (2021). High level feature search in autism spectrum disorder. Journal of Vision, 21, I14, https://doi.org/10.1167/jov.21.9.2499. [CrossRef]
Adler, S. A., & Orprecio, J. (2006). The eyes have it: visual pop-out in infants and adults. Developmental Science, 9(2), 189–206, https://doi.org/10.1111/j.1467-7687.2006.00479. [CrossRef]
Ahissar, M., & Hochstein, S. (1997). Task difficulty and the specificity of perceptual learning. Nature, 387(6631), 401–406. [CrossRef]
Ahissar, M., & Hochstein, S. (2004). The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Science, 8(10), 457–464. [CrossRef]
Alderson-Day, B., & McGonigle-Chambers, M. (2011). Is it a bird? Is it a plane? Category use in problem-solving in children with autism spectrum disorders. Journal of Autism and Developmental Disorders, 41, 555–565. [CrossRef]
Alexander, R. G., & Zelinsky, G. J. (2011). Visual similarity effects in categorical search. Journal of Vision, 11, 1–15, https://doi.org/10.1167/11.8.9. [CrossRef]
American Psychiatric Association. (2013). Autism spectrum disorder. In Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author, https://doi.org/10.1176/appi.books.9780890425596.
Baron-Cohen, S. (1995). Mindblindness. Cambridge: MIT Press.
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8(6), 551–565, https://doi.org/10.1162/jocn.1996.8.6.551. [CrossRef]
Booth, R., & Happé, F. (2018). Evidence of reduced global processing in autism spectrum disorder. Journal of Autism and Developmental Disorders, 48(4), 1397–1408, https://doi.org/10.1007/s10803-016-2724-6. [CrossRef]
Bott, L., Brock, J., Brockdorff, N., Boucher, J., & Lamberts, K. (2006). Perceptual similarity in autism. The Quarterly Journal of Experimental Psychology, 59(7), 1237–1254, https://doi.org/10.1080/02724980543000196. [CrossRef]
Burns, C. O., Dixon, D. R., Novack, M., & Granpeesheh, D. (2017). A systematic review of assessments for sensory processing abnormalities in autism spectrum disorder. Review Journal of Autism and Developmental Disorders, 4(3), 209–224, https://doi.org/10.1007/s40489-017-0109-1. [CrossRef]
Caharel, S., Courtay, N., Bernard, C., Lalonde, R., & Rebaï, M. (2005). Familiarity and emotional expression influence an early stage of face processing: An electrophysiological study. Brain and Cognition, 59(1), 96–100, https://doi.org/10.1016/j.bandc.2005.05.005. [CrossRef]
Corbett, J. E., Venuti, P., & Melcher, D. (2016). Perceptual averaging in individuals with autism spectrum disorder. Frontiers in Psychology, 7, 1735, https://doi.org/10.3389/fpsyg.2016.01735. [CrossRef]
Crouzet, S. M., Kirchner, H., & Thorpe, S. J. (2010). Fast saccades toward faces: Face detection in just 100 ms. Journal of Vision, 10, 16.1–17, https://doi.org/10.1167/10.4.16. [CrossRef]
Dawson, G., Webb, S. J., & McPartland, J. (2005). Understanding the nature of face processing impairment in autism: Insights from behavioral and electrophysiological studies. Developmental Neuropsychology, 27, 403–424. [CrossRef]
Day, M. C. (1978). Visual search by children: The effect of background variation and the use of visual cues. Journal of Experimental Child Psychology, 25, 1–16. [CrossRef]
Dellapiazza, F., Vernhet, C., Blanc, N., Miot, S., Schmidt, R., & Baghdadli, A. (2018). Links between sensory processing, adaptive behaviours, and attention in children with autism spectrum disorder: A systematic review. Psychiatry Research, 270, 78–88, https://doi.org/10.1016/j.psychres.2018.09.023. [CrossRef]
Di Giorgio, E., Turati, C., Altoè, G., & Simion, F. (2012). Face detection in complex visual displays: An eye-tracking study with 3- and 6-month-old infants and adults. Journal of Experimental Child Psychology, 113(1), 66–77, https://doi.org/10.1016/j.jecp.2012.04.012. [CrossRef]
Doherty, B. R., Charman, T., Johnson, M. H., Scerif, G., Gliga, T., & BASIS Team. (2018). Visual search and autism symptoms: What young children search for and co-occurring ADHD matter. Developmental Science, 21(5), e12661, https://doi.org/10.1111/desc.12661. [CrossRef]
Donnelly, N., Cave, K., Greenway, R., Hadwin, J. A., Stevenson, J., & Sonuga-Barke, E. (2007). Visual search in children and adults: Top-down and bottom-up mechanisms. The Quarterly Journal of Experimental Psychology, 60(1), 120–136. [CrossRef]
DuBois, D., Lymer, E., Gibson, B. E., Desarkar, P., & Nalder, E. (2017). Assessing sensory processing dysfunction in adults and adolescents with autism spectrum disorder: A scoping review. Brain Sciences, 7(8), 108, https://doi.org/10.3390/brainsci7080108.
Egeth, H., Jonides, J., & Wall, S. (1972). Parallel processing of multielement displays. Cognitive Psychology, 3(4), 674–698, https://doi.org/10.1016/0010-0285(72)90026-6. [CrossRef]
Eimer, M. (2000). Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clinical Neurophysiology, 111(4), 694–705, https://doi.org/10.1016/s1388-2457(99)00285-0. [CrossRef]
Elsabbagh, M., Gliga, T., Pickles, A., Hudry, K., Charman, T., Johnson, M. H., & BASIS Team. (2013). The development of face orienting mechanisms in infants at-risk for autism. Behavioural Brain Research, 251, 147–154, https://doi.org/10.1016/j.bbr.2012.07.030. [CrossRef]
Fischer, J., Koldewyn, K., Jiang, Y. V., & Kanwisher, N. (2014). Unimpaired attentional disengagement and social orienting in children with autism. Clinical Psychological Science, 2(2), 214–223, https://doi.org/10.1177/2167702613496242. [CrossRef]
Frith, U. (2001). Mind blindness and the brain in autism. Neuron, 32, 969–979. [CrossRef]
Frith, U., & Happé, F. (1994). Autism: Beyond “theory of mind”. Cognition, 50(1–3), 115–132.
Gadgil, M., Peterson, E., Tregellas, J., Hepburn, S., & Rojas, D. C. (2013). Differences in global and local level information processing in autism: An fMRI investigation. Psychiatry Research: Neuroimaging, 213(2), 115–121. [CrossRef]
Gastgeb, H., Strauss, M., & Minshew, N. (2006). Do individuals with autism process categories differently? The effect of typicality and development. Child Development, 77, 1717–1729. [CrossRef]
Gerhardstein, P., & Rovee-Collier, C. (2002). The development of visual search in infants and very young children. Journal of Experimental Child Psychology, 81(2), 194–215. [CrossRef]
Gliga, T., Bedford, R., Charman, T., Johnson, M. H., & The BASIS Team. (2015). Enhanced visual search in infancy predicts emerging autism symptoms. Current Biology, 25, 1727–1730, https://doi.org/10.1016/j.cub.2015.05.01. [CrossRef]
Gliga, T., Elsabbagh, M., Andravizou, A., & Johnson, M. H. (2009). Faces attract infants’ attention in complex displays. Infancy, 14, 550–562. [CrossRef]
Golarai, G., Grill-Spector, K., & Reiss, A. L. (2006). Autism and the development of face processing. Clinical Neuroscience Research, 6, 145–160. [CrossRef]
Goren, C. C., Sarty, M., & Wu, P. Y. (1975). Visual following and pattern discrimination of face-like stimuli by newborn infants. Pediatrics, 56(4), 544–549. [CrossRef]
Happé, F. G., & Booth, R. D. (2008). The power of the positive: Revisiting weak coherence in autism spectrum disorders. Quarterly Journal of Experimental Psychology (2006), 61(1), 50–63, https://doi.org/10.1080/17470210701508731. [CrossRef]
Happé, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5–25. [CrossRef]
Harms, M. B., Martin, A., & Wallace, G. L. (2010). Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies. Neuropsychology Review, 20, 290–322.
Hershler, O., & Hochstein, S. (2005). At first sight: A high-level pop out effect for faces. Vision Research, 45, 1707–1724.
Hershler, O., & Hochstein, S. (2006). With a careful look: Still no low-level confound to face pop out. Vision Research, 46, 3028–3035.
Hershler, O., & Hochstein, S. (2009). The importance of being expert: Top-down attentional control in visual search with photographs. Attention, Perception, & Psychophysics, 71(7), 1478–1486, https://doi.org/10.3758/APP.71.7.1478.
Herzmann, G., Schweinberger, S. R., Sommer, W., & Jentzsch, I. (2004). What's special about personally familiar faces? A multimodal approach. Psychophysiology, 41(5), 688–701, https://doi.org/10.1111/j.1469-8986.2004.00196.x.
Hochstein, S. (2020). The gist of Anne Treisman's revolution. Attention, Perception, & Psychophysics, 82(1), 24–30, https://doi.org/10.3758/s13414-019-01797-2.
Hochstein, S., & Ahissar, M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36(5), 791–804.
Hochstein, S., Pavlovskaya, M., Bonneh, Y. S., & Soroker, N. (2015). Global statistics are not neglected. Journal of Vision, 15, 7, https://doi.org/10.1167/15.4.7.
Hommel, B., Li, K. Z. H., & Li, S.-C. (2004). Visual search across the life span. Developmental Psychology, 40, 545–558.
Jemel, B., Mottron, L., & Dawson, M. (2006). Impaired face processing in autism: Fact or artifact? Journal of Autism and Developmental Disorders, 36, 91–106.
Jemel, B., Schuller, A. M., & Goffaux, V. (2010). Characterizing the spatio-temporal dynamics of the neural events occurring prior to and up to overt recognition of famous faces. Journal of Cognitive Neuroscience, 22(10), 2289–2305, https://doi.org/10.1162/jocn.2009.21320.
Joseph, R. M., Keehn, B., Connolly, C., Wolfe, J. M., & Horowitz, T. S. (2009). Why is visual search superior in autism spectrum disorder? Developmental Science, 12(6), 1083–1096.
Kaldy, Z., Giserman, I., Carter, A. S., & Blaser, E. (2016). The mechanisms underlying the ASD advantage in visual search. Journal of Autism and Developmental Disorders, 46(5), 1513–1527.
Karaminis, T., Neil, L., Manning, C., Turi, M., Fiorentini, C., Burr, D., & Pellicano, E. (2017). Ensemble perception of emotions in children with autism is similar to typically developing children. Developmental Cognitive Neuroscience, 24, 51–62.
Keehn, B., & Joseph, R. M. (2016). Slowed search in the context of unpaired grouping in autism: Evidence from multiple conjunction search. Autism Research, 9(3), 333–339, https://doi.org/10.1002/aur.1534.
Koldewyn, K., Jiang, Y. V., Weigelt, S., & Kanwisher, N. (2013). Global/local processing in autism: Not a disability, but a disinclination. Journal of Autism and Developmental Disorders, 43(10), 2329–2340.
Leopold, D. A., & Rhodes, G. (2010). A comparative view of face perception. Journal of Comparative Psychology, 124(3), 233–251, https://doi.org/10.1037/a00119460.
Lieder, I., Adam, V., Frenkel, O., Jaffe-Dax, S., Sahani, M., & Ahissar, M. (2019). Perceptual bias reveals slow-updating in autism and fast-forgetting in dyslexia. Nature Neuroscience, 22(2), 256–264.
Lindor, E., Rinehart, N., & Fielding, J. (2018). Superior visual search and crowding abilities are not characteristic of all individuals on the autism spectrum. Journal of Autism and Developmental Disorders, 48(10), 3499–3512, https://doi.org/10.1007/s10803-018-3601-2.
Lowe, M. X., Stevenson, R. A., Barense, M. D., Cant, J. S., & Ferber, S. (2018). Relating the perception of visual ensemble statistics to individual levels of autistic traits. Attention, Perception, & Psychophysics, 80, 1667–1674, https://doi.org/10.3758/s13414-018-1580-1.
Marciano, H., Gal, E., Kimchi, R., Hedley, D., Goldfarb, Y., & Bonneh, Y. S. (2021). Visual detection and decoding skills of aerial photography by adults with autism spectrum disorder (ASD). Journal of Autism and Developmental Disorders, 52(3), 1346–1360, https://doi.org/10.1007/s10803-021-05039-z.
Marcus, D. J., & Nelson, C. A. (2001). Neural bases and development of face recognition in autism. CNS Spectrums, 6, 36–59.
Maule, J., Stanworth, K., Pellicano, E., & Franklin, A. (2017). Ensemble perception of color in autistic adults. Autism Research, 10(5), 839–851, https://doi.org/10.1002/aur.1725.
McPartland, J., Dawson, G., Webb, S. J., Panagiotides, H., & Carver, L. J. (2004). Event-related brain potentials reveal anomalies in temporal processing of faces in autism spectrum disorder. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 45(7), 1235–1245, https://doi.org/10.1111/j.1469-7610.2004.00318.x.
Merrill, E. C., & Conners, F. A. (2013). Age-related interference from irrelevant distracters in visual feature search among heterogeneous distracters. Journal of Experimental Child Psychology, 115(4), 640–654, https://doi.org/10.1016/j.jecp.2013.03.013.
Michael, G. A., Lété, B., & Ducrot, S. (2013). Trajectories of attentional development: An exploration with the master activation map model. Developmental Psychology, 49(4), 615.
Minio-Paluello, I., Porciello, G., Pascual-Leone, A., & Baron-Cohen, S. (2020). Face individual identity recognition: A potential endophenotype in autism. Molecular Autism, 11, 81, https://doi.org/10.1186/s13229-020-00371-0.
Naigles, L. R., Kelley, E., Troyb, E., & Fein, D. (2013). Residual difficulties with categorical induction in children with a history of autism. Journal of Autism and Developmental Disorders, 43(9), 2048–2061, https://doi.org/10.1007/s10803-012-1754-y.
Naumann, S., Senftleben, U., Santhosh, M., McPartland, J., & Webb, S. J. (2018). Neurophysiological correlates of holistic face processing in adolescents with and without autism spectrum disorder. Journal of Neurodevelopmental Disorders, 10(1), 27.
O'Connor, K., Hamm, J. P., & Kirk, I. J. (2007). Neurophysiological responses to face, facial regions and objects in adults with Asperger's syndrome: An ERP investigation. International Journal of Psychophysiology, 63(3), 283–293.
O'Riordan, M., & Plaisted, K. (2001). Enhanced discrimination in autism. The Quarterly Journal of Experimental Psychology: Section A, 54(4), 961–979.
O'Riordan, M. A., Plaisted, K. C., Driver, J., & Baron-Cohen, S. (2001). Superior visual search in autism. Journal of Experimental Psychology: Human Perception and Performance, 27(3), 719–730, https://doi.org/10.1037/0096-1523.27.3.719.
Pavlovskaya, M., Ring, H., Groswasser, Z., & Hochstein, S. (2002). Searching with unilateral neglect. Journal of Cognitive Neuroscience, 14, 745–756.
Pavlovskaya, M., Soroker, N., Bonneh, Y. S., & Hochstein, S. (2015). Computing an average when part of the population is not perceived. Journal of Cognitive Neuroscience, 27, 1397–1411.
Pérez, D. L., Kennedy, D. P., Tomalski, P., Bölte, S., D'Onofrio, B., & Falck-Ytter, T. (2019). Visual search performance does not relate to autistic traits in the general population. Journal of Autism and Developmental Disorders, 49(6), 2624–2631.
Pierce, K., & Courchesne, E. (2000). Exploring the neurofunctional organization of face processing in autism. Archives of General Psychiatry, 57, 344–346.
Plaisted, K., O'Riordan, M., & Baron-Cohen, S. (1998). Enhanced visual search for a conjunctive target in autism: A research note. The Journal of Child Psychology and Psychiatry and Allied Disciplines, 39(5), 777–783, https://doi.org/10.1017/S0021963098002613.
Rhodes, G., Neumann, M. F., Ewing, L., & Palermo, R. (2014). Reduced set averaging of face identity in children and adolescents with autism. Quarterly Journal of Experimental Psychology, 68, 1394–1403.
Sasson, N. J. (2006). The development of face processing in autism. Journal of Autism and Developmental Disorders, 36, 381–394.
Schultz, R. T. (2005). Developmental deficits in social perception in autism: The role of the amygdala and fusiform face area. International Journal of Developmental Neuroscience, 23, 125–141.
Shulman, C., Yirmiya, N., & Greenbaum, C. (1995). From categorization to classification: A comparison among individuals with autism, mental retardation, and normal development. Journal of Abnormal Psychology, 104(4), 601–609.
Simmons, D. R., Robertson, A. E., McKay, L. S., Toal, E., McAleer, P., & Pollick, F. E. (2009). Vision in autism spectrum disorders. Vision Research, 49(22), 2705–2739, https://doi.org/10.1016/j.visres.2009.08.005.
Simpson, E. A., Maylott, S. E., Leonard, K., Lazo, R. J., & Jakobsen, K. V. (2019). Face detection in infants and adults: Effects of orientation and color. Journal of Experimental Child Psychology, 186, 17–32.
Stevenson, R. A., Philipp-Muller, A., Hazlett, N., Wang, Z. Y., Luk, J., Lee, J., . . . Barense, M. D. (2019). Conjunctive visual processing appears abnormal in autism. Frontiers in Psychology, 9, 2668, https://doi.org/10.3389/fpsyg.2018.02668.
Sweeny, T., Wurnitsch, N., Gopnik, A., & Whitney, D. (2015). Ensemble perception of size in 4-5-year-old children. Developmental Science, 18(4), 556–568.
Tanaka, J. W., Curran, T., Porterfield, A. L., & Collins, D. (2006). Activation of preexisting and acquired face representations: The N250 event-related potential as an index of face familiarity. Journal of Cognitive Neuroscience, 18(9), 1488–1497, https://doi.org/10.1162/jocn.2006.18.9.1488.
Tanaka, J. W., & Sung, A. (2016). The “eye avoidance” hypothesis of autism face processing. Journal of Autism and Developmental Disorders, 46(5), 1538–1552, https://doi.org/10.1007/s10803-013-1976-7.
Thompson, L. A., & Massaro, D. W. (1989). Before you see it, you see its parts: Evidence for feature encoding and integration in preschool children and adults. Cognitive Psychology, 21, 334–362.
Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12(1), 97–136.
Treisman, A., & Souther, J. (1985). Search asymmetry: A diagnostic for preattentive processing of separable features. Journal of Experimental Psychology: General, 114(3), 285–310.
Trick, L. M., & Enns, J. T. (1998). Life span changes in attention: The visual search task. Cognitive Development, 13(3), 369–386.
Van der Hallen, R., Lemmens, L., Steyaert, J., Noens, I., & Wagemans, J. (2017). Ensemble perception in autism spectrum disorder: Member identification versus mean-discrimination. Autism Research, 10(7), 1291–1299, https://doi.org/10.1002/aur.1767.
Visconti di Oleggio Castello, M., & Gobbini, M. I. (2015). Familiar face detection in 180ms. PLoS ONE, 10(8), e0136548, https://doi.org/10.1371/journal.pone.0136548.
Webb, S. J., Jones, E. J., Merkle, K., Murias, M., Greenson, J., Richards, T., . . . Dawson, G. (2010). Response to familiar faces, newly familiar faces, and novel faces as assessed by ERPs is intact in adults with autism spectrum disorders. International Journal of Psychophysiology, 77(2), 106–117, https://doi.org/10.1016/j.ijpsycho.2010.04.011.
Wechsler, D. A. (1997). Wechsler Adult Intelligence Scale—Third Edition (WAIS–III). San Antonio, TX: Psychological Corporation.
Wechsler, D. A. (2003). Wechsler Intelligence Scale for Children —Fourth Edition (WISC-IV). San Antonio, TX: Psychological Corporation.
Weigelt, S., Koldewyn, K., & Kanwisher, N. (2012). Face identity recognition in autism spectrum disorders: A review of behavioral studies. Neuroscience and Biobehavioral Reviews, 36(3), 1060–1084, https://doi.org/10.1016/j.neubiorev.2011.12.008.
Wolfe, J. M. (1998). What can 1,000,000 trials tell us about visual search? Psychological Science, 9(1), 33–39.
Wolfe, J. M. (2018). Visual search. In Wixted, J. (Ed.), Stevens’ handbook of experimental psychology and cognitive neuroscience: Vol. 2. Sensation, perception & attention, Chapter 13 (pp. 1–55). Hoboken, NJ: Wiley.
Wolfe, J. M. (2021). Guided Search 6.0: An updated model of visual search. Psychonomic Bulletin & Review, 28(4), 1060–1092.
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided Search: An alternative to the Feature Integration model for visual search. Journal of Experimental Psychology—Human Perception and Performance, 15, 419–433.
Wolfe, J. M., & Horowitz, T. S. (2017). Five factors that guide attention in visual search. Nature Human Behaviour, 1, 0058.
Woods, A. J., Göksun, T., Chatterjee, A., Zelonis, S., Mehta, A., & Smith, S. E. (2013). The development of organized visual search. Acta Psychologica, 143(2), 191–199.
Wu, R., Scerif, G., Aslin, R. N., Smith, T. J., Nako, R., & Eimer, M. (2013). Searching for something familiar or novel: Top-down attentional selection of specific items or object categories. Journal of Cognitive Neuroscience, 25(5), 719–729, https://doi.org/10.1162/jocn_a_00352.
Yang, H., & Zelinsky, G. J. (2009). Visual search is guided to categorically defined targets. Vision Research, 49, 2095–2103.
Figure 1. Examples of displays.
Figure 2. Response times as a function of set size, showing group slopes in the human face visual search task for the ASD and NT children's groups. Each circle represents the average response time (RT) of a single participant; horizontal lines indicate the group average RT; error bars represent the standard error of the mean; solid trendlines represent slopes fitted to all set sizes; dashed trendlines represent slopes fitted to set sizes 16 to 64 (but displayed from set size 4). Data are shifted horizontally to aid visualization.
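The trendlines in Figure 2 represent linear fits of RT against set size. The short sketch below (Python; not the authors' analysis code, and the set sizes and RT values are invented placeholders) illustrates how a per-participant set size slope in ms/item can be estimated, both over all set sizes and restricted to the larger set sizes, as for the dashed trendlines.

    # Minimal sketch, assuming hypothetical set sizes and RTs (illustration only).
    import numpy as np

    set_sizes = np.array([4, 9, 16, 36, 64])  # hypothetical set sizes spanning 4-64

    def set_size_slope(mean_rts, sizes):
        """Slope (ms/item) of a least-squares line fit to mean RT versus set size."""
        slope, _intercept = np.polyfit(sizes, mean_rts, deg=1)
        return slope

    # One hypothetical participant's mean RT (ms) at each set size.
    rts = np.array([650.0, 700.0, 760.0, 980.0, 1300.0])

    print(f"slope over all set sizes: {set_size_slope(rts, set_sizes):.1f} ms/item")
    # Slope restricted to the larger set sizes (16-64), as in the dashed trendlines.
    print(f"slope over set sizes 16-64: {set_size_slope(rts[2:], set_sizes[2:]):.1f} ms/item")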
Figure 3. Dependence of the face detection set size slope on AQ score (top) and on IQ score (middle) for the NT group (blue) and ASD group (orange). Bottom: relationship between IQ and AQ scale scores for all study participants.
Figure 4. Response time versus set size for each target category and for the two participant groups (top: NT; middle: ASD). Bottom: comparison of the two groups' slopes (NT: abscissa; ASD: ordinate). Note that all points lie above the line of equality.
Figure 5. Set size slope as a function of AQ score for the different targets, comparing the NT (blue) and ASD (orange) participants. The graphs on the right share a common y-axis scale to allow direct comparison; those on the left use larger scales so that the slopes are visible.
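Figures 3 and 5 plot each participant's set size slope against their AQ (and IQ) score. As an illustration only (this is not the authors' analysis code, and all values are invented placeholders), one standard way to quantify such a relationship within a group is a Pearson correlation between the two measures:

    # Minimal sketch: correlating per-participant set size slopes with AQ scores
    # for one group; the numbers are hypothetical.
    import numpy as np
    from scipy.stats import pearsonr

    aq_scores = np.array([12, 15, 18, 20, 23, 27, 31])                 # AQ questionnaire scores
    slopes_ms_per_item = np.array([9.5, 12.0, 10.1, 14.3, 11.8, 16.2, 13.7])

    r, p = pearsonr(aq_scores, slopes_ms_per_item)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")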
Table 1. Data for neurotypical children. Averaged data include standard deviations.
Table 2. Data for children with ASD. Averaged data include standard deviations.
Table 3. Response time (RT) and set size dependence for the ASD and NT groups, for the different target categories. RT is fastest and the set size slope lowest for face targets, for both participant groups. All values are averages and standard errors.