Open Access
Article  |   October 2019
Category selectivity for animals and man-made objects: Beyond low- and mid-level visual features
Author Affiliations
  • Chenxi He
    Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
  • Olivia S. Cheung
    Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
    olivia.cheung@nyu.edu
Journal of Vision October 2019, Vol.19, 22. doi:https://doi.org/10.1167/19.12.22
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Distinct concepts, such as animate and inanimate entities, comprise vast and systematic differences in visual features. While observers search for animals faster than for man-made objects, it remains unclear to what extent visual or conceptual information contributes to such differences in visual search performance. Previous studies demonstrated that visual features are likely sufficient for distinguishing animals from man-made objects. Across four experiments, we examined whether low- or mid-level visual features alone drive the search advantage for animals by using images of comparable visual shape and gist statistics across the categories. Participants searched for either animal or man-made object targets on a multiple-item display with fruit/vegetable distractors. We consistently observed faster search performance for animal than for man-made object targets. This advantage for animals was unlikely to be driven by differences in low- or mid-level visual properties, and it held whether observers were explicitly told the specific targets or were not explicitly told to search for animals or man-made objects at all. Instead, the greater efficiency of categorizing animals than man-made objects appeared to contribute to the search advantage. We suggest that apart from low- or mid-level visual differences among categories, higher-order processes, such as categorization via interpreting visual inputs and mapping them onto distinct concepts, may be critical in shaping category-selective effects.

Introduction
We maintain stable representations of the world via interactions between incoming sensory input and existing conceptual knowledge about the world. Distinct concepts, such as animate and inanimate entities, often comprise vast and systematic differences in visual features. As the brain transforms percepts into concepts, to what extent do the mental representations that guide our behavior in searching for relevant items in an environment contain visual or conceptual information? While several recent studies have shown that visual properties may account for category effects such as animacy (e.g., Long, Störmer, & Alvarez, 2017; Zachariou, Del Giacco, Ungerleider, & Yue, 2018; see also Rice, Watson, Hartley, & Andrews, 2014), here we examine whether higher-level processes such as categorization may also influence visual search performance when the low- and mid-level visual differences previously shown to be sufficient for distinguishing animals from man-made objects are minimized. 
One animate–inanimate distinction in visual perception is a processing advantage for animals compared with man-made objects. For instance, observers are faster at detecting animals than man-made objects or other inanimate items such as plants in complex natural scenes (New, Cosmides, & Tooby, 2007; Wang, Tsuchiya, New, Hurlemann, & Adolphs, 2015). Even when observers are not actively searching for animals, the presence of animals in a scene appears to capture attention. For instance, change detection for inanimate targets (e.g., a leaf) was slower when animal distractors were present in the display than when they were absent (Altman, Khislavsky, Coverdale, & Gilger, 2016). The attentional advantage for animals has also been shown in other tasks examining visual search efficiency (Jackson & Calvillo, 2013; Lipp, Derakshan, Waters, & Logies, 2004; but see Levin, Takarae, Miner, & Keil, 2001), resistance to inattentional blindness (Calvillo & Hawkins, 2016; Calvillo & Jackson, 2014) and the attentional blink (Guerrero & Calvillo, 2016), using behavioral or eye-tracking measures (Yang et al., 2012). 
Despite the wealth of evidence of the attentional advantage for animals compared with man-made objects, it remains unclear whether visual or conceptual aspects of the differences between animals and man-made objects may contribute to the advantage. For complex natural scenes, although systematic differences in low-level visual properties, such as power spectrum, luminance, and contrast, have been found between images with or without animals in the scenes (Torralba & Oliva, 2003; Wichmann, Drewes, Karl, & Gegenfurtner, 2010), such differences cannot account for human observers' rapid detection of animals in these images (Wichmann et al., 2010). Nonetheless, independent of any scene details, it is possible that visual features of the items alone are sufficient for human observers to distinguish between animals and man-made objects (e.g., Levin et al., 2001; LoBue, 2014). Specifically, animals often have curvilinear shapes whereas man-made objects, especially tools, tend to be rectilinear or elongated in shape, which affords graspability (Almeida et al., 2014). Curvilinear and rectilinear visual features alone may be sufficient to support categorization between animals and man-made objects even for scrambled, texture-form images with or without recognizable global shape information (Long et al., 2017; Zachariou et al., 2018). Such mid-level visual features facilitate visual search performance if a target (e.g., animal) is from a different category than the distractors (e.g., man-made objects; Long et al., 2017). Animals and man-made objects also differ in visual similarity among category members (Gerlach, Law, & Paulson, 2004), as animals tend to be more visually similar to each other (e.g., many animals have four legs and one head, etc.) than man-made objects (e.g., different tools may have few overlapping visual features; Humphreys, Riddoch, & Quinlan, 1988). Moreover, animals may also be more visually complex compared with man-made objects (e.g., a lion vs. a hammer; Gerlach, 2007; Moore & Price, 1999; Snodgrass & Vanderwart, 1980; Tyler et al., 2003). Taken together, it is possible that vast differences in the visual features between animals and man-made objects may drive the differential performance in visual search. 
If the advantage for animals in visual search remains with images of comparable visual features across the categories, it would suggest that higher-level processes such as categorization, which involve semantic processing or extracting meaning from whatever visual features are available, play an additional role. Here we tested this hypothesis by minimizing visual differences among the categories. Our findings would extend previous reports of a search advantage for animals by clarifying the nature of the representations of animals and man-made objects that are critical for performance. 
The current study used two approaches in an attempt to minimize visual differences between animals and man-made objects. First, we selected images whose outline shapes were either round or elongated. Second, we measured the gist statistics of the images and used only images with comparable gist statistics across the categories. Gist statistics describe the holistic shape and structure of an image by sampling spatial frequency and orientation information from image segments (Oliva & Torralba, 2001, 2006), capturing more complete and sophisticated visual shape properties than low-level measures such as luminance, contrast, or pixel-wise statistics. Gist statistics also provide information about the amount of visual detail in an image (Oliva & Torralba, 2006). More importantly, gist statistics have been shown to support behavioral performance in visual categorization, and to predict neural responses in the occipitotemporal cortex for various visual object and scene categories (e.g., Andrews, Watson, Rice, & Hartley, 2015; Bar, 2003; Loschky & Larson, 2010; Oliva & Torralba, 2001, 2007; Rice et al., 2014; Watson, Hartley, & Andrews, 2014). Here, we used images of comparable visual shape and gist statistics across different categories to examine whether the search advantage for animals would remain. 
In Experiments 1A, 1B, and 2A, observers were asked to search for either an animal or a man-made object target on a multiple-item display with varied set sizes, with fruits/vegetables as additional fillers. In Experiment 2B, observers were told to search for any non-fruit/vegetable items, which could be either animals or man-made objects. Note that in addition to the comparable visual shape and gist statistics of the images across the three categories, fruits/vegetables were used as fillers because previous studies showed comparable semantic distance of fruits/vegetables to animals and to man-made objects (Bracci & Op de Beeck, 2016; Carota, Kriegeskorte, Nili, & Pulvermüller, 2017). 
Our main focus was the effect of animacy on target search; in other words, whether the search for animals is faster than the search for man-made objects among items with comparable visual shape and gist statistics. Beyond this main question of target category, in Experiments 1A and 1B we also examined whether task-irrelevant animals or man-made objects may attract attention away from the target, revealing a stimulus-driven effect (Langton, Law, Burton, & Schweinberger, 2008; Theeuwes, 1991, 1992; Yantis, 1993). Therefore, in half of the trials, an item from the nontarget category (e.g., a man-made object when searching for an animal) also appeared as a distractor among the fruit/vegetable distractors. Additionally, while we do not expect animacy to be a basic feature that guides visual search (Wolfe & Horowitz, 2004; but see Levin et al., 2001), as would be indicated by a pop-out effect for animal search with a search slope of less than 10 ms/item as set size increases (Duncan & Humphreys, 1989; Theeuwes, 1993; Treisman & Gelade, 1980; but see Buetti, Cronin, Madison, Wang, & Lleras, 2016), we examined search performance for animal and man-made object targets across set sizes of 3, 6, and 9 in all experiments. 
To anticipate the results, we found in Experiment 1A that the search was faster for animal than man-made object targets across set sizes. In Experiment 1B, we replicated the results in Experiment 1A while further ruling out the influences of low-level visual factors including luminance, contrast, and power spectrum, as these factors could affect selective attention (Moraglia, 1989; Parkhurst, Law, & Niebur, 2002; Sagi, 1988; Smith, 1962; Theeuwes, 1995; Wolfe & Horowitz, 2004, 2017). Experiments 2A and 2B further demonstrated that categorization is more efficient for animals than man-made objects, either when participants were shown the names of all target items prior to the search task to minimize the range of possible targets for both categories, or when participants were asked to detect any items that were not fruits/vegetables. 
Experiments 1A and 1B
Method
Participants
Fifty-six undergraduate students aged between 18 and 23 years (mean = 19.8, SD = 1.3; 34 women, 22 men) at New York University Abu Dhabi participated for either course credit or a subsistence allowance; 28 participants completed Experiment 1A and 28 completed Experiment 1B. All participants had normal or corrected-to-normal vision. All experiments were approved by the New York University Abu Dhabi Institutional Review Board. 
Stimuli
Figure 1A illustrates examples of the stimuli in Experiment 1A. A total of 52 items including 12 animals, 12 man-made objects, and 28 fruits/vegetables were used. Each item had 16 exemplars. All images were in grayscale. An exemplar of each item is shown in the Appendix, Figure A1. The animals and man-made objects were either target or distractor categories for each of the two participant groups. The fruits/vegetables were filler distractors. Half of the items from the three categories had round shapes (e.g., turtle, steering wheel, apple), while the other half had elongated shapes (e.g., wall lizard, butter knife, cucumber). All items had neutral valence, so any potential emotional effect on visual search should be minimized (e.g., LoBue, 2014; Ohman, Flykt, & Esteves, 2001). 
Figure 1
 
Sample images of animals, man-made objects, and fruits/vegetables used in Experiments 1A (A) and 1B (B). In all three categories, the overall shape of half of the items was elongated, whereas the overall shape of the remaining items was round.
Gist statistics were computed from the spatial frequency and orientation information of all segments of each image (Oliva & Torralba, 2001). Specifically, a series of Gabor filters across eight orientations and four spatial frequencies was applied to each image to generate 32 filtered images. Each filtered image was then segmented into a 4 × 4 grid, and the values within each cell were averaged, resulting in 16 values. The final gist statistics for each image thus formed a vector of 512 (32 × 16) values (e.g., Oliva & Torralba, 2001; Rice et al., 2014). To compare gist statistics across shapes and categories, the values were first averaged across all exemplars of each item. Dissimilarity, indexed by the squared Euclidean distance between the gist statistics of each pair of items, was then calculated both within and across shapes and categories (Figure 2). We compared the dissimilarity between round and elongated shapes separately for each category, and between each pair of categories separately for each shape. We found that the gist statistics differed significantly between elongated and round shapes in each category, but were comparable across categories for either shape. Within-shape dissimilarity (e.g., a turtle and a squirrel) was significantly lower than cross-shape dissimilarity (e.g., a turtle and a wall lizard) for all categories: animals, t(64) = −5.9, p < 0.0001; man-made objects, t(64) = −12.7, p < 0.0001; fruits/vegetables, t(376) = −11.8, p < 0.0001. 
Conversely, there was no statistical difference between within-category dissimilarity (e.g., a turtle and a squirrel) and cross-category dissimilarity (e.g., a turtle and a steering wheel) for animals versus man-made objects: elongated, t(64) = −0.2, p = 0.82; round, t(64) = −0.02, p = 0.99, for animals versus fruits/vegetables: elongated, t(188) = 0.4, p = 0.71; round, t(188) = 0.6, p = 0.52, and for man-made objects versus fruits/vegetables: elongated, t(188) = −0.6, p = 0.58; round, t(188) = −0.6, p = 0.58. These results indicated that for either elongated or round shape, there was no systematic difference across different categories in visual properties quantified by gist statistics. 
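The gist computation and dissimilarity measure described above can be sketched in Python. This is a minimal illustration of the procedure (a Gabor filter bank of 8 orientations × 4 spatial frequencies, averaging filter energy over a 4 × 4 grid, then squared Euclidean distance between the resulting 512-value vectors), not the authors' actual code; the filter parameters (kernel size, bandwidth, frequency values) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve


def gabor_kernel(size, freq, theta, sigma):
    """Spatial Gabor filter: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)


def gist_descriptor(image, n_orients=8, freqs=(0.05, 0.1, 0.2, 0.4),
                    grid=4, ksize=31, sigma=4.0):
    """512-value gist vector: 8 orientations x 4 frequencies x 4x4 grid cells."""
    values = []
    for f in freqs:                      # 4 spatial frequencies
        for i in range(n_orients):       # 8 orientations -> 32 filtered images
            theta = i * np.pi / n_orients
            filtered = np.abs(fftconvolve(
                image, gabor_kernel(ksize, f, theta, sigma), mode="same"))
            # average filter energy within each cell of a grid x grid partition
            for band in np.array_split(filtered, grid, axis=0):
                for cell in np.array_split(band, grid, axis=1):
                    values.append(cell.mean())
    return np.asarray(values)            # length 32 * 16 = 512


def dissimilarity(g1, g2):
    """Squared Euclidean distance between two gist vectors."""
    d = g1 - g2
    return float(d @ d)
```

In the study, descriptors were first averaged across the 16 exemplars of each item, and `dissimilarity` was then applied to each pair of item-level vectors within and across shapes and categories.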
Figure 2
 
Pair-wise dissimilarity (squared Euclidean distance) of gist statistics showed significant differences between elongated and round shapes in each category, but no significant differences across animals, man-made objects, and fruits/vegetables for either shape.
Low-level visual features, such as luminance, contrast and power spectrum could affect selective attention (Moraglia, 1989; Parkhurst et al., 2002; Sagi, 1988; Smith, 1962; Theeuwes, 1995; Wolfe & Horowitz, 2004). For the images used in Experiment 1A, the contrast and power spectrum were statistically comparable between animals and man-made objects (ps > 0.29); however, the comparisons between animals and fruits/vegetables, and between man-made objects and fruits/vegetables, were significantly different (ps < 0.01). In Experiment 1B, all images were further processed with the SHINE toolbox (Willenbockel et al., 2010; see Figure 1B) to balance low-level visual properties including mean luminance, contrast, and power spectrum (averaged across orientations at each spatial frequency) across images of all categories. 
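The SHINE toolbox itself is a MATLAB package; as a rough illustration of one part of what such normalization involves, the sketch below equates mean luminance and RMS contrast across a set of grayscale images. It is a simplified stand-in, not SHINE: the toolbox additionally matches luminance histograms and the full rotational power spectrum.

```python
import numpy as np


def match_luminance_contrast(images, target_mean=None, target_std=None):
    """Equate mean luminance and RMS contrast across grayscale images.

    A simplified stand-in for one step of SHINE-style normalization;
    SHINE also matches histograms and the full power spectrum.
    """
    if target_mean is None:  # default targets: the across-image averages
        target_mean = np.mean([im.mean() for im in images])
    if target_std is None:
        target_std = np.mean([im.std() for im in images])
    out = []
    for im in images:
        z = (im - im.mean()) / im.std()      # zero mean, unit contrast
        out.append(z * target_std + target_mean)
    return out
```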
Procedure
The experiments were run in MATLAB (MathWorks, Natick, MA) with Psychtoolbox (Brainard, 1997; Kleiner et al., 2007). On each trial, a fixation was presented at the center of the screen for 500 ms, followed by a display of 3, 6, or 9 items arranged around an invisible circle (Figure 3). Participants were asked to determine whether a target was present or absent by pressing one of two keys on a keyboard as accurately and as quickly as possible; accuracy was emphasized over response speed. The stimulus display remained visible until a response was made. The distance from the center of each item to the center of the screen was 5.5°, and each item subtended 3.6° × 3.6° of visual angle. A chinrest was used to maintain a fixed viewing distance of 57 cm throughout the study. 
Figure 3
 
Schematic sample displays with a set size of 6 for the target present–distractor absent, target present–distractor present, target absent–distractor absent, and target absent–distractor present conditions for Experiment 1A. Half of the participants were asked to search for animals, the other half were asked to search for man-made objects.
Participants were randomly assigned to the animal or man-made object search group. All participants were explicitly instructed to search for any items from the category of interest (e.g., animals for the animal search group), but there was no mention of the other two categories (e.g., man-made objects and fruits/vegetables for the animal search group). In this way, we were able to examine attentional capture by animal or man-made object distractors, which appeared in half of the trials, without explicitly reminding participants that three distinct categories of items would be shown in the study. 
For each target-present trial, one item from the target category was presented (e.g., a squirrel for the animal search group). For each distractor-present trial, one item from the distractor category (e.g., a kitchen knife for the animal search group) was presented. The rest of the items on the displays were fruits/vegetables. Half of the trials showed image arrays of round stimuli, and the other half presented image arrays of elongated stimuli. There were 864 trials in total, with 72 trials in each Target-Present × Distractor-Present × Set Size condition. 
The selection of items on each display and the trial presentation order were randomized across participants, and the randomization was matched between groups in each experiment so that participants searching for animal and man-made object targets saw the same displays. The number of times the targets appeared in each of the three, six, or nine possible locations was evenly distributed, while the locations of the other items were randomized. Because each animal or man-made object target and distractor exemplar was presented only once per set size during the experiment, and to minimize the possibility that the rarity of animal or man-made object distractors relative to fruits/vegetables might account for any distractor effects, a subset of fruit/vegetable items (6 of the 14 items in each of the elongated and round sets) was randomly selected to appear the same number of times as the animal or man-made object distractors. For these items, 12 of the 16 exemplars were randomly selected to maintain presentation frequencies identical to those of the animal or man-made object distractors for each participant, whereas all 16 exemplars were used for the remaining fruit/vegetable items. 
Results
The mean correct response time (RT) for target-present trials is illustrated in Figure 4, and sensitivity (d′) for all trials is shown in Table 1. Correct RT was the main measure, as it reflected performance on target-present trials, whereas d′ captured performance on both target-present and target-absent trials. For each experiment, outlier trials with RTs below 150 ms or above three standard deviations of the individual's mean RT were excluded from the analyses (on average 2.1% of trials). Two three-way ANOVAs were conducted, on correct RT for target-present trials and on d′ for all trials, with a between-subjects factor of target category (animals vs. man-made objects) and two within-subjects factors of distractor presence (present vs. absent) and set size (3 vs. 6 vs. 9). 
Figure 4
 
Mean correct response times on target present trials averaged across items, as a function of target category, distractor presence, and set size for Experiments 1A and 1B (bold lines). Error bars represent the 95% confidence intervals of the within-subject interaction between distractor presence and set size. Mean correct response times on target-present trials for individual items, averaged across distractor-present and distractor-absent trials, are also shown.
Table 1
 
Mean sensitivity (d′) and standard deviations (in parentheses) as a function of target category, distractor presence, and set size for Experiments 1A and 1B.
Experiment 1A
For correct RT on target-present trials, the main effect of target category was significant, F(1, 26) = 9.0, p = 0.006, ηp² = 0.26, with animal search faster than man-made object search, indicating an overall advantage for animal search. The significant main effect of distractor presence, F(1, 26) = 6.0, p = 0.021, ηp² = 0.19, revealed longer RTs when the distractors were present relative to when they were absent. The significant main effect of set size, F(2, 52) = 138.4, p < 0.0001, ηp² = 0.84, showed that RT increased with increased set sizes (Bonferroni-corrected pairwise comparisons: ps < 0.0001). None of the interactions was significant (Target Category × Distractor Presence: F(1, 26) = 1.8, p = 0.19, ηp² = 0.07; Target Category × Set Size: F(2, 52) = 0.4, p = 0.70, ηp² = 0.01; Distractor Presence × Set Size: F(2, 52) = 1.3, p = 0.28, ηp² = 0.05; three-way interaction: F(2, 52) = 0.2, p = 0.82, ηp² = 0.01). The search slopes were comparable for animal search (23.8 ms/item and 20.6 ms/item with and without man-made object distractors) and man-made object search (25.4 ms/item and 23.7 ms/item with and without animal distractors). 
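The search slopes reported here (in ms/item) are the least-squares slope of mean correct RT against display set size; a slope near zero indicates pop-out, while slopes of 20–35 ms/item indicate serial, inefficient search. A minimal sketch of the computation:

```python
import numpy as np


def search_slope(set_sizes, mean_rts):
    """Search slope in ms/item: least-squares slope of mean correct RT
    (ms) regressed on display set size (number of items)."""
    slope, _intercept = np.polyfit(set_sizes, mean_rts, 1)
    return slope
```

For instance, mean RTs of 600, 660, and 720 ms at set sizes 3, 6, and 9 yield a slope of 20 ms/item (these values are illustrative, not data from the study).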
To examine overall sensitivity on the task, we also calculated d′ on all trials. Only the main effect of set size was significant, F(2, 52) = 8.2, p = 0.001, ηp² = 0.24, with lower d′ for Set Size 9 compared with Set Sizes 3 and 6 (Bonferroni-corrected ps < 0.01), and no statistical difference between Set Sizes 3 and 6 (p = 0.99). All other main effects and interactions were not significant (target category: F(1, 26) = 0.01, p = 0.92, ηp² < 0.0001; distractor presence: F(1, 26) = 0.05, p = 0.83, ηp² = 0.002; Target Category × Distractor Presence: F(1, 26) = 3.1, p = 0.092, ηp² = 0.11; Target Category × Set Size: F(2, 52) = 2.1, p = 0.13, ηp² = 0.08; Distractor Presence × Set Size: F(2, 52) = 0.5, p = 0.60, ηp² = 0.02; three-way interaction: F(2, 52) = 0.2, p = 0.79, ηp² = 0.01). 
Experiment 1B
Replicating Experiment 1A, for correct RT on target-present trials, the main effect of target category was significant, F(1, 26) = 8.3, p = 0.008, ηp² = 0.24, with animal search faster than man-made object search. The main effect of set size was also significant, F(1.43, 37.07) = 91.4, p < 0.0001, ηp² = 0.78, showing longer search times with increased set sizes (Bonferroni-corrected ps < 0.0001). The Target Category × Set Size interaction approached significance, F(2, 52) = 2.6, p = 0.08, ηp² = 0.09, but all other main effects and interactions were not significant (distractor presence: F(1, 26) = 0.5, p = 0.48, ηp² = 0.02; Target Category × Distractor Presence: F(1, 26) = 0.5, p = 0.48, ηp² = 0.02; Distractor Presence × Set Size: F(2, 52) = 0.1, p = 0.92, ηp² = 0.003; three-way interaction: F(2, 52) = 2.0, p = 0.15, ηp² = 0.07). The search slopes for animal targets were 24.0 ms/item with man-made object distractors and 24.1 ms/item without them; the search slopes for man-made object targets were 35.0 ms/item with animal distractors and 32.8 ms/item without them. 
For d′, only the main effect of set size was significant, F(2, 52) = 6.7, p = 0.002, ηp² = 0.21, with significantly lower d′ for Set Size 9 compared with Set Size 3 (Bonferroni-corrected p < 0.01), and no statistical difference between the other set sizes (ps > 0.10). All other main effects and interactions were not significant: target category, F(1, 26) = 0.7, p = 0.41, ηp² = 0.03; distractor presence, F(1, 26) = 2.0, p = 0.17, ηp² = 0.07; Target Category × Distractor Presence, F(1, 26) = 0.2, p = 0.65, ηp² = 0.01; Target Category × Set Size, F(2, 52) = 0.1, p = 0.87, ηp² = 0.01; Distractor Presence × Set Size, F(2, 52) = 2.8, p = 0.07, ηp² = 0.10; three-way interaction, F(2, 52) = 0.2, p = 0.84, ηp² = 0.01. 
Discussion
With images of comparable visual shape and gist statistics, we found in both Experiments 1A and 1B that the search for animal targets remained significantly faster than that for man-made object targets across set sizes, suggesting that the search advantage for animals is not merely driven by visual shape and gist statistics. Moreover, Experiment 1B further showed that the search advantage for animals cannot be accounted for by low-level visual features. 
The presence of distractors appeared to draw attention away from the target search in Experiment 1A, suggesting that the animal or man-made object distractors might be processed differently from the fruit/vegetable distractors. However, this effect was not found in Experiment 1B, when the low-level visual properties were equated among the categories. It is therefore possible that target search in Experiment 1A was slowed by the presence of an animal or man-made object distractor because of low-level visual differences between these distractors and the fruit/vegetable distractors. In contrast, the effect of actively searching for animal versus man-made object targets, which was observed in both Experiments 1A and 1B, appeared highly robust and independent of low-level visual differences, unlike the stimulus-driven distractor effect. 
While it is unlikely that the faster search performance for animal than man-made object targets is solely due to low- and mid-level visual differences among the categories, one possible account for this difference is that the categorization process based on visual features beyond low- and mid-level differences is more efficient for animals than for man-made objects. In Experiments 2A and 2B, we aimed to extend the findings to conditions in which participants were either explicitly told the names of all target items prior to the search task, to minimize the range of possible targets, or were only asked to detect any items that were not fruits/vegetables, without being explicitly asked to search for either animal or man-made object targets. Furthermore, whereas different groups of participants searched for either animal or man-made object targets in Experiments 1A and 1B, Experiments 2A and 2B tested whether search performance would remain faster for animals than man-made objects when target category was manipulated as a within-subjects factor. 
Experiments 2A and 2B
Method
Participants
Forty undergraduate students aged between 18 and 23 years (mean = 19.8, SD = 1.1; 23 women, 17 men) from the same participant pool as in Experiments 1A and 1B took part in this study; 20 participants completed Experiment 2A and 20 completed Experiment 2B. 
Stimuli and procedure
The stimuli and procedure of Experiments 2A and 2B were identical to those in Experiment 1B, except for the following changes: The distractor-present trials were removed. In Experiment 2A, each participant searched for both animal and man-made object targets in separate blocks, with the task order counter-balanced across participants (i.e., half of the participants searched for animal targets first, and the other half searched for man-made object targets first). Participants studied a list of target names (12 animals or 12 man-made objects) for one minute prior to each task. In Experiment 2B, participants were asked to detect any non-fruit/vegetable item on each trial, and there was no mention of animals or man-made objects. The presentation order of animal target trials and man-made object target trials was randomized. 
Results
The mean correct response time (RT) for the target present trials is illustrated in Figure 5, and sensitivity (d′) for all trials is shown in Table 2. As in Experiments 1A and 1B, trial outliers with RT below 150 ms or above three standard deviations of the individual mean RT for each participant were excluded from the analyses (on average 2.2% of the trials). Two two-way ANOVAs were conducted, on correct RT for target present trials and on d′ for all trials, with target category (animals vs. man-made objects) and set size (3 vs. 6 vs. 9) as within-subject factors. 
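The outlier rule above (dropping trials faster than 150 ms or more than three standard deviations above a participant's mean RT) can be sketched per participant as follows; the RT values are made-up examples, not data from this study.

```python
# Per-participant RT trimming: drop anticipatory responses (< 150 ms)
# and extreme slow trials (> mean + 3 SD). Example RTs are fabricated.
from statistics import mean, stdev

def trim_rts(rts, floor_ms=150.0, n_sd=3.0):
    """Keep RTs within [floor_ms, mean + n_sd * SD] for one participant."""
    m, sd = mean(rts), stdev(rts)
    return [rt for rt in rts if floor_ms <= rt <= m + n_sd * sd]

# One anticipatory response (120 ms) and one extreme slow trial (5000 ms):
rts = [120.0] + [600.0 + 10 * i for i in range(15)] + [5000.0]
clean = trim_rts(rts)
print(len(rts) - len(clean))  # 2 trials excluded
```

Note that the cutoffs are computed from each participant's own distribution, so the same raw RT can survive trimming for one participant and be excluded for another.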
Figure 5
 
Mean correct response times on target-present trials averaged across items, as a function of target category and set size for Experiments 2A and 2B (bold lines). Error bars represent the 95% confidence intervals of the within-subject interaction of target category and set size. Mean correct response times on target-present trials for individual items are also shown.
Table 2
 
Mean sensitivity (d′) and standard deviations (in parentheses) as a function of target category and set size for Experiments 2A and 2B.
Experiment 2A
For correct RT on target present trials, the main effect of target category was significant, F(1, 19) = 8.0, p = 0.011, \(\eta_p^2\) = 0.30, with animal search faster than man-made object search. The main effect of set size was also significant, F(2, 38) = 117.5, p < 0.0001, \(\eta_p^2\) = 0.86, showing slower search with increased set sizes (Bonferroni-corrected ps < 0.0001). The Target Category × Set Size interaction was not significant, F(2, 38) = 1.1, p = 0.35, \(\eta_p^2\) = 0.05, with search slopes of 33.1 ms/item for animals and 33.3 ms/item for man-made objects. 
For d′, the main effect of set size was significant, F(2, 38) = 4.1, p = 0.025, \(\eta_p^2\) = 0.18, with significantly lower d′ for Set Size 9 compared with Set Size 3 (Bonferroni-corrected p < 0.05), and no statistical difference between other set sizes (ps > 0.10). There was no significant main effect of target category, F(1, 19) = 2.3, p = 0.15, \(\eta_p^2\) = 0.11, or interaction, F(2, 38) = 0.5, p = 0.59, \(\eta_p^2\) = 0.03. 
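As a consistency check, partial eta squared can be recovered from any reported F ratio via the identity η_p² = F·df1 / (F·df1 + df2), which follows from η_p² = SS_effect / (SS_effect + SS_error):

```python
# Partial eta squared implied by an F ratio and its degrees of freedom.

def partial_eta_squared(f_value, df1, df2):
    """eta_p^2 = (F * df1) / (F * df1 + df2)."""
    return (f_value * df1) / (f_value * df1 + df2)

# Target-category effect in Experiment 2A: F(1, 19) = 8.0, reported eta_p^2 = 0.30
print(round(partial_eta_squared(8.0, 1, 19), 2))  # 0.3
```

The same identity reproduces the other reported effect sizes, e.g., F(1, 26) = 8.3 in Experiment 1B gives η_p² ≈ 0.24.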
Experiment 2B
For correct RT on target present trials, the main effect of target category was significant, F(1, 19) = 9.5, p = 0.006, \(\eta_p^2\) = 0.33, with animal search faster than man-made object search. The main effect of set size was also significant, F(2, 38) = 51.5, p < 0.0001, \(\eta_p^2\) = 0.73, showing longer search time with increased set sizes (Bonferroni-corrected ps < 0.0001). The Target Category × Set Size interaction was not significant, F(2, 38) = 0.5, p = 0.60, \(\eta_p^2\) = 0.03, revealing no statistical difference in the search slopes between animals (34.3 ms/item) and man-made objects (40.1 ms/item). 
For d′, the main effect of target category was significant, F(1, 19) = 22.7, p = 0.0001, \(\eta_p^2\) = 0.54, with higher sensitivity for animal search than man-made object search. The main effect of set size was also significant, F(2, 38) = 14.8, p < 0.0001, \(\eta_p^2\) = 0.44, with significantly higher d′ for Set Size 3 compared with both Set Size 6 and Set Size 9 (Bonferroni-corrected ps < 0.01), and no statistical difference between Set Size 6 and Set Size 9 (p = 0.33). There was no significant interaction, F(2, 38) = 0.1, p = 0.90, \(\eta_p^2\) = 0.01. 
Discussion
Experiments 2A and 2B replicated the results of Experiments 1A and 1B by showing that searching for animal targets was faster than searching for man-made object targets, even when visual shape, gist statistics, and other low-level visual features were comparable across categories. Experiments 2A and 2B further demonstrated that this category effect was consistently observed across conditions, whether explicit prior knowledge was given about the specific target items of the animal or man-made object categories, or whether there was no explicit mention of animal or man-made object categories. 
General discussion
Across four experiments that used images of comparable visual shape and gist statistics across categories, our findings extend previous research that used a variety of animal and man-made object images with naturally varied visual features within and across categories (e.g., Jackson & Calvillo, 2013; Levin et al., 2001; Long et al., 2017) by showing that the nature of the representations that guide visual search performance may not solely reflect low- or mid-level visual influences. Specifically, while visual differences such as curvilinearity and gist statistics clearly play an important role in supporting visual categorization (Long et al., 2017; Loschky & Larson, 2010; Rice et al., 2014; Zachariou et al., 2018), our findings suggest that this visual information, in addition to other low-level visual features such as luminance, contrast, and power spectrum, cannot entirely account for the faster search performance for animals than man-made objects. Although it is likely impossible to rule out all visual differences between animals and man-made objects, our stimulus set was designed to rule out a large set of possible visual features, including low-level features, curvilinearity, elongation, and gist statistics; the comparable gist statistics also suggested similar levels of visual detail in the images across the categories. Thus, the present work substantially constrains the scope of potential explanations based on visual differences between categories. Instead, higher-level cognitive processing, such as the process of categorization, which involves the interpretation or semantic processing of the visual features of animals or man-made objects, may contribute to the category differences and facilitate visual search performance for animals over man-made objects. These possibilities are elaborated below. 
It is possible that correct categorization of the animals and man-made objects reflects different interpretations of the visual input. Even though visual shape and gist statistics were comparable between categories in this study, other visual features remain available that observers may use to interpret the meaning of the visual input for categorization. Previous studies have shown that the interpretation of ambiguous visual stimuli can shape neural representations in the occipitotemporal cortex. For instance, even in the absence of actual facial features, face-selective activations in the fusiform "face" area (FFA) were observed when non-face images were interpreted as faces (Cox, Meyers, & Sinha, 2004; Hadjikhani, Kveraga, Naik, & Ahlfors, 2009; Summerfield, Egner, Mangels, & Hirsch, 2006). Nonetheless, successful categorization of stimuli often relies on correct interpretation of diagnostic information in the visual input. What kinds of diagnostic information may lead us to interpret an image as an animal or a man-made object? One possibility is that the binding or configuration of certain features, which relies on prior knowledge about the categories, plays a critical role. For instance, while detecting curvilinear features alone may bias observers to interpret a visual input as an animal (Long et al., 2017; Zachariou et al., 2018), various man-made objects, including several of those used in the current study, also have curvilinear features. In addition to curvilinear features, detecting an ensemble of a head, a body, and four limbs may further facilitate categorization (Delorme, Richard, & Fabre-Thorpe, 2010). 
It is also important to note that the current study used a relatively diverse set of animals and man-made objects (see Appendix); thus, the results were unlikely due to particular characteristics of a small subset of animals or objects, but were more likely due to general knowledge about animals and man-made objects (Diesendruck & Gelman, 1999; Humphreys et al., 1988; Laws & Neve, 1999; Patterson, Nestor, & Rogers, 2007; Taylor, Devereux, Acres, Randall, & Tyler, 2012). Specifically, the items used for each category in the experiments were highly different from each other in terms of diagnostic visual features (e.g., a squirrel vs. a dolphin). Moreover, the exemplars for each item also varied in visual appearance. To successfully categorize the target items as either animals or man-made objects, participants would need to utilize information beyond any visual features that were specific to only a few items or exemplars of a category. Therefore, the use of our stimulus set provided further support that our findings are likely related to general knowledge about possible visual features of the categories. 
Although the current study was not designed to examine the effects of individual items, the results showed a range of item differences (Figures 4 and 5). For animals, a continuum of animacy representation ranging from mammals to insects has been observed from lateral to medial occipitotemporal cortex (Connolly et al., 2012; Sha et al., 2015). However, it appears unlikely that the search performance among different animals was simply explained by the animacy continuum, as a follow-up observation revealed that the mammals (e.g., squirrels, mice, dolphins) were not necessarily among the fastest searches, and the insects (e.g., snails, locusts) were not necessarily among the slowest. Similarly, as man-made objects also differ in many aspects, such as manipulability, size, portability, and context (Bar, 2004; Mullally & Maguire, 2011; Peelen & Caramazza, 2012), it remains unclear which aspects determine the search performance for these items. Future work should examine how different visual and conceptual features of individual items contribute to category effects; our work provides a method to address these questions. 
While the search advantage for animal targets was robust and replicated across four experiments, the extent to which the animal or man-made object distractors captured attention was less clear, as the distractor effect was observed in Experiment 1A but diminished in Experiment 1B. We speculate that the animal or man-made object distractors might have captured attention because of differences in image contrast or power spectrum relative to the neighboring fruit/vegetable items. The failure to observe reliable distractor effects in Experiment 1B suggests that such effects may be driven primarily by low-level visual properties, and that different attentional mechanisms may underlie the active search for a target and the involuntary interference from a distractor during active search (Langton et al., 2008; Theeuwes, 1991, 1992; Yantis, 1993). 
In contrast with previous studies in which visual features were not strictly controlled (Jackson & Calvillo, 2013; Levin et al., 2001), we found relatively steep search slopes for both animal and man-made object targets, with comparable search efficiency for the two categories. These results suggest that the differential search performance for the two categories in the current study is unlikely to have arisen at the pre-attentive stage, which is typically indicated by relatively shallow search slopes (Duncan & Humphreys, 1989; Theeuwes, 1993; Treisman & Gelade, 1980; Wolfe & Horowitz, 2004). Instead, the advantage for animal search observed in this study may arise at a subsequent stage of recognition or categorization. Nonetheless, it is possible that natural variations in visual differences between animals and man-made objects contributed to the search efficiency for the categories in the previous studies (Jackson & Calvillo, 2013; Levin et al., 2001). Future work should address how the processing of different sources of information, such as different levels of visual and semantic features of animals and man-made objects, may contribute to the differential performance between the categories in visual search. More generally, our work adds to a growing understanding that semantic information may play a role in visual perception (Coren & Enns, 1993; Lupyan, 2015; Lupyan & Ward, 2013). 
To sum up, using images of comparable visual shape and gist statistics across categories, the current study provides convergent evidence of faster search for animals than man-made objects. While category-selective effects between animals and man-made objects are likely influenced by the vast visual differences between the categories, as revealed by previous studies (Long et al., 2017; Zachariou et al., 2018), we suggest that a focus purely on visual differences is incomplete. Rather, a full understanding of category-selective effects in visual perception may need to incorporate the contributions of high-level processes, such as categorization or semantic influences. More broadly, the current findings shed light on the close interactions between the perceptual and semantic systems in human cognition. 
Acknowledgments
We thank Emma Wei Chen for helpful discussions, Daryl Fougnie and Garry Kong for comments on the manuscript, and Anna Noer for assistance in data collection. Declarations of interest: none. 
Commercial relationships: none. 
Corresponding author: Olivia S. Cheung. 
Address: Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates. 
References
Almeida, J., Mahon, B. Z., Zapater-Raberov, V., Dziuba, A., Cabaço, T., Marques, J. F., & Caramazza, A. (2014). Grasping with the eyes: The role of elongation in visual recognition of manipulable objects. Cognitive, Affective & Behavioral Neuroscience, 14 (1), 319–335, https://doi.org/10.3758/s13415-013-0208-0.
Altman, M. N., Khislavsky, A. L., Coverdale, M. E., & Gilger, J. W. (2016). Adaptive attention: How preference for animacy impacts change detection. Evolution and Human Behavior, 37 (4), 303–314, https://doi.org/10.1016/j.evolhumbehav.2016.01.006.
Andrews, T. J., Watson, D. M., Rice, G. E., & Hartley, T. (2015). Low-level properties of natural images predict topographic patterns of neural response in the ventral visual pathway. Journal of Vision, 15 (7): 3, 1–12, https://doi.org/10.1167/15.7.3.
Bar, M. (2003). A cortical mechanism for triggering top-down facilitation in visual object recognition. Journal of Cognitive Neuroscience, 15, 600–609.
Bar, M. (2004). Visual objects in context. Nature Reviews Neuroscience, 5 (8), 617–629.
Bracci, S., & Op de Beeck, H. (2016). Dissociations and associations between shape and category representations in the two visual pathways. Journal of Neuroscience, 36 (2), 432–444, https://doi.org/10.1523/JNEUROSCI.2314-15.2016.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10 (4), 433–436, https://doi.org/10.1163/156856897X00357.
Buetti, S., Cronin, D. A., Madison, A. M., Wang, Z., & Lleras, A. (2016). Towards a better understanding of parallel visual processing in human vision: Evidence for exhaustive analysis of visual information. Journal of Experimental Psychology: General, 145 (6), 672–707, https://doi.org/10.1037/xge0000163.
Calvillo, D. P., & Hawkins, W. C. (2016). Animate objects are detected more frequently than inanimate objects in inattentional blindness tasks independently of threat. The Journal of General Psychology, 143 (2), 101–115, https://doi.org/10.1080/00221309.2016.1163249.
Calvillo, D. P., & Jackson, R. E. (2014). Animacy, perceptual load, and inattentional blindness. Psychonomic Bulletin & Review, 21 (3), 670–675, https://doi.org/10.3758/s13423-013-0543-8.
Carota, F., Kriegeskorte, N., Nili, H., & Pulvermüller, F. (2017). Representational similarity mapping of distributional semantics in left inferior frontal, middle temporal, and motor cortex. Cerebral Cortex, 27 (1), 294–309, https://doi.org/10.1093/cercor/bhw379.
Connolly, A. C., Guntupalli, J. S., Gors, J., Hanke, M., Halchenko, Y. O., Wu, Y.-C.,… Haxby, J. V. (2012). The representation of biological classes in the human brain. The Journal of Neuroscience, 32 (8), 2608–2618, https://doi.org/10.1523/JNEUROSCI.5547-11.2012.
Coren, S., & Enns, J. T. (1993). Size contrast as a function of conceptual similarity between test and inducers. Perception & Psychophysics, 54 (5), 579–588.
Cox, D., Meyers, E., & Sinha, P. (2004, April 2). Contextually evoked object-specific responses in human visual cortex. Science, 304, 115–117, https://doi.org/10.1126/science.1093110.
Delorme, A., Richard, G., & Fabre-Thorpe, M. (2010). Key visual features for rapid categorization of animals in natural scenes. Frontiers in Psychology, 1 (21), 1–13, https://doi.org/10.3389/fpsyg.2010.00021.
Diesendruck, G., & Gelman, S. A. (1999). Domain differences in absolute judgments of category membership: Evidence for an essentialist account of categorization. Psychonomic Bulletin and Review, 6 (2), 338–346, https://doi.org/10.3758/BF03212339.
Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96 (3), 433–458, https://doi.org/10.1037/0033-295X.96.3.433.
Gerlach, C. (2007). A review of functional imaging studies on category specificity. Journal of Cognitive Neuroscience, 19 (2), 296–314, https://doi.org/10.1162/jocn.2007.19.2.296.
Gerlach, C., Law, I., & Paulson, O. B. (2004). Structural similarity and category-specificity: A refined account. Neuropsychologia, 42 (11), 1543–1553, https://doi.org/10.1016/j.neuropsychologia.2004.03.004.
Guerrero, G., & Calvillo, D. P. (2016). Animacy increases second target reporting in a rapid serial visual presentation task. Psychonomic Bulletin & Review, 23, 1832–1838, https://doi.org/10.3758/s13423-016-1040-7.
Hadjikhani, N., Kveraga, K., Naik, P., & Ahlfors, S. P. (2009). Early (N170) activation of face-specific cortex by face-like objects. NeuroReport, 20 (4), 403–407, https://doi.org/10.1097/WNR.0b013e328325a8e1.
Humphreys, G. W., Riddoch, M. J., & Quinlan, P. T. (1988). Cascade processes in picture identification. Cognitive Neuropsychology, 5 (1), 67–104, https://doi.org/10.1080/02643298808252927.
Jackson, R. E., & Calvillo, D. P. (2013). Evolutionary relevance facilitates visual information processing. Evolutionary Psychology, 11 (5), 1011–1026.
Kleiner, M., Brainard, D. H., Pelli, D. G., Broussard, C., Wolf, T., & Niehorster, D. (2007). What's new in Psychtoolbox-3? Perception, 36, S14, https://doi.org/10.1068/v070821.
Langton, S. R. H., Law, A. S., Burton, A. M., & Schweinberger, S. R. (2008). Attention capture by faces. Cognition, 107 (1), 330–342, https://doi.org/10.1016/j.cognition.2007.07.012.
Laws, K. R., & Neve, C. (1999). A “normal” category-specific advantage for naming living things. Neuropsychologia, 37 (11), 1263–1269, https://doi.org/10.1016/S0028-3932(99)00018-4.
Levin, D. T., Takarae, Y., Miner, A. G., & Keil, F. (2001). Efficient visual search by category: Specifying the features that mark the difference between artifacts and animals in preattentive vision. Perception & Psychophysics, 63 (4), 676–697, https://doi.org/10.3758/BF03194429.
Lipp, O. V., Derakshan, N., Waters, A. M., & Logies, S. (2004). Snakes and cats in the flower bed: Fast detection is not specific to pictures of fear-relevant animals. Emotion, 4 (3), 233–250, https://doi.org/10.1037/1528-3542.4.3.233.
LoBue, V. (2014). Deconstructing the snake: The relative roles of perception, cognition, and emotion on threat detection. Emotion, 14 (4), 701–711, https://doi.org/10.1037/a0035898.
Long, B., Störmer, V. S., & Alvarez, G. A. (2017). Mid-level perceptual features contain early cues to animacy. Journal of Vision, 17 (6): 20, 1–20, https://doi.org/10.1167/17.6.20. [PubMed] [Article]
Loschky, L. C., & Larson, A. M. (2010). The natural/man-made distinction is made before basic-level distinctions in scene gist processing. Visual Cognition, 18 (4), 513–536, https://doi.org/10.1080/13506280902937606.
Lupyan, G. (2015). Object knowledge changes visual appearance: Semantic effects on color afterimages. Acta Psychologica, 161, 117–130, https://doi.org/10.1016/j.actpsy.2015.08.006.
Lupyan, G., & Ward, E. J. (2013). Language can boost otherwise unseen objects into visual awareness. Proceedings of the National Academy of Sciences, USA, 110 (35), 14196–14201, https://doi.org/10.1073/pnas.1303312110.
Moore, C. J., & Price, C. J. (1999). A functional neuroimaging study of the variables that generate category-specific object processing differences. Brain, 122, 943–962, https://doi.org/10.1093/brain/122.5.943.
Moraglia, G. (1989). Visual search: Spatial frequency and orientation. Perceptual and Motor Skills, 69 (2), 675–689.
Mullally, S. L., & Maguire, E. E. (2011). A new role for the parahippocampal cortex in representing space. The Journal of Neuroscience, 31 (20), 7441–7449, https://doi.org/10.1523/JNEUROSCI.0267-11.2011.
New, J., Cosmides, L., & Tooby, J. (2007). Category-specific attention for animals reflects ancestral priorities, not expertise. Proceedings of the National Academy of Sciences, USA, 104 (42), 16598–16603, https://doi.org/10.1073/pnas.0703913104.
Ohman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130 (3), 466–478, https://doi.org/10.1037/0096-3445.130.3.466.
Oliva, A., & Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42 (3), 145–175.
Oliva, A., & Torralba, A. (2006). Building the gist of a scene: The role of global image features in recognition. Progress in Brain Research, 155, 23–36, https://doi.org/10.1016/S0079-6123(06)55002-2.
Oliva, A., & Torralba, A. (2007). The role of context in object recognition. Trends in Cognitive Sciences, 11 (12), 520–527, https://doi.org/10.1016/j.tics.2007.09.009.
Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42 (1), 107–123, https://doi.org/10.1016/S0042-6989(01)00250-4.
Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8 (12), 976–987, https://doi.org/10.1038/nrn2277.
Peelen, M. V., & Caramazza, A. (2012). Conceptual object representations in human anterior temporal cortex. The Journal of Neuroscience, 32 (45), 15728–15736, https://doi.org/10.1523/JNEUROSCI.1953-12.2012.
Rice, G. E., Watson, D. M., Hartley, T., & Andrews, T. J. (2014). Low-level image properties of visual objects predict patterns of neural response across category-selective regions of the ventral visual pathway. Journal of Neuroscience, 34 (26), 8837–8844, https://doi.org/10.1523/JNEUROSCI.5265-13.2014.
Sagi, D. (1988). The combination of spatial frequency and orientation is effortlessly perceived. Perception & Psychophysics, 43 (6), 601–603, https://doi.org/10.3758/BF03207749.
Sha, L., Haxby, J. V., Abdi, H., Guntupalli, J. S., Oosterhof, N. N., Halchenko, Y. O., & Connolly, A. C. (2015). The animacy continuum in the human ventral vision pathway. Journal of Cognitive Neuroscience, 27 (4), 665–678, https://doi.org/10.1162/jocn.
Smith, S. L. (1962). Color coding and visual search. Journal of Experimental Psychology, 64 (5), 434–440, https://doi.org/10.1037/h0047634.
Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6 (2), 174–215.
Summerfield, C., Egner, T., Mangels, J., & Hirsch, J. (2006). Mistaking a house for a face: Neural correlates of misperception in healthy humans. Cerebral Cortex, 16 (4), 500–508, https://doi.org/10.1093/cercor/bhi129.
Taylor, K. I., Devereux, B. J., Acres, K., Randall, B., & Tyler, L. K. (2012). Contrasting effects of feature-based statistics on the categorisation and basic-level identification of visual objects. Cognition, 122 (3), 363–374, https://doi.org/10.1016/j.cognition.2011.11.001.
Theeuwes, J. (1991). Cross-dimensional perceptual selectivity. Perception & Psychophysics, 50 (2), 184–193, https://doi.org/10.3758/BF03212219.
Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51 (6), 599–606, https://doi.org/10.3758/BF03211656.
Theeuwes, J. (1993). Visual selective attention: A theoretical analysis. Acta Psychologica, 83 (2), 93–154, https://doi.org/10.1016/0001-6918(93)90042-P.
Theeuwes, J. (1995). Abrupt luminance change pops out; abrupt color change does not. Perception & Psychophysics, 57 (5), 637–644, https://doi.org/10.3758/BF03213269.
Torralba, A., & Oliva, A. (2003). Statistics of natural image categories. Network: Computation in Neural Systems, 14, 391–412, https://doi.org/10.1088/0954-898X.
Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136, https://doi.org/10.1016/0010-0285(80)90005-5.
Tyler, L. K., Bright, P., Dick, P., Tavares, P., Pilgrim, L., Fletcher, P.,… Moss, H. (2003). Do semantic categories activate distinct cortical regions? Evidence for a distributed neural semantic system. Cognitive Neuropsychology, 20 (3–6), 541–559, https://doi.org/10.1080/02643290244000211.
Wang, S., Tsuchiya, N., New, J., Hurlemann, R., & Adolphs, R. (2015). Preferential attention to animals and people is independent of the amygdala. Social Cognitive and Affective Neuroscience, 10 (3), 371–380, https://doi.org/10.1093/scan/nsu065.
Watson, D. M., Hartley, T., & Andrews, T. J. (2014). Patterns of response to visual scenes are linked to the low-level properties of the image. NeuroImage, 99, 402–410, https://doi.org/10.1016/j.neuroimage.2014.05.045.
Wichmann, F. A., Drewes, J., Rosas, P., & Gegenfurtner, K. R. (2010). Animal detection in natural scenes: Critical features revisited. Journal of Vision, 10 (4): 6, 1–27, https://doi.org/10.1167/10.4.6.
Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: The SHINE toolbox. Behavior Research Methods, 42 (3), 671–684, https://doi.org/10.3758/BRM.42.3.671.
Wolfe, J. M., & Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5 (6), 495–501, https://doi.org/10.1038/nrn1411.
Wolfe, J. M., & Horowitz, T. S. (2017). Five factors that guide attention in visual search. Nature Human Behaviour, 1:0058, https://doi.org/10.1038/s41562-017-0058.
Yang, J., Wang, A., Yan, M., Zhu, Z., Chen, C., & Wang, Y. (2012). Distinct processing for pictures of animals and objects: Evidence from eye movements. Emotion, 12 (3), 540–551, https://doi.org/10.1037/a0026848.
Yantis, S. (1993). Stimulus-driven attentional capture and attentional control settings. Journal of Experimental Psychology-Human Perception and Performance, 19 (3), 676–681, https://doi.org/10.1037//0096-1523.19.3.676.
Zachariou, V., Del Giacco, A. C., Ungerleider, L. G., & Yue, X. (2018). Bottom-up processing of curvilinear visual features is sufficient for animate/inanimate object categorization. Journal of Vision, 18 (12): 3, 1–12, https://doi.org/10.1167/18.12.3.
Appendix
An exemplar of each of the three categories (animals, man-made objects, fruits/vegetables) is shown in Figure A1. 
Figure A1
 
An exemplar of each of the three categories: animals, man-made objects, and fruits/vegetables.
Figure 1
 
Sample images of animals, man-made objects, and fruits/vegetables used in Experiments 1A (A) and 1B (B). In all three categories, the overall shape of half of the items was elongated, whereas the overall shape of the remaining items was round.
Figure 2
 
Pair-wise dissimilarity (squared Euclidean distance) of gist statistics showed significant differences between elongated and round shapes in each category, but no significant differences across animals, man-made objects, and fruits/vegetables for either shape.
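The dissimilarity measure reported in Figure 2 is the squared Euclidean distance between gist descriptor vectors. A minimal sketch of how such pairwise dissimilarities can be computed is given below; the descriptor vectors here are illustrative placeholders, not the actual gist statistics of the stimuli:

```python
import numpy as np

def pairwise_sq_euclidean(descriptors):
    """Pairwise squared Euclidean distances between row vectors.

    descriptors: (n_images, n_features) array of gist-style
    descriptor vectors (placeholder values here, not real stimuli).
    Returns an (n_images, n_images) symmetric distance matrix.
    """
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 * (a . b)
    sq_norms = np.sum(descriptors ** 2, axis=1)
    dists = sq_norms[:, None] + sq_norms[None, :] - 2 * descriptors @ descriptors.T
    return np.maximum(dists, 0)  # clip tiny negatives from floating-point rounding

# Toy example: four 3-dimensional "descriptors"
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 2.0, 0.0]])
D = pairwise_sq_euclidean(X)
# D[0, 1] == 1.0, D[0, 2] == 4.0, D[0, 3] == 5.0
```

In an analysis like Figure 2's, such a matrix would be computed over the stimulus set, with within-category and between-category cells then compared statistically.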
Figure 3
 
Schematic sample displays with a set size of 6 for the target present–distractor absent, target present–distractor present, target absent–distractor absent, and target absent–distractor present conditions for Experiment 1A. Half of the participants were asked to search for animals; the other half were asked to search for man-made objects.
Figure 4
 
Mean correct response times on target-present trials averaged across items, as a function of target category, distractor presence, and set size for Experiments 1A and 1B (bold lines). Error bars represent the 95% confidence intervals of the within-subject interaction between distractor presence and set size. Mean correct response times on target-present trials for individual items, averaged across distractor-present and distractor-absent trials, are also shown.
Figure 5
 
Mean correct response times on target-present trials averaged across items, as a function of target category and set size for Experiments 2A and 2B (bold lines). Error bars represent the 95% confidence intervals of the within-subject interaction of target category and set size. Mean correct response times on target-present trials for individual items are also shown.
Table 1
 
Mean sensitivity (d′) and standard deviations (in parentheses) as a function of target category, distractor presence, and set size for Experiments 1A and 1B.
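The sensitivity measure d′ reported in Tables 1 and 2 is conventionally computed as d′ = z(H) − z(F), where H and F are the hit and false-alarm rates and z is the inverse of the standard normal cumulative distribution function. A minimal sketch follows; the rates are made-up numbers, and the 1/(2N) correction for extreme rates is one common convention, not necessarily the one used in this study:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_trials=None):
    """d' = z(hit rate) - z(false-alarm rate).

    If n_trials is given, rates of exactly 0 or 1 are nudged by
    1/(2N), a common correction (the paper's exact convention may
    differ), so the inverse normal CDF stays finite.
    """
    z = NormalDist().inv_cdf
    if n_trials is not None:
        lo, hi = 1 / (2 * n_trials), 1 - 1 / (2 * n_trials)
        hit_rate = min(max(hit_rate, lo), hi)
        fa_rate = min(max(fa_rate, lo), hi)
    return z(hit_rate) - z(fa_rate)

# Illustrative values only: symmetric 90% hits / 10% false alarms
print(round(d_prime(0.9, 0.1), 3))  # -> 2.563
```

Higher d′ indicates better discrimination of target-present from target-absent displays, independent of response bias.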
Table 2
 
Mean sensitivity (d′) and standard deviations (in parentheses) as a function of target category and set size for Experiments 2A and 2B.