In Experiment 1, when images were controlled for a variety of low-level perceptual features, visual search was more efficient when distractors differed from targets in animacy. These results may indicate that animates and inanimates differ in mid-level perceptual features that the visual system can extract for more efficient search. However, these images are also identifiable at the basic level and thus contain rich semantic information. Many researchers have proposed that semantic information can guide attention and eye movements during visual search (Becker, Pashler, & Lubin,
2007; Bonitz & Gordon, 2008; Hwang, Wang, & Pomplun,
2008; Hwang, Want, & Pomplun,
2011; Loftus & Mackworth,
1978; Underwood, Templeman, Lamming, & Foulsham,
2008; Wu, Wick, Pomplun,
2014; Xu, Jiang, Wang, Kankanhalli, & Zhao,
2014). Thus, although other studies have found evidence against semantic guidance (De Graef, Christiaens, & d'Ydewalle,
1990; Henderson, Weeks, Jr., & Hollingworth,
1999), it is important to address the possibility that search on mixed-animacy displays was more efficient solely because the distractors differed from the target in semantic category. In Experiment 2, we asked directly whether animates and inanimates differ in mid-level perceptual features by creating unrecognizable versions of the same stimuli used in Experiment 1. Specifically, we created
texforms, stimuli that preserve some texture and form information but obscure basic-level identification (Long et al.,
2016). To create these texforms, we coerced white noise to have the same first- and second-order statistics as the original image (Freeman & Simoncelli,
2011).
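As a rough illustration of this kind of synthesis, the Python sketch below coerces white noise toward two simple statistics of a target image: its pixel histogram (a first-order statistic) and its Fourier amplitude spectrum, which fixes the image's autocorrelation (a second-order statistic). This is a toy stand-in, not the Freeman and Simoncelli (2011) model itself, which matches a much richer set of joint wavelet statistics; the function and parameter names are our own.

```python
import numpy as np

def noise_with_image_statistics(image, n_iter=10, seed=0):
    """Iteratively impose an image's pixel histogram (first-order) and
    Fourier amplitude spectrum (second-order) on a white-noise array.

    A minimal sketch of statistics-matching texture synthesis; the actual
    texform procedure matches many more joint statistics."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(image.shape)
    target_amplitude = np.abs(np.fft.fft2(image))   # second-order target
    sorted_pixels = np.sort(image.ravel())          # first-order target
    for _ in range(n_iter):
        # Impose the image's Fourier amplitude while keeping the noise's
        # random phase, which preserves the image's autocorrelation.
        phase = np.angle(np.fft.fft2(noise))
        noise = np.real(np.fft.ifft2(target_amplitude * np.exp(1j * phase)))
        # Impose the image's pixel histogram by rank matching: each noise
        # value is replaced by the image pixel of the same rank.
        ranks = np.argsort(np.argsort(noise.ravel()))
        noise = sorted_pixels[ranks].reshape(image.shape)
    return noise
```

Because the two constraints interact, the procedure alternates between them for several iterations; the result shares the original image's coarse spectral and intensity structure while its phase, and hence its recognizable form, remains scrambled.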