Abla Alaoui Soce, Bria Long, George Alvarez; Animate shape features influence high-level animate categorization. Journal of Vision 2015;15(12):1159. doi: 10.1167/15.12.1159.
© 2017 Association for Research in Vision and Ophthalmology.
The distinction between animate and inanimate entities is fundamental at both the cognitive and neural levels (e.g., Mahon & Caramazza, 2009; Konkle & Caramazza, 2013). Previous work suggests that animate and inanimate entities have consistent perceptual differences that can be extracted at early stages of perceptual processing (Long et al., in prep). Do these visual features feed forward to activate high-level, conceptual representations? In Experiment 1, we developed a flanker interference task in which recognizable images of animals and objects influenced reaction time on a word categorization task. In Experiment 2, we used the same paradigm with textures that were unrecognizable at the basic-category level but preserved statistical features of the original images (generated by coercing white noise to match the low- and mid-level statistics of animal and object images; Freeman & Simoncelli, 2011). On each trial, participants categorized a written word (e.g., ‘FERRET’) as either an ‘animal’ or an ‘object’. Words were presented concurrently with a distractor image that was either congruent (e.g., a picture of a panda) or incongruent (e.g., a picture of a tractor) with the broad category of the word. With recognizable images (Experiment 1), participants were faster at categorizing words when the distractor image belonged to the same (versus a different) category (congruent = 735.33 ms, incongruent = 762.14 ms; F(1,15) = 41.539, p < 0.001). With texture images (Experiment 2), participants were again significantly faster at categorizing words when the textures were derived from images of the same broad category (congruent = 728.76 ms, incongruent = 744.12 ms; F(1,15) = 22.010, p < 0.001). The textures were unrecognizable at the basic level (average identifiability = 3.5%), but with unlimited viewing time they could be classified as animate versus inanimate (d′ = 1.01).
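The d′ value above is the standard signal-detection sensitivity index, computed as the difference between the z-transformed hit rate and false-alarm rate (e.g., treating "responded animate" to an animal-derived texture as a hit). A minimal sketch of that calculation; the example rates are hypothetical, not the actual rates from the experiment:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for illustration: a 70% hit rate with a 30%
# false-alarm rate yields a sensitivity in the vicinity of the
# reported value.
print(round(d_prime(0.70, 0.30), 2))  # ≈ 1.05
```

Note that rates of exactly 0 or 1 must be corrected (e.g., by a 1/(2N) adjustment) before the inverse-CDF transform, which is undefined at those extremes.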
These results suggest that animate and inanimate entities differ in their perceptual features, and that these features feed forward to automatically activate the conceptual representations of animate and inanimate entities.
Meeting abstract presented at VSS 2015