Vision Sciences Society Annual Meeting Abstract  |  September 2015
Journal of Vision, Volume 15, Issue 12
Animate shape features influence high-level animate categorization
Author Affiliations
  • Abla Alaoui Soce
    Harvard University
  • Bria Long
    Harvard University
  • George Alvarez
    Harvard University
Journal of Vision September 2015, Vol. 15, 1159.

Citation: Abla Alaoui Soce, Bria Long, George Alvarez; Animate shape features influence high-level animate categorization. Journal of Vision 2015;15(12):1159.


The distinction between animate and inanimate entities is fundamental at both the cognitive and neural levels (e.g., Mahon & Caramazza, 2009; Konkle & Caramazza, 2013). Previous work suggests that animate and inanimate entities have consistent perceptual differences that can be extracted at early stages of perceptual processing (Long et al., in prep). Do these visual features feed forward to activate high-level, conceptual representations? In Experiment 1, we developed a flanker interference task in which recognizable images of animals and objects influenced reaction time on a word categorization task. In Experiment 2, we used the same paradigm with textures that were unrecognizable at the basic-category level but preserved statistical features of the original images (created by coercing white noise to match the low- and mid-level statistics of animal and object images; Freeman & Simoncelli, 2011). On each trial, participants categorized a written word (e.g., ‘FERRET’) as either an ‘animal’ or an ‘object’. Words were presented concurrently with a distractor image that was either congruent (e.g., a picture of a panda) or incongruent (e.g., a picture of a tractor) with the broad category of the word. With recognizable images (Experiment 1), participants were faster at categorizing words when the distractor image belonged to the same (versus a different) category (Congruent = 735.33 ms, Incongruent = 762.14 ms, F(1,15) = 41.539, p < 0.001). With texture images (Experiment 2), participants were again significantly faster at categorizing words when the textures were generated from images of the same broad category (Congruent = 728.76 ms, Incongruent = 744.12 ms, F(1,15) = 22.010, p < 0.001). The textures were unrecognizable at the basic level (average identifiability = 3.5%), but with unlimited viewing time they could be classified as animate versus inanimate (d′ = 1.01).
These results suggest that animate and inanimate entities differ in their perceptual features, and that these features feed forward to automatically activate the conceptual representations of animate and inanimate entities.

Meeting abstract presented at VSS 2015
