Vision Sciences Society Annual Meeting Abstract  |   July 2013
Coding of Visual Stimuli for Size and Animacy
Author Affiliations
  • Xiaokun Xu
    Department of Psychology, University of Southern California
  • Manan Shah
    Neuroscience Program, University of Southern California
  • Irving Biederman
    Department of Psychology, University of Southern California
    Neuroscience Program, University of Southern California
Journal of Vision July 2013, Vol. 13, 670. doi: https://doi.org/10.1167/13.9.670
Abstract

Subjects performed a verification task in which they judged whether a picture matched a preceding target word. Interest centered on negative trials in which the picture could differ from the target in animacy and/or referential size. Although the task required processing of neither variable, given a target "ELEPHANT," images that matched it in animacy (e.g., HAMSTER) or size (e.g., TANK) produced reliably longer RTs and higher error rates, and the cost of a match on both size and animacy (HIPPOPOTAMUS) was additive with the separate effects. This additivity suggests that the size coding of visual entities is independent of their animacy. Recent fMRI studies have revealed a partial overlap between maps for size (large vs. small) and animacy (animate vs. inanimate) in human occipito-temporal cortex, suggesting the possibility of separate cortical areas for coding size and animacy (Konkle et al., 2012a, b; Mahon et al., 2009; Connolly et al., 2012). A 2 (animal vs. object pictures) x 2 (large vs. small referent) block fMRI design (with an orthogonal task) found that animate and inanimate stimuli differentially activated lateral and medial occipito-temporal cortex, respectively. However, separate regions coding for size were not apparent, except that large, inanimate objects activated a region partially overlapping the PPA. Our results suggest that an entity’s semantic features for size and animacy are automatically activated, with the size feature associated with each entity and thus distributed throughout the animate/inanimate map. The comparison of sizes, however, may be performed in a common (likely parietal) area not associated with any category (Dehaene, 2003). Somewhat consistent with this result is Paivio’s (1975) finding that judging the referential size of two images is independent of whether the entities are within vs. between superordinate (animate vs. inanimate) classes, e.g., zebra-dog vs. zebra-lamp.
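
A minimal sketch of the additive-factors logic above, stated as a linear model over RT (the predictor names and coefficients b0-b3 are illustrative, not values reported in the study):

RT = b0 + b1·(animacy match) + b2·(size match) + b3·(animacy match × size match)

Under additivity, b3 ≈ 0: the RT cost for a foil matching the target on both dimensions (HIPPOPOTAMUS) is approximately the sum of the separate animacy-match and size-match costs.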

Meeting abstract presented at VSS 2013
