Xiaokun Xu, Manan Shah, Irving Biederman; Coding of Visual Stimuli for Size and Animacy. Journal of Vision 2013;13(9):670. doi: 10.1167/13.9.670.
Subjects performed a verification task in which they judged whether a picture matched a preceding target word. Interest centered on negative trials, where the picture could differ from the target in animacy and/or referential size. Although the task required processing of neither variable, given a target "ELEPHANT," pictures that matched it in animacy (e.g., HAMSTER) or referential size (e.g., TANK) produced reliably longer RTs and higher error rates, and the cost of a match on both dimensions (e.g., HIPPOPOTAMUS) was additive with the two separate effects. This additivity suggests that the size coding of visual entities is independent of their animacy. Recent fMRI studies have revealed a partial overlap between maps for size (large vs. small) and animacy (animate vs. inanimate) in human occipito-temporal cortex, suggesting the possibility of separate cortical areas for coding size and animacy (Konkle et al., 2012a, b; Mahon et al., 2009; Connolly et al., 2012). A 2 (animal vs. object pictures) x 2 (large vs. small referent) block fMRI design (with an orthogonal task) found that animate and inanimate stimuli differentially activated lateral and medial occipito-temporal cortex, respectively. However, separate regions coding for size were not apparent, except that large, inanimate objects activated a region partially overlapping the parahippocampal place area (PPA). Our results suggest that an entity's semantic features for size and animacy are automatically activated, with the size feature associated with each individual entity and thus distributed throughout the animate/inanimate map. The comparison of sizes, however, may be performed in a common (likely parietal) area not associated with any category (Dehaene, 2003). Somewhat consistent with this result is Paivio's (1975) finding that judging the referential size of two pictured entities is independent of whether they come from the same vs. different superordinate (animate vs. inanimate) classes, e.g., zebra-dog vs. zebra-lamp.
Meeting abstract presented at VSS 2013