October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
Mid-level feature differences support early EEG-decoding of animacy and object size distinctions
Author Affiliations
  • Ruosi Wang
    Harvard University
  • Daniel Janini
    Harvard University
  • Aylin Kallmayer
    Goethe University Frankfurt
  • Talia Konkle
    Harvard University
Journal of Vision October 2020, Vol.20, 738. doi:https://doi.org/10.1167/jov.20.11.738
Human object-selective cortex shows a large-scale organization by the high-level properties of animacy and object size. However, this same neural organization is evoked when viewing “texform” stimuli—unrecognizable stimuli that preserve some texture and coarse form information from the original images (Long, Chen & Konkle, 2018). These results suggest the high-level categorical organization is driven largely by differences in mid-level feature tuning—the kind of features that would be detected early in visual processing. However, fMRI studies obscure timing information, so it is also possible that the animacy and object-size response differences to texforms were driven more by feedback and/or slower recurrent connections, perhaps reflecting automatic processes that impose higher-level interpretations of what the texforms might be, rather than mid-level feature tuning per se. To tease these possibilities apart, we measured neural responses over time using electroencephalography (EEG) and leveraged decoding analyses (n=17). We found successful animacy decoding for both original and texform images, though decoding was weaker for texforms. Critically, the distinguishability between neural responses to animals and objects peaked at around the same time for texforms and original images (original: 186 ms, texform: 176 ms). Further, a classifier trained to decode animacy from neural responses to texforms could accurately classify neural responses to original images (cross-decoding: 176 ms). These results were also evident, to a weaker degree, for object-size classification (original: 156 ms, texform: 151 ms, cross-decoding: 152 ms). Taken together, these results demonstrate that mid-level featural differences underlie much of the early neural responses that distinguish animals from objects and big from small things.
This work is in line with the idea that high-level animacy and object size properties in the visual system reflect tuning to a mid-level of representation available in an early feedforward pass of visual processing.
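The cross-decoding logic described above—train a per-timepoint classifier on responses to texforms, then test it on responses to original images—can be sketched on simulated data. This is an illustrative sketch only, not the authors' analysis code: the epoch dimensions, signal window, signal strengths, and the nearest-centroid classifier are all assumptions chosen for simplicity.

```python
# Illustrative sketch (not the authors' analysis): time-resolved cross-decoding
# on simulated EEG data. A nearest-centroid classifier is trained on "texform"
# trials and tested on "original" trials at each timepoint. All dimensions,
# latencies, and signal strengths here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 60, 32, 50   # hypothetical epoch dimensions
labels = np.repeat([0, 1], n_trials // 2)    # 0 = animal, 1 = object
peak_window = slice(20, 35)                  # timepoints carrying the signal

# A shared channel pattern separates the classes in both conditions, standing
# in for common mid-level feature tuning; texforms get a weaker copy of it.
pattern = rng.standard_normal(n_channels)

def simulate(strength):
    """Return trials x channels x times data with a class signal mid-epoch."""
    x = rng.standard_normal((n_trials, n_channels, n_times))
    x[labels == 0, :, peak_window] += strength * pattern[:, None]
    x[labels == 1, :, peak_window] -= strength * pattern[:, None]
    return x

original = simulate(strength=0.6)
texform = simulate(strength=0.3)   # weaker signal, same underlying pattern

def cross_decode(train_x, train_y, test_x, test_y):
    """Fit per-timepoint nearest-centroid classifiers; test on held-out data."""
    acc = np.empty(n_times)
    for t in range(n_times):
        c0 = train_x[train_y == 0, :, t].mean(axis=0)
        c1 = train_x[train_y == 1, :, t].mean(axis=0)
        d0 = np.linalg.norm(test_x[:, :, t] - c0, axis=1)
        d1 = np.linalg.norm(test_x[:, :, t] - c1, axis=1)
        acc[t] = ((d1 < d0).astype(int) == test_y).mean()
    return acc

# Train on texforms, test on originals: because the class signal is shared,
# accuracy peaks inside the simulated signal window, mirroring the study's
# cross-decoding result.
acc = cross_decode(texform, labels, original, labels)
print(f"peak accuracy {acc.max():.2f} at timepoint {acc.argmax()}")
```

In the real analysis the classifier would be fit to multichannel EEG amplitudes at each sample and peak latencies read off the resulting accuracy time course; here the nearest-centroid rule merely makes the train-on-texform, test-on-original transfer concrete.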

